Compare commits


2 Commits

Author SHA1 Message Date
Gordon Lam (SH)
30bbcf32a4 chore(prompts): remove model references from prompt files 2026-01-21 17:29:32 +08:00
Gordon Lam
5422bc31bb docs(prompts): Enhance release notes generation and implement GitHub issue tools (#44762)
<!-- Enter a brief description/summary of your PR here. What does it
fix/what does it change/how was it tested (even manually, if necessary)?
-->
## Summary of the Pull Request
This update improves the release notes generation process by
restructuring documentation for clarity and enhancing workflows. It
introduces new tools for handling GitHub issue images and attachments,
along with automated tests to ensure functionality.

<!-- Please review the items on the PR checklist before submitting-->
## PR Checklist

- [ ] Closes: N/A
- [ ] **Communication:** I've discussed this with core contributors
already. If the work hasn't been agreed, this work might be rejected
- [x] **Tests:** Added/updated and all pass
- [ ] **Localization:** All end-user-facing strings can be localized
- [x] **Dev docs:** Added/updated
- [ ] **New binaries:** Added on the required places
- [ ] [JSON for
signing](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ESRPSigning_core.json)
for new binaries
- [ ] [WXS for
installer](https://github.com/microsoft/PowerToys/blob/main/installer/PowerToysSetup/Product.wxs)
for new binaries and localization folder
- [ ] [YML for CI
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ci/templates/build-powertoys-steps.yml)
for new test projects
- [ ] [YML for signed
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/release.yml)
- [x] **Documentation updated:** If checked, please file a pull request
on [our docs
repo](https://github.com/MicrosoftDocs/windows-uwp/tree/docs/hub/powertoys)
and link it here: N/A

<!-- Provide a more detailed description of the PR, other things fixed,
or any additional comments/features here -->
## Detailed Description of the Pull Request / Additional comments
The changes include updates to the release notes summarization process
and workflow documentation for better clarity. New scripts for managing
PR diffs, grouping, and milestone assignments have been added.
Additionally, tools for downloading images and attachments from GitHub
issues have been implemented, ensuring a more streamlined process.

<!-- Describe how you validated the behavior. Add automated tests
wherever possible, but list manual validation steps taken as well -->
## Validation Steps Performed
Automated tests were added for the new GitHub issue tools, and all tests
passed successfully. Manual validation included testing the new
workflows and ensuring the documentation accurately reflects the changes
made.
2026-01-21 16:18:57 +08:00
61 changed files with 3364 additions and 2510 deletions

View File

@@ -32,6 +32,7 @@ advfirewall
AFeature
affordances
AFX
agentskills
AGGREGATABLE
AHK
AHybrid
@@ -219,6 +220,7 @@ CIELCh
cim
CImage
cla
claude
CLASSDC
classmethod
CLASSNOTAVAILABLE
@@ -1040,6 +1042,7 @@ mmi
mmsys
mobileredirect
mockapi
modelcontextprotocol
MODALFRAME
MODESPRUNED
MONITORENUMPROC
@@ -1737,6 +1740,7 @@ STICKYKEYS
sticpl
storelogo
stprintf
streamable
streamjsonrpc
STRINGIZE
stringtable
@@ -1756,6 +1760,7 @@ SUBMODULEUPDATE
subresource
Superbar
sut
swe
svchost
SVGIn
SVGIO
@@ -1962,6 +1967,7 @@ visualeffects
vkey
vmovl
VMs
vnd
vorrq
VOS
vpaddlq

92
.github/agents/FixIssue.agent.md vendored Normal file
View File

@@ -0,0 +1,92 @@
---
description: 'Implements fixes for GitHub issues based on implementation plans'
name: 'FixIssue'
tools: ['read', 'edit', 'search', 'execute', 'agent', 'usages', 'problems', 'changes', 'testFailure', 'github/*', 'github.vscode-pull-request-github/*']
argument-hint: 'GitHub issue number (e.g., #12345)'
infer: true
---
# FixIssue Agent
You are an **IMPLEMENTATION AGENT** specialized in executing implementation plans to fix GitHub issues.
## Identity & Expertise
- Expert at translating plans into working code
- Deep knowledge of PowerToys codebase patterns and conventions
- Skilled at writing tests, handling edge cases, and validating builds
- You follow plans precisely while handling ambiguity gracefully
## Goal
For the given **issue_number**, execute the implementation plan and produce:
1. Working code changes applied directly to the repository
2. `Generated Files/issueFix/{{issue_number}}/pr-description.md` — PR-ready description
3. `Generated Files/issueFix/{{issue_number}}/manual-steps.md` — Only if human action needed
## Core Directive
**Follow the implementation plan in `Generated Files/issueReview/{{issue_number}}/implementation-plan.md` as the single source of truth.**
If the plan doesn't exist, invoke PlanIssue agent first via `runSubagent`.
## Working Principles
- **Plan First**: Read and understand the entire implementation plan before coding
- **Validate Always**: For each change: Edit → Build → Verify → Commit. Never proceed if build fails.
- **Atomic Commits**: Each commit must be self-contained, buildable, and meaningful
- **Ask, Don't Guess**: When uncertain, insert `// TODO(Human input needed): <question>` and document in manual-steps.md
## Strategy
**Core Loop** — For every unit of work:
1. **Edit**: Make focused changes to implement one logical piece
2. **Build**: Run `tools\build\build.cmd` and check for exit code 0
3. **Verify**: Use `problems` tool for lint/compile errors; run relevant tests
4. **Commit**: Only after build passes — use `.github/prompts/create-commit-title.prompt.md`
Never skip steps. Never commit broken code. Never proceed if build fails.
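For intuition, a minimal PowerShell sketch of the build gate (assuming `build.cmd` sets a non-zero exit code on failure; the commit title is a placeholder):
```powershell
# Sketch of the gate between Build and Commit; not the agent's literal procedure.
& 'tools\build\build.cmd'
if ($LASTEXITCODE -ne 0) {
    throw 'Build failed - fix errors before committing.'  # never commit broken code
}
git add -A
git commit -m '<title from create-commit-title.prompt.md>'  # placeholder title
```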
**Feature-by-Feature E2E**: For big scenarios with multiple features, complete each feature end-to-end before moving to the next:
- Settings UI → Functionality → Logging → Tests (for Feature 1)
- Then repeat for Feature 2
- Benefits: Each feature is self-contained, testable, easier to review, can ship incrementally
**Large Changes** (3+ files or cross-module):
- Use `tools\build\New-WorktreeFromBranch.ps1` for isolated worktrees
- Create separate branches per feature (e.g., `issue/{{issue_number}}-export`, `issue/{{issue_number}}-import`)
- Merge feature branches back after each is validated
**Recovery**: If implementation goes wrong (see the git sketch after this list):
- Create a checkpoint branch before risky changes
- On failure: branch from last known-good state, cherry-pick working changes, abandon broken branch
- For complex changes, consider multiple smaller PRs
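A hedged sketch of that recovery flow in plain git (branch names and the issue number are examples, not a prescribed convention):
```powershell
# Checkpoint before risky work, then recover by restarting from known-good.
git branch checkpoint/issue-12345                          # known-good snapshot
# ... risky changes fail ...
git checkout -b issue/12345-retry checkpoint/issue-12345   # restart from checkpoint
$goodSha = 'abc1234'                                       # example SHA of a working commit
git cherry-pick $goodSha                                   # keep only what worked
git branch -D issue/12345-broken                           # abandon the broken branch
```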
## Guidelines
**DO**:
- Follow the plan exactly
- Validate build before every commit — **NEVER commit broken code**
- Use `.github/prompts/create-commit-title.prompt.md` for commit messages
- Add comprehensive tests for changed behavior
- Use worktrees for large changes (3+ files or cross-module)
- Document deviations from plan
**DON'T**:
- Implement everything in a single massive commit
- Continue after a failed build without fixing
- Make drive-by refactors outside issue scope
- Skip tests for behavioral changes
- Add noisy logs in hot paths
- Break IPC/JSON contracts without updating both sides
- Introduce dependencies without documenting in NOTICE.md
## References
- [Build Guidelines](../../tools/build/BUILD-GUIDELINES.md) — Build commands and validation
- [Coding Style](../../doc/devdocs/development/style.md) — Formatting and conventions
- [AGENTS.md](../../AGENTS.md) — Full contributor guide
## Parameter
- **issue_number**: Extract from `#123`, `issue 123`, or plain number. If missing, ask user.
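For illustration, one way the issue-number normalization could look in PowerShell (a sketch, not the agent runtime's actual parser):
```powershell
# Accepts '#123', 'issue 123', or a plain number; returns $null when absent.
function Get-IssueNumber {
    param([string]$Text)
    if ($Text -match '(?i)(?:#|\bissue\s+)?(\d+)') { return [int]$Matches[1] }
    return $null  # caller should ask the user for the number
}
Get-IssueNumber 'issue 123'   # -> 123
```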

65
.github/agents/PlanIssue.agent.md vendored Normal file
View File

@@ -0,0 +1,65 @@
---
description: 'Analyzes GitHub issues to produce overview and implementation plans'
name: 'PlanIssue'
tools: ['execute', 'read', 'edit', 'search', 'web', 'github/*', 'agent', 'github-artifacts/*', 'todo']
argument-hint: 'GitHub issue number (e.g., #12345)'
handoffs:
- label: Start Implementation
agent: FixIssue
prompt: 'Fix issue #{{issue_number}} using the implementation plan'
- label: Open Plan in Editor
agent: agent
prompt: 'Open Generated Files/issueReview/{{issue_number}}/overview.md and implementation-plan.md'
showContinueOn: false
send: true
infer: true
---
# PlanIssue Agent
You are a **PLANNING AGENT** specialized in analyzing GitHub issues and producing comprehensive planning documentation.
## Identity & Expertise
- Expert at issue triage, priority scoring, and technical analysis
- Deep knowledge of PowerToys architecture and codebase patterns
- Skilled at breaking down problems into actionable implementation steps
- You research thoroughly before planning, gathering 80% confidence before drafting
## Goal
For the given **issue_number**, produce two deliverables:
1. `Generated Files/issueReview/{{issue_number}}/overview.md` — Issue analysis with scoring
2. `Generated Files/issueReview/{{issue_number}}/implementation-plan.md` — Technical implementation plan
3. `Generated Files/issueReview/{{issue_number}}/logs/**` — Logs capturing your root-cause diagnosis, research steps, and reasoning
The two planning documents are the core interaction with the end user; if you cannot produce them, you fail the task. Each time, check whether the files already exist or have been modified by the end user, without assuming you know their contents.
## Core Directive
**Follow the template in `.github/prompts/review-issue.prompt.md` exactly.** Read it first, then apply every section as specified.
- Fetch issue details: reactions, comments, linked PRs, images, logs
- Search related code and similar past fixes
- Ask clarifying questions when ambiguous
- Identify subject matter experts via git history
<stopping_rules>
You are a PLANNING agent, NOT an implementation agent.
STOP if you catch yourself:
- Writing code or editing source files outside `Generated Files/issueReview/`
- Making assumptions without researching
- Skipping the scoring/assessment phase
Plans describe what the USER or FixIssue agent will execute later.
</stopping_rules>
## References
- [Review Issue Prompt](../prompts/review-issue.prompt.md) — Template for plan structure
- [Architecture Overview](../../doc/devdocs/core/architecture.md) — System design context
- [AGENTS.md](../../AGENTS.md) — Full contributor guide
## Parameter
- **issue_number**: Extract from `#123`, `issue 123`, or plain number. If missing, ask user.

View File

@@ -0,0 +1,261 @@
---
description: 'Guidelines for creating high-quality Agent Skills for GitHub Copilot'
applyTo: '**/.github/skills/**/SKILL.md, **/.claude/skills/**/SKILL.md'
---
# Agent Skills File Guidelines
Instructions for creating effective and portable Agent Skills that enhance GitHub Copilot with specialized capabilities, workflows, and bundled resources.
## What Are Agent Skills?
Agent Skills are self-contained folders with instructions and bundled resources that teach AI agents specialized capabilities. Unlike custom instructions (which define coding standards), skills enable task-specific workflows that can include scripts, examples, templates, and reference data.
Key characteristics:
- **Portable**: Works across VS Code, Copilot CLI, and Copilot coding agent
- **Progressive loading**: Only loaded when relevant to the user's request
- **Resource-bundled**: Can include scripts, templates, examples alongside instructions
- **On-demand**: Activated automatically based on prompt relevance
## Directory Structure
Skills are stored in specific locations:
| Location | Scope | Recommendation |
|----------|-------|----------------|
| `.github/skills/<skill-name>/` | Project/repository | Recommended for project skills |
| `.claude/skills/<skill-name>/` | Project/repository | Legacy, for backward compatibility |
| `~/.github/skills/<skill-name>/` | Personal (user-wide) | Recommended for personal skills |
| `~/.claude/skills/<skill-name>/` | Personal (user-wide) | Legacy, for backward compatibility |
Each skill **must** have its own subdirectory containing at minimum a `SKILL.md` file.
## Required SKILL.md Format
### Frontmatter (Required)
```yaml
---
name: webapp-testing
description: Toolkit for testing local web applications using Playwright. Use when asked to verify frontend functionality, debug UI behavior, capture browser screenshots, check for visual regressions, or view browser console logs. Supports Chrome, Firefox, and WebKit browsers.
license: Complete terms in LICENSE.txt
---
```
| Field | Required | Constraints |
|-------|----------|-------------|
| `name` | Yes | Lowercase, hyphens for spaces, max 64 characters (e.g., `webapp-testing`) |
| `description` | Yes | Clear description of capabilities AND use cases, max 1024 characters |
| `license` | No | Reference to LICENSE.txt (e.g., `Complete terms in LICENSE.txt`) or SPDX identifier |
### Description Best Practices
**CRITICAL**: The `description` field is the PRIMARY mechanism for automatic skill discovery. Copilot reads ONLY the `name` and `description` to decide whether to load a skill. If your description is vague, the skill will never be activated.
**What to include in description:**
1. **WHAT** the skill does (capabilities)
2. **WHEN** to use it (specific triggers, scenarios, file types, or user requests)
3. **Keywords** that users might mention in their prompts
**Good description:**
```yaml
description: Toolkit for testing local web applications using Playwright. Use when asked to verify frontend functionality, debug UI behavior, capture browser screenshots, check for visual regressions, or view browser console logs. Supports Chrome, Firefox, and WebKit browsers.
```
**Poor description:**
```yaml
description: Web testing helpers
```
The poor description fails because:
- No specific triggers (when should Copilot load this?)
- No keywords (what user prompts would match?)
- No capabilities (what can it actually do?)
### Body Content
The body contains detailed instructions that Copilot loads AFTER the skill is activated. Recommended sections:
| Section | Purpose |
|---------|---------|
| `# Title` | Brief overview of what this skill enables |
| `## When to Use This Skill` | List of scenarios (reinforces description triggers) |
| `## Prerequisites` | Required tools, dependencies, environment setup |
| `## Step-by-Step Workflows` | Numbered steps for common tasks |
| `## Troubleshooting` | Common issues and solutions table |
| `## References` | Links to bundled docs or external resources |
## Bundling Resources
Skills can include additional files that Copilot accesses on-demand:
### Supported Resource Types
| Folder | Purpose | Loaded into Context? | Example Files |
|--------|---------|---------------------|---------------|
| `scripts/` | Executable automation that performs specific operations | When executed | `helper.py`, `validate.sh`, `build.ts` |
| `references/` | Documentation the AI agent reads to inform decisions | Yes, when referenced | `api_reference.md`, `schema.md`, `workflow_guide.md` |
| `assets/` | **Static files used AS-IS** in output (not modified by the AI agent) | No | `logo.png`, `brand-template.pptx`, `custom-font.ttf` |
| `templates/` | **Starter code/scaffolds that the AI agent MODIFIES** and builds upon | Yes, when referenced | `viewer.html` (insert algorithm), `hello-world/` (extend) |
### Directory Structure Example
```
.github/skills/my-skill/
├── SKILL.md # Required: Main instructions
├── LICENSE.txt # Recommended: License terms (Apache 2.0 typical)
├── scripts/ # Optional: Executable automation
│ ├── helper.py # Python script
│ └── helper.ps1 # PowerShell script
├── references/ # Optional: Documentation loaded into context
│ ├── api_reference.md
│ ├── step1-setup.md # Detailed workflow (>3 steps)
│ └── step2-deployment.md
├── assets/ # Optional: Static files used AS-IS in output
│ ├── baseline.png # Reference image for comparison
│ └── report-template.html
└── templates/ # Optional: Starter code the AI agent modifies
├── scaffold.py # Code scaffold the AI agent customizes
└── config.template # Config template the AI agent fills in
```
> **LICENSE.txt**: When creating a skill, download the Apache 2.0 license text from https://www.apache.org/licenses/LICENSE-2.0.txt and save as `LICENSE.txt`. Update the copyright year and owner in the appendix section.
### Assets vs Templates: Key Distinction
**Assets** are static resources **consumed unchanged** in the output:
- A `logo.png` that gets embedded into a generated document
- A `report-template.html` copied as output format
- A `custom-font.ttf` applied to text rendering
**Templates** are starter code/scaffolds that **the AI agent actively modifies**:
- A `scaffold.py` where the AI agent inserts logic
- A `config.template` where the AI agent fills in values based on user requirements
- A `hello-world/` project directory that the AI agent extends with new features
**Rule of thumb**: If the AI agent reads and builds upon the file content → `templates/`. If the file is used as-is in output → `assets/`.
### Referencing Resources in SKILL.md
Use relative paths to reference files within the skill directory:
```markdown
## Available Scripts
Run the [helper script](./scripts/helper.py) to automate common tasks.
See [API reference](./references/api_reference.md) for detailed documentation.
Use the [scaffold](./templates/scaffold.py) as a starting point.
```
## Progressive Loading Architecture
Skills use three-level loading for efficiency:
| Level | What Loads | When |
|-------|------------|------|
| 1. Discovery | `name` and `description` only | Always (lightweight metadata) |
| 2. Instructions | Full `SKILL.md` body | When request matches description |
| 3. Resources | Scripts, examples, docs | Only when Copilot references them |
This means:
- Install many skills without consuming context
- Only relevant content loads per task
- Resources don't load until explicitly needed
## Content Guidelines
### Writing Style
- Use imperative mood: "Run", "Create", "Configure" (not "You should run")
- Be specific and actionable
- Include exact commands with parameters
- Show expected outputs where helpful
- Keep sections focused and scannable
### Script Requirements
When including scripts, prefer cross-platform languages:
| Language | Use Case |
|----------|----------|
| Python | Complex automation, data processing |
| pwsh | PowerShell Core scripting |
| Node.js | JavaScript-based tooling |
| Bash/Shell | Simple automation tasks |
Best practices (a minimal skeleton follows this list):
- Include help/usage documentation (`--help` flag)
- Handle errors gracefully with clear messages
- Avoid storing credentials or secrets
- Use relative paths where possible
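A minimal PowerShell skeleton illustrating those practices (comment-based help stands in for a `--help` flag; the file and parameter names are placeholders):
```powershell
<#
.SYNOPSIS
    Example skill script skeleton; run 'Get-Help ./helper.ps1' for usage.
.PARAMETER InputPath
    File to process (relative paths keep the skill portable).
#>
[CmdletBinding()]
param(
    [Parameter(Mandatory = $true)][string]$InputPath
)
try {
    if (-not (Test-Path $InputPath)) {
        throw "Input not found: $InputPath"   # clear, actionable error
    }
    Write-Verbose "Processing $InputPath"
    # ... real work here ...
}
catch {
    Write-Error $_.Exception.Message
    exit 1
}
```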
### When to Bundle Scripts
Include scripts in your skill when:
- The same code would be rewritten repeatedly by the agent
- Deterministic reliability is critical (e.g., file manipulation, API calls)
- Complex logic benefits from being pre-tested rather than generated each time
- The operation has a self-contained purpose that can evolve independently
- Testability matters — scripts can be unit tested and validated
- Predictable behavior is preferred over dynamic generation
Scripts enable evolution: even simple operations benefit from being implemented as scripts when they may grow in complexity, need consistent behavior across invocations, or require future extensibility.
### Security Considerations
- Scripts rely on existing credential helpers (no credential storage)
- Include `--force` flags only for destructive operations
- Warn users before irreversible actions
- Document any network operations or external calls
## Common Patterns
### Parameter Table Pattern
Document parameters clearly:
```markdown
| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| `--input` | Yes | - | Input file or URL to process |
| `--action` | Yes | - | Action to perform |
| `--verbose` | No | `false` | Enable verbose output |
```
## Validation Checklist
Before publishing a skill (the frontmatter checks can be automated, as sketched after this list):
- [ ] `SKILL.md` has valid frontmatter with `name` and `description`
- [ ] `name` is lowercase with hyphens, ≤64 characters
- [ ] `description` clearly states **WHAT** it does, **WHEN** to use it, and relevant **KEYWORDS**
- [ ] Body includes when to use, prerequisites, and step-by-step workflows
- [ ] SKILL.md body kept under 500 lines (split large content into `references/` folder)
- [ ] Large workflows (>5 steps) split into `references/` folder with clear links from SKILL.md
- [ ] Scripts include help documentation and error handling
- [ ] Relative paths used for all resource references
- [ ] No hardcoded credentials or secrets
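A minimal sketch of the frontmatter checks in PowerShell (assumes simple, unquoted `key: value` frontmatter; a real validator would use a YAML parser):
```powershell
function Test-SkillFrontmatter {
    param([string]$SkillMdPath)
    $lines = Get-Content $SkillMdPath
    if ($lines[0] -ne '---') { return 'Missing frontmatter' }
    $name = ($lines -match '^name:') -replace '^name:\s*', '' | Select-Object -First 1
    $desc = ($lines -match '^description:') -replace '^description:\s*', '' | Select-Object -First 1
    if (-not $name -or $name.Length -gt 64 -or $name -cnotmatch '^[a-z0-9]+(-[a-z0-9]+)*$') {
        return "Invalid name: '$name'"   # must be lowercase, hyphenated, <=64 chars
    }
    if (-not $desc -or $desc.Length -gt 1024) { return 'Missing or overlong description' }
    return 'OK'
}
```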
## Workflow Execution Pattern
When executing multi-step workflows, create a TODO list where each step references the relevant documentation:
```markdown
## TODO
- [ ] Step 1: Configure environment - see [workflow-setup.md](./references/workflow-setup.md#environment)
- [ ] Step 2: Build project - see [workflow-setup.md](./references/workflow-setup.md#build)
- [ ] Step 3: Deploy to staging - see [workflow-deployment.md](./references/workflow-deployment.md#staging)
- [ ] Step 4: Run validation - see [workflow-deployment.md](./references/workflow-deployment.md#validation)
- [ ] Step 5: Deploy to production - see [workflow-deployment.md](./references/workflow-deployment.md#production)
```
This ensures traceability and allows resuming workflows if interrupted.
## Related Resources
- [Agent Skills Specification](https://agentskills.io/)
- [VS Code Agent Skills Documentation](https://code.visualstudio.com/docs/copilot/customization/agent-skills)
- [Reference Skills Repository](https://github.com/anthropics/skills)
- [Awesome Copilot Skills](https://github.com/github/awesome-copilot/blob/main/docs/README.skills.md)

View File

@@ -0,0 +1,61 @@
---
description: 'Guidelines for shared libraries including logging, IPC, settings, DPI, telemetry, and utilities consumed by multiple modules'
applyTo: 'src/common/**'
---
# Common Libraries Shared Code Guidance
Guidelines for modifying shared code in `src/common/`. Changes here can have wide-reaching impact across the entire PowerToys codebase.
## Scope
- Logging infrastructure (`src/common/logger/`)
- IPC primitives and named pipe utilities
- Settings serialization and management
- DPI awareness and scaling utilities
- Telemetry helpers
- General utilities (JSON parsing, string helpers, etc.)
## Guidelines
### API Stability
- Avoid breaking public headers/APIs; if changed, search & update all callers
- Coordinate ABI-impacting struct/class layout changes; keep binary compatibility
- When modifying public interfaces, grep the entire codebase for usages
### Performance
- Watch perf in hot paths (hooks, timers, serialization)
- Avoid unnecessary allocations in frequently called code
- Profile changes that touch performance-sensitive areas
### Dependencies
- Ask before adding third-party deps or changing serialization formats
- New dependencies must be MIT-licensed or approved by PM team
- Add any new external packages to `NOTICE.md`
### Logging
- C++ logging uses spdlog (`Logger::info`, `Logger::warn`, `Logger::error`, `Logger::debug`)
- Initialize with `init_logger()` early in startup
- Keep hot paths quiet: no logging in tight loops or hooks
## Acceptance Criteria
- No unintended ABI breaks
- No noisy logs in hot paths
- New non-obvious symbols briefly commented
- All callers updated when interfaces change
## Code Style
- **C++**: Follow `.clang-format` in `src/`; use Modern C++ patterns per C++ Core Guidelines
- **C#**: Follow `src/.editorconfig`; enforce StyleCop.Analyzers
## Validation
- Build: `tools\build\build.cmd` from the `src/common/` folder
- Verify no ABI breaks: grep for changed function/struct names across the codebase (see the sketch below)
- Check logs: ensure no new logging in performance-critical paths
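For the ABI-break check, a sketch using `rg` (the symbol name is a placeholder):
```powershell
# Find lingering callers of a renamed or re-shaped symbol before shipping.
rg -n 'OldSymbolName' src/ --type cpp
```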

View File

@@ -0,0 +1,68 @@
---
description: 'Guidelines for Runner and Settings UI components that communicate via named pipes and manage module lifecycle'
applyTo: 'src/runner/**,src/settings-ui/**'
---
# Runner & Settings UI Core Components Guidance
Guidelines for modifying the Runner (tray/module loader) and Settings UI (configuration app). These components communicate via Windows Named Pipes using JSON messages.
## Runner (`src/runner/`)
### Scope
- Module bootstrap, hotkey management, settings bridge, update/elevation handling
### Guidelines
- If IPC/JSON contracts change, mirror updates in `src/settings-ui/**`
- Keep module discovery in `src/runner/main.cpp` in sync when adding/removing modules
- Keep startup lean: avoid blocking/network calls in early init path
- Preserve GPO & elevation behaviors; confirm no regression in policy handling
- Ask before modifying update workflow or elevation logic
### Acceptance Criteria
- Stable startup, consistent contracts, no unnecessary logging noise
## Settings UI (`src/settings-ui/`)
### Scope
- WinUI/WPF UI, communicates with Runner over named pipes; manages persisted settings schema
### Guidelines
- Don't break settings schema silently; add migration when shape changes
- If IPC/JSON contracts change, align with `src/runner/**` implementation
- Keep UI responsive: marshal to UI thread for UI-bound operations
- Reuse existing styles/resources; avoid duplicate theme keys
- Add/adjust migration or serialization tests when changing persisted settings
### Acceptance Criteria
- Schema integrity preserved, responsive UI, consistent contracts, no style duplication
## Shared Concerns
### IPC Contract Changes
When modifying the JSON message format between Runner and Settings UI (see the sketch after this list):
1. Update both `src/runner/` and `src/settings-ui/` in the same PR
2. Preserve backward compatibility where possible
3. Add migration logic for settings schema changes
4. Test both directions of communication
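As a hypothetical sketch of point 2 (the message shape and field names below are invented for illustration, not the actual PowerToys IPC schema):
```powershell
# Old senders omit 'schemaVersion'; treat absence as version 1 instead of failing.
$raw = '{"module":"FancyZones","action":"set_setting","value":true}'
$msg = $raw | ConvertFrom-Json
$version = if ($msg.PSObject.Properties['schemaVersion']) { $msg.schemaVersion } else { 1 }
"Handling schema v$version message for $($msg.module)"
```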
### Code Style
- **C++ (Runner)**: Follow `.clang-format` in `src/`
- **C# (Settings UI)**: Follow `src/.editorconfig`, use StyleCop.Analyzers
- **XAML**: Use XamlStyler or run `.\.pipelines\applyXamlStyling.ps1 -Main`
## Validation
- Build Runner: `tools\build\build.cmd` from `src/runner/`
- Build Settings UI: `tools\build\build.cmd` from `src/settings-ui/`
- Test IPC: Launch both Runner and Settings UI, verify communication works
- Schema changes: Run serialization tests if settings shape changed

View File

@@ -0,0 +1,228 @@
---
description: 'Instructions for building Model Context Protocol (MCP) servers using the TypeScript SDK'
applyTo: '**/*.ts, **/*.js, **/package.json'
---
# TypeScript MCP Server Development
## Instructions
- Use the **@modelcontextprotocol/sdk** npm package: `npm install @modelcontextprotocol/sdk`
- Import from specific paths: `@modelcontextprotocol/sdk/server/mcp.js`, `@modelcontextprotocol/sdk/server/stdio.js`, etc.
- Use `McpServer` class for high-level server implementation with automatic protocol handling
- Use `Server` class for low-level control with manual request handlers
- Use **zod** for input/output schema validation: `npm install zod@3`
- Always provide `title` field for tools, resources, and prompts for better UI display
- Use `registerTool()`, `registerResource()`, and `registerPrompt()` methods (recommended over older APIs)
- Define schemas using zod: `{ inputSchema: { param: z.string() }, outputSchema: { result: z.string() } }`
- Return both `content` (for display) and `structuredContent` (for structured data) from tools
- For HTTP servers, use `StreamableHTTPServerTransport` with Express or similar frameworks
- For local integrations, use `StdioServerTransport` for stdio-based communication
- Create new transport instances per request to prevent request ID collisions (stateless mode)
- Use session management with `sessionIdGenerator` for stateful servers
- Enable DNS rebinding protection for local servers: `enableDnsRebindingProtection: true`
- Configure CORS headers and expose `Mcp-Session-Id` for browser-based clients
- Use `ResourceTemplate` for dynamic resources with URI parameters: `new ResourceTemplate('resource://{param}', { list: undefined })`
- Support completions for better UX using `completable()` wrapper from `@modelcontextprotocol/sdk/server/completable.js`
- Implement sampling with `server.server.createMessage()` to request LLM completions from clients
- Use `server.server.elicitInput()` to request additional user input during tool execution
- Enable notification debouncing for bulk updates: `debouncedNotificationMethods: ['notifications/tools/list_changed']`
- Dynamic updates: call `.enable()`, `.disable()`, `.update()`, or `.remove()` on registered items to emit `listChanged` notifications
- Use `getDisplayName()` from `@modelcontextprotocol/sdk/shared/metadataUtils.js` for UI display names
- Test servers with MCP Inspector: `npx @modelcontextprotocol/inspector`
## Best Practices
- Keep tool implementations focused on single responsibilities
- Provide clear, descriptive titles and descriptions for LLM understanding
- Use proper TypeScript types for all parameters and return values
- Implement comprehensive error handling with try-catch blocks
- Return `isError: true` in tool results for error conditions
- Use async/await for all asynchronous operations
- Close database connections and clean up resources properly
- Validate input parameters before processing
- Use structured logging for debugging without polluting stdout/stderr
- Consider security implications when exposing file system or network access
- Implement proper resource cleanup on transport close events
- Use environment variables for configuration (ports, API keys, etc.)
- Document tool capabilities and limitations clearly
- Test with multiple clients to ensure compatibility
## Common Patterns
### Basic Server Setup (HTTP)
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import express from 'express';
const server = new McpServer({
name: 'my-server',
version: '1.0.0'
});
const app = express();
app.use(express.json());
app.post('/mcp', async (req, res) => {
const transport = new StreamableHTTPServerTransport({
sessionIdGenerator: undefined,
enableJsonResponse: true
});
res.on('close', () => transport.close());
await server.connect(transport);
await transport.handleRequest(req, res, req.body);
});
app.listen(3000);
```
### Basic Server Setup (stdio)
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
const server = new McpServer({
name: 'my-server',
version: '1.0.0'
});
// ... register tools, resources, prompts ...
const transport = new StdioServerTransport();
await server.connect(transport);
```
### Simple Tool
```typescript
import { z } from 'zod';
server.registerTool(
'calculate',
{
title: 'Calculator',
description: 'Perform basic calculations',
inputSchema: { a: z.number(), b: z.number(), op: z.enum(['+', '-', '*', '/']) },
outputSchema: { result: z.number() }
},
async ({ a, b, op }) => {
const result = op === '+' ? a + b : op === '-' ? a - b :
op === '*' ? a * b : a / b;
const output = { result };
return {
content: [{ type: 'text', text: JSON.stringify(output) }],
structuredContent: output
};
}
);
```
### Dynamic Resource
```typescript
import { ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js';
server.registerResource(
'user',
new ResourceTemplate('users://{userId}', { list: undefined }),
{
title: 'User Profile',
description: 'Fetch user profile data'
},
async (uri, { userId }) => ({
contents: [{
uri: uri.href,
text: `User ${userId} data here`
}]
})
);
```
### Tool with Sampling
```typescript
server.registerTool(
'summarize',
{
title: 'Text Summarizer',
description: 'Summarize text using LLM',
inputSchema: { text: z.string() },
outputSchema: { summary: z.string() }
},
async ({ text }) => {
const response = await server.server.createMessage({
messages: [{
role: 'user',
content: { type: 'text', text: `Summarize: ${text}` }
}],
maxTokens: 500
});
const summary = response.content.type === 'text' ?
response.content.text : 'Unable to summarize';
const output = { summary };
return {
content: [{ type: 'text', text: JSON.stringify(output) }],
structuredContent: output
};
}
);
```
### Prompt with Completion
```typescript
import { completable } from '@modelcontextprotocol/sdk/server/completable.js';
server.registerPrompt(
'review',
{
title: 'Code Review',
description: 'Review code with specific focus',
argsSchema: {
language: completable(z.string(), value =>
['typescript', 'python', 'javascript', 'java']
.filter(l => l.startsWith(value))
),
code: z.string()
}
},
({ language, code }) => ({
messages: [{
role: 'user',
content: {
type: 'text',
text: `Review this ${language} code:\n\n${code}`
}
}]
})
);
```
### Error Handling
```typescript
server.registerTool(
'risky-operation',
{
title: 'Risky Operation',
description: 'An operation that might fail',
inputSchema: { input: z.string() },
outputSchema: { result: z.string() }
},
async ({ input }) => {
try {
const result = await performRiskyOperation(input);
const output = { result };
return {
content: [{ type: 'text', text: JSON.stringify(output) }],
structuredContent: output
};
} catch (err: unknown) {
const error = err as Error;
return {
content: [{ type: 'text', text: `Error: ${error.message}` }],
isError: true
};
}
}
);
```

View File

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Generate an 80-character git commit title for the local diff'
---

View File

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Generate a PowerToys-ready pull request description from the local diff'
---

View File

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Execute the fix for a GitHub issue using the previously generated implementation plan'
---

View File

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Resolve Code scanning / check-spelling comments on the active PR'
---

View File

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Review a GitHub issue, score it (0-100), and generate an implementation plan'
---
@@ -15,8 +14,14 @@ For **#{{issue_number}}** produce:
Determine the required input {{issue_number}} from the invocation context; if anything is missing, ask for the value or note it as a gap.
# CONTEXT (brief)
Ground evidence using `gh issue view {{issue_number}} --json number,title,body,author,createdAt,updatedAt,state,labels,milestone,reactions,comments,linkedPullRequests`, and download images to better understand the issue context.
Locate source code in the current workspace; feel free to use `rg`/`git grep`. Link related issues and PRs.
Ground evidence using `gh issue view {{issue_number}} --json number,title,body,author,createdAt,updatedAt,state,labels,milestone,reactions,comments,linkedPullRequests`, then download images via the MCP `github_issue_images` tool to better understand the issue context. Finally, use the MCP `github_issue_attachments` tool to download logs, passing `extractFolder` as `Generated Files/issueReview/{{issue_number}}/logs`, and analyze any downloaded logs to identify relevant issues. Locate the source code in the current workspace (use `rg`/`git grep` as needed). Link related issues and PRs.
## When to call MCP tools
If the following MCP "github-artifacts" tools are available in the environment, use them:
- `github_issue_images`: use when the issue/PR likely contains screenshots or other visual evidence (UI bugs, glitches, design problems).
- `github_issue_attachments`: use when the issue/PR mentions attached ZIPs (PowerToysReport_*.zip, logs.zip, debug.zip) or asks to analyze logs/diagnostics. Always provide `extractFolder` as `Generated Files/issueReview/{{issue_number}}/logs`
If these tools are not available (not listed by the runtime), start the MCP server "github-artifacts" first.
# OVERVIEW.MD
## Summary

View File

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Perform a comprehensive PR review with per-step Markdown and machine-readable outputs'
---

View File

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to the Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2026 Microsoft Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,132 @@
---
name: release-note-generation
description: Toolkit for generating PowerToys release notes from GitHub milestone PRs or commit ranges. Use when asked to create release notes, summarize milestone PRs, generate changelog, prepare release documentation, request Copilot reviews for PRs, update README for a new release, manage PR milestones, or collect PRs between commits/tags. Supports PR collection by milestone or commit range, milestone assignment, grouping by label, summarization with external contributor attribution, and README version bumping.
license: Complete terms in LICENSE.txt
---
# Release Note Generation Skill
Generate professional release notes for PowerToys milestones by collecting merged PRs, requesting Copilot code reviews, grouping by label, and producing user-facing summaries.
## Output Directory
All generated artifacts are placed under `Generated Files/ReleaseNotes/` at the repository root (gitignored).
```
Generated Files/ReleaseNotes/
├── milestone_prs.json # Raw PR data from GitHub
├── sorted_prs.csv # Sorted PR list with Copilot summaries
├── prs_with_milestone.csv # Milestone assignment tracking
├── grouped_csv/ # PRs grouped by label (one CSV per label)
├── grouped_md/ # Generated markdown summaries per label
└── v{VERSION}-release-notes.md # Final consolidated release notes
```
## When to Use This Skill
- Generate release notes for a milestone
- Summarize PRs merged in a release
- Request Copilot reviews for milestone PRs
- Assign milestones to PRs missing them
- Collect PRs between two commits/tags
- Update README.md for a new version
## Prerequisites
- GitHub CLI (`gh`) installed and authenticated
- MCP Server: github-mcp-server installed
- GitHub Copilot code review enabled for the org/repo
## Required Variables
⚠️ **Before starting**, confirm `{{ReleaseVersion}}` with the user. If not provided, **ASK**: "What release version are we generating notes for? (e.g., 0.98)"
| Variable | Description | Example |
|----------|-------------|---------|
| `{{ReleaseVersion}}` | Target release version | `0.98` |
## Workflow Overview
```
┌────────────────────────────────┐
│ 1.1 Collect PRs (stable range) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 1.2 Assign Milestones │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 2.1–2.4 Label PRs (auto + human) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 3.1 Request Reviews (Copilot) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 3.2 Refresh PR data │
│ (CopilotSummary) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 3.3 Group by label │
│ (grouped_csv) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 4.1 Summarize (grouped_md) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 4.2 Final notes (v{VERSION}.md) │
└────────────────────────────────┘
```
| Step | Action | Details |
|------|--------|---------|
| 1.1 | Collect PRs | From previous release tag on `stable` branch → `sorted_prs.csv` |
| 1.2 | Assign Milestones | Ensure all PRs have correct milestone |
| 2.1–2.4 | Label PRs | Auto-suggest + human label low-confidence |
| 3.1–3.3 | Reviews & Grouping | Request Copilot reviews → refresh → group by label |
| 4.1–4.2 | Summaries & Final | Generate grouped summaries, then consolidate |
## Detailed workflow docs
Do not read all steps at once—only read the step you are executing.
- [Step 1: Collection & Milestones](./references/step1-collection.md)
- [Step 2: Labeling PRs](./references/step2-labeling.md)
- [Step 3: Reviews & Grouping](./references/step3-review-grouping.md)
- [Step 4: Summarization](./references/step4-summarization.md)
## Available Scripts
| Script | Purpose |
|--------|---------|
| [dump-prs-since-commit.ps1](./scripts/dump-prs-since-commit.ps1) | Fetch PRs between commits/tags |
| [group-prs-by-label.ps1](./scripts/group-prs-by-label.ps1) | Group PRs into CSVs |
| [collect-or-apply-milestones.ps1](./scripts/collect-or-apply-milestones.ps1) | Assign milestones |
| [diff_prs.ps1](./scripts/diff_prs.ps1) | Incremental PR diff |
## References
- [Sample Output](./references/SampleOutput.md) - Example summary formatting
- [Detailed Instructions](./references/Instruction.md) - Legacy full documentation
## Conventions
- **Terminal usage**: Disabled by default; only run scripts when user explicitly requests
- **Batch generation**: Generate ALL grouped_md files in one pass, then human reviews
- **PR order**: Preserve order from `sorted_prs.csv` in all outputs
- **Label filtering**: Keeps `Product-*`, `Area-*`, `GitHub*`, `*Plugin`, `Issue-*` (filter sketched below)
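A sketch of that filter as a regular expression (illustrative; the grouping script's actual patterns may differ):
```powershell
$keep   = '^(Product-|Area-|GitHub|Issue-)|Plugin$'
$labels = 'Product-FancyZones;Needs-Triage;RunPlugin' -split ';'
$labels | Where-Object { $_ -match $keep }   # -> Product-FancyZones, RunPlugin
```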
## Troubleshooting
| Issue | Solution |
|-------|----------|
| `gh` command not found | Install GitHub CLI and add to PATH |
| No PRs returned | Verify milestone title matches exactly |
| Empty CopilotSummary | Request Copilot reviews first, then re-run dump |
| Many unlabeled PRs | Return to labeling step before grouping |

View File

@@ -0,0 +1,9 @@
- Added mouse button actions so you can choose what left, right, or middle click does. Thanks [@PesBandi](https://github.com/PesBandi)!
- Aligned window styling with current Windows theme for a cleaner look. Thanks [@sadirano](https://github.com/sadirano)!
- Ensured screen readers are notified when the selected item in the list changes for better accessibility.
- Implemented a configurable UI test pipeline that can use pre-built official releases instead of building everything from scratch, reducing test execution time, which previously exceeded 2 hours.
- Fixed Alt+Left Arrow navigation not working when search box contains text. Thanks [@jiripolasek](https://github.com/jiripolasek)!

View File

@@ -0,0 +1,143 @@
# Step 1: Collection and Milestones
## 1.0 To-do
- 1.0.1 Generate MemberList.md (REQUIRED)
- 1.1 Collect PRs
- 1.2 Assign Milestones (REQUIRED)
## Required Variables
⚠️ **Before starting**, confirm these values with the user:
| Variable | Description | Example |
|----------|-------------|---------|
| `{{ReleaseVersion}}` | Target release version | `0.97` |
| `{{PreviousReleaseTag}}` | Previous release tag from releases page | `v0.96.1` |
**If user hasn't specified `{{ReleaseVersion}}`, ASK:** "What release version are we generating notes for? (e.g., 0.97)"
**`{{PreviousReleaseTag}}` is derived from the releases page, not user input.** Use the latest published release tag (top of the page). You will use its tag name and tag commit SHA in Step 1.
---
## 1.0.1 Generate MemberList.md (REQUIRED)
Create `Generated Files/ReleaseNotes/MemberList.md` from the **PowerToys core team** section in [COMMUNITY.md](../../../../COMMUNITY.md).
Rules:
- One GitHub username per line, **no** `@` prefix.
- Use the usernames exactly as listed in the core team section.
- Do not include former team members or other sections.
Example (format only):
```
example-user
another-user
```
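A hedged sketch of automating this extraction (assumes the core-team section is a markdown heading followed by `@username` entries; verify against the real COMMUNITY.md layout):
```powershell
$inSection = $false
$names = foreach ($line in Get-Content COMMUNITY.md) {
    if ($line -match 'PowerToys core team') { $inSection = $true; continue }
    if ($inSection -and $line -match '^#') { break }                      # next heading ends the scan
    if ($inSection -and $line -match '@([A-Za-z0-9-]+)') { $Matches[1] }  # drop the '@' prefix
}
$names | Set-Content 'Generated Files/ReleaseNotes/MemberList.md'
```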
---
## 1.1 Collect PRs
### 1.1.1 Get the previous release commit
1. Open the [PowerToys releases page](https://github.com/microsoft/PowerToys/releases/)
2. Find the latest release (e.g., v0.96.1, which should be at the top)
3. Set `{{PreviousReleaseTag}}` to that tag name (e.g., `v0.96.1`)
4. Copy the full tag commit SHA as `{{SHALastRelease}}`
**If the release SHA is not in your branch history:** Use the helper script to find an equivalent commit on the target branch by matching the commit title:
```powershell
pwsh ./.github/skills/release-note-generation/scripts/find-commit-by-title.ps1 `
-Commit '{{SHALastRelease}}' `
-Branch 'stable'
```
### 1.1.2 Run collection script against stable branch
```powershell
# Collect PRs from previous release to current HEAD of stable branch
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{SHALastRelease}}' `
-Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
**Parameters:**
- `-StartCommit` - Previous release tag or commit SHA (exclusive)
- `-Branch` - Always use `stable` branch, not `main` (script uses `origin/stable` as the end ref)
- `-EndCommit` - Optional override if you need a custom end ref
- `-OutputDir` - Output directory for generated files
**Reliability check:** If the script reports “No commits found”, the stable branch has not moved since the last release. In that case, either:
- Confirm this is expected and stop (no new release notes), or
- Re-run against `main` to gather pending changes for the next release cycle.
The script detects both merge commits (`Merge pull request #12345`) and squash commits (`Feature (#12345)`).
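For intuition, the two title shapes can be matched like this (a sketch of the patterns only, not the script's actual code; the tag reuses the earlier example):
```powershell
$mergePattern  = 'Merge pull request #(\d+)'   # merge commits
$squashPattern = '\(#(\d+)\)\s*$'              # squash commits: 'Feature (#12345)'
git log --oneline 'v0.96.1..origin/stable' | ForEach-Object {
    if ($_ -match $mergePattern -or $_ -match $squashPattern) { [int]$Matches[1] }
}
```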
**Output** (in `Generated Files/ReleaseNotes/`):
- `milestone_prs.json` - raw PR data
- `sorted_prs.csv` - sorted PR list with columns: Id, Title, Labels, Author, Url, Body, CopilotSummary, NeedThanks
---
## 1.2 Assign Milestones (REQUIRED)
**Before generating release notes**, ensure all collected PRs have the correct milestone assigned.
⚠️ **CRITICAL:** Do NOT proceed to labeling until all PRs have milestones assigned.
### 1.2.1 Check current milestone status (dry run)
```powershell
# Dry run first to see what would be changed:
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
-DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
-ApplyMissing -WhatIf
```
This queries GitHub for each PR's current milestone and shows which PRs would be updated.
### 1.2.2 Apply milestones to PRs missing them
```powershell
# Apply for real:
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
-DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
-ApplyMissing
```
**Script Behavior:**
- Queries each PR's current milestone from GitHub
- PRs that already have a milestone are **skipped** (not overwritten)
- PRs missing a milestone get the default milestone applied
- Outputs `prs_with_milestone.csv` with (Id, Milestone) columns
- Produces summary: `Updated=X Skipped=Y Failed=Z`
**Validation:** After assignment, all PRs in `prs_with_milestone.csv` should have the target milestone.
---
## Additional Commands
### Collect milestones only (no changes to GitHub)
```powershell
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv'
```
### Local assignment only (fill blanks in CSV, no GitHub changes)
```powershell
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
-DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
-LocalAssign
```

View File

@@ -0,0 +1,131 @@
# Step 2: Label Unlabeled PRs
## 2.0 To-do
- 2.1 Identify unlabeled PRs (Agent Mode)
- 2.2 Suggest labels (Agent Mode)
- 2.3 Human label low-confidence PRs
- 2.4 Recheck labels, delete Unlabeled.csv, and re-collect
**Before grouping**, ensure all PRs have appropriate labels for categorization.
⚠️ **CRITICAL:** Do NOT proceed to grouping until all PRs have labels assigned. PRs without labels will end up in `Unlabeled.csv` and won't appear in the correct release note sections.
## 2.1 Identify unlabeled PRs (Agent Mode)
Read `sorted_prs.csv` and identify PRs with empty or missing `Labels` column.
For each unlabeled PR, analyze:
- **Title** - Often contains module name or feature
- **Body** - PR description with context
- **CopilotSummary** - AI-generated summary of changes
## 2.2 Suggest labels (Agent Mode)
For each unlabeled PR, suggest an appropriate label based on the content analysis.
**Output:** Create `Generated Files/ReleaseNotes/prs_label_review.md` with the following format:
```markdown
# PR Label Review
Generated: YYYY-MM-DD HH:mm:ss
## Summary
- Total unlabeled PRs: X
- High confidence: X
- Medium confidence: X
- Low confidence: X
---
## PRs Needing Review (sorted by confidence, low first)
| PR | Title | Suggested Label | Confidence | Reason |
|----|-------|-----------------|------------|--------|
| [#12347](url) | Some generic fix | ??? | Low | Unclear from content |
| [#12346](url) | Update dependencies | `Area-Build` | Medium | Body mentions NuGet packages |
```
Sort by confidence (low first) so the human reviews the most uncertain ones first.
After writing `prs_label_review.md`, **generate `prs_to_label.csv`, apply labels, and re-run collection** so the CSV/labels stay in sync:
```powershell
# Generate CSV from suggestions (agent)
# Apply labels
pwsh ./.github/skills/release-note-generation/scripts/apply-labels.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv'
# Refresh collection
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
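The "generate CSV" step has no dedicated script; a minimal sketch of what the agent produces (the `Id,Label` schema matches `apply-labels.ps1`; the PR numbers and labels below are illustrative):
```powershell
# Hypothetical suggestions taken from prs_label_review.md; replace with the real ones
$suggestions = @(
    [PSCustomObject]@{ Id = 12346; Label = 'Area-Build' }
    [PSCustomObject]@{ Id = 12347; Label = 'Product-Settings' }
)
$suggestions | Export-Csv 'Generated Files/ReleaseNotes/prs_to_label.csv' -NoTypeInformation
```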
## 2.3 Human label low-confidence PRs
Ask the human to label **low-confidence** PRs directly (in GitHub). Skip any they decide not to label.
## 2.4 Recheck labels, delete Unlabeled.csv, and re-collect
Recheck that all PRs now have labels. Delete `Unlabeled.csv` (if present), then re-run the collection script to update `sorted_prs.csv`:
```powershell
# Remove stale unlabeled output if it exists
Remove-Item 'Generated Files/ReleaseNotes/Unlabeled.csv' -ErrorAction SilentlyContinue
```
```powershell
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
---
## Common Label Mappings
| Keywords/Patterns | Suggested Label |
| ----------------- | --------------- |
| Advanced Paste, AP, clipboard, paste | `Product-Advanced Paste` |
| CmdPal, Command Palette, cmdpal | `Product-Command Palette` |
| FancyZones, zones, layout | `Product-FancyZones` |
| ZoomIt, zoom, screen annotation | `Product-ZoomIt` |
| Settings, settings-ui, Quick Access, flyout | `Product-Settings` |
| Installer, setup, MSI, MSIX, WiX | `Area-Setup/Install` |
| Build, pipeline, CI/CD, msbuild | `Area-Build` |
| Test, unit test, UI test, fuzz | `Area-Tests` |
| Localization, loc, translation, resw | `Area-Localization` |
| Foundry, AI, LLM | `Product-Advanced Paste` (AI features) |
| Mouse Without Borders, MWB | `Product-Mouse Without Borders` |
| PowerRename, rename, regex | `Product-PowerRename` |
| Peek, preview, file preview | `Product-Peek` |
| Image Resizer, resize | `Product-Image Resizer` |
| LightSwitch, theme, dark mode | `Product-LightSwitch` |
| Quick Accent, accent, diacritics | `Product-Quick Accent` |
| Awake, keep awake, caffeine | `Product-Awake` |
| ColorPicker, color picker, eyedropper | `Product-ColorPicker` |
| Hosts, hosts file | `Product-Hosts` |
| Keyboard Manager, remap | `Product-Keyboard Manager` |
| Mouse Highlighter | `Product-Mouse Highlighter` |
| Mouse Jump | `Product-Mouse Jump` |
| Find My Mouse | `Product-Find My Mouse` |
| Mouse Pointer Crosshairs | `Product-Mouse Pointer Crosshairs` |
| Shortcut Guide | `Product-Shortcut Guide` |
| Text Extractor, OCR, PowerOCR | `Product-Text Extractor` |
| Workspaces | `Product-Workspaces` |
| File Locksmith | `Product-File Locksmith` |
| Crop And Lock | `Product-CropAndLock` |
| Environment Variables | `Product-Environment Variables` |
| New+ | `Product-New+` |
## Label Filtering Rules
The grouping script keeps labels matching these patterns:
- `Product-*`
- `Area-*`
- `GitHub*`
- `*Plugin`
- `Issue-*`
Other labels are ignored for grouping purposes.
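For reference, the filter boils down to the following sketch (`$labels` stands for a PR's label names; the wildcards are the same ones used in `dump-prs-since-commit.ps1`):
```powershell
# Keep only labels relevant for grouping
$kept = $labels | Where-Object {
    $_ -like 'Product-*' -or $_ -like 'Area-*' -or $_ -like 'GitHub*' -or
    $_ -like '*Plugin' -or $_ -like 'Issue-*'
}
```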

View File

@@ -0,0 +1,37 @@
# Step 3: Copilot Reviews and Grouping
## 3.0 To-do
- 3.1 Request Copilot Reviews (Agent Mode)
- 3.2 Refresh PR Data
- 3.3 Group PRs by Label
## 3.1 Request Copilot Reviews (Agent Mode)
Use MCP tools to request Copilot reviews for all PRs in `Generated Files/ReleaseNotes/sorted_prs.csv`:
- Use `mcp_github_request_copilot_review` for each PR ID
- Do NOT generate or run scripts for this step
---
## 3.2 Refresh PR Data
Re-run the collection script to capture Copilot review summaries into the `CopilotSummary` column:
```powershell
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
---
## 3.3 Group PRs by Label
```powershell
pwsh ./.github/skills/release-note-generation/scripts/group-prs-by-label.ps1 -CsvPath 'Generated Files/ReleaseNotes/sorted_prs.csv' -OutDir 'Generated Files/ReleaseNotes/grouped_csv'
```
Creates `Generated Files/ReleaseNotes/grouped_csv/` with one CSV per label combination.
**Validation:** The `Unlabeled.csv` file should be minimal (ideally empty). If many PRs remain unlabeled, return to Step 2 (see [step2-labeling.md](./step2-labeling.md)).
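A quick check, as a minimal sketch (assuming the output directory above):
```powershell
# How many PRs landed in Unlabeled.csv? 0 means every PR was categorized.
$unlabeled = 'Generated Files/ReleaseNotes/grouped_csv/Unlabeled.csv'
if (Test-Path $unlabeled) { @(Import-Csv $unlabeled).Count } else { 0 }
```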

View File

@@ -0,0 +1,88 @@
# Step 4: Summaries and Final Release Notes
## 4.0 To-do
- 4.1 Generate Summary Markdown (Agent Mode)
- 4.2 Produce Final Release Notes File
## 4.1 Generate Summary Markdown (Agent Mode)
For each CSV in `Generated Files/ReleaseNotes/grouped_csv/`, create a markdown file in `Generated Files/ReleaseNotes/grouped_md/`.
⚠️ **IMPORTANT:** Generate **ALL** markdown files first. Do NOT pause between files or ask for feedback during generation. Complete the entire batch, then human reviews afterwards.
### Structure per file
**1. Bullet list** - one concise, user-facing line per PR:
- Use the “Verb-ed + Scenario + Impact” sentence structure; make readers think, “That's exactly what I need” or “Yes, that's an awesome fix.” The “impact” can be end-user focused (written to convey user excitement) or technical (performance/stability) when the user-facing impact is minimal.
- If the impact is unremarkable or unclear, mark the PR as needing a human summary
- Source from Title, Body, and CopilotSummary (prefer CopilotSummary when available)
- If the column `NeedThanks` in CSV is `True`, append: `Thanks [@Author](https://github.com/Author)!`
- Do NOT include PR numbers in bullet lines
- Do NOT mention “security” or “privacy” issues; the details are not public and could be leveraged by attackers against earlier versions. Instead, describe the user-facing scenario, usage, or impact.
- If confidence < 70%, write: `Human Summary Needed: <PR full link>`
**See [SampleOutput.md](./SampleOutput.md) for examples of well-written bullet summaries.**
**2. Three-column table** (same PR order):
- Column 1: Concise summary (same as bullet)
- Column 2: PR link `[#ID](URL)`
- Column 3: Confidence level (High/Medium/Low)
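An illustrative row (the summary, PR number, and URL are hypothetical):
```markdown
| Summary | PR | Confidence |
|---------|----|------------|
| Fixed timed mode expiration in Awake | [#12345](https://github.com/microsoft/PowerToys/pull/12345) | High |
```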
### Review Process (AFTER all files generated)
- Human reviews each `grouped_md/*.md` file and requests rewrites as needed
- The human may say "rewrite Product-X" or "combine these bullets"; apply the changes to that specific file
- Do NOT interrupt generation to ask for feedback
---
## 4.2 Produce Final Release Notes File
Once all `grouped_md/*.md` files are reviewed and approved, consolidate into a single release notes file.
**Output:** `Generated Files/ReleaseNotes/v{{ReleaseVersion}}-release-notes.md`
### Structure
**1. Highlights section** (top):
- 8-12 bullets covering the most user-visible features and impactful fixes
- Pattern: `**Module**: brief description`
- Avoid internal refactors; focus on what users will notice
**2. Module sections** (alphabetical order):
- One section per product (Advanced Paste, Awake, Command Palette, etc.)
- Migrate bullet summaries from the approved `grouped_md/Product-*.md` files
- Add one 'Development' section for all remaining summaries from the approved `grouped_md/Area-*.md` files
- Re-review end to end, group release improvements by section, and move the most important items to the top of each section.
Items in the Development section that overlap with a specific module should be moved to that module's section.
### Example Final Structure
```markdown
# PowerToys v{{ReleaseVersion}} Release Notes
## Highlights
- **Command Palette**: Added theme customization and drag-and-drop support
- **Advanced Paste**: Image input for AI, color detection in clipboard history
- **FancyZones**: New CLI tool for command-line layout management
...
---
## Advanced Paste
- Wrapped paste option lists in a single ScrollViewer
- Added image input handling for AI-powered transformations
...
## Awake
- Fixed timed mode expiration. Thanks [@daverayment](https://github.com/daverayment)!
...
---
## Development
...
```

View File

@@ -0,0 +1,90 @@
<#
.SYNOPSIS
Apply labels to PRs from a CSV file.
.DESCRIPTION
Reads a CSV with Id and Label columns and applies the specified label to each PR via GitHub CLI.
Supports dry-run mode to preview changes before applying.
.PARAMETER InputCsv
CSV file with Id and Label columns. Default: prs_to_label.csv
.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys
.PARAMETER WhatIf
Dry run - show what would be applied without making changes.
.EXAMPLE
pwsh ./apply-labels.ps1 -InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv'
.EXAMPLE
pwsh ./apply-labels.ps1 -InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv' -WhatIf
.NOTES
Requires: gh CLI authenticated with repo write access.
Input CSV format:
Id,Label
12345,Product-Advanced Paste
12346,Product-Settings
#>
[CmdletBinding()] param(
[Parameter(Mandatory=$false)][string]$InputCsv = 'prs_to_label.csv',
[Parameter(Mandatory=$false)][string]$Repo = 'microsoft/PowerToys',
[switch]$WhatIf
)
$ErrorActionPreference = 'Stop'
function Write-Info($m){ Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m){ Write-Host "[warn] $m" -ForegroundColor Yellow }
function Write-Err($m){ Write-Host "[error] $m" -ForegroundColor Red }
function Write-OK($m){ Write-Host "[ok] $m" -ForegroundColor Green }
if (-not (Get-Command gh -ErrorAction SilentlyContinue)) { Write-Err "GitHub CLI 'gh' not found in PATH"; exit 1 }
if (-not (Test-Path -LiteralPath $InputCsv)) { Write-Err "Input CSV not found: $InputCsv"; exit 1 }
$rows = Import-Csv -LiteralPath $InputCsv
if (-not $rows) { Write-Info "No rows in CSV."; exit 0 }
$firstCols = $rows[0].PSObject.Properties.Name
if (-not ($firstCols -contains 'Id' -and $firstCols -contains 'Label')) {
Write-Err "CSV must contain 'Id' and 'Label' columns"; exit 1
}
Write-Info "Processing $($rows.Count) label assignments..."
if ($WhatIf) { Write-Warn "DRY RUN - no changes will be made" }
$applied = 0
$skipped = 0
$failed = 0
foreach ($row in $rows) {
$id = $row.Id
$label = $row.Label
if ([string]::IsNullOrWhiteSpace($id) -or [string]::IsNullOrWhiteSpace($label)) {
Write-Warn "Skipping row with empty Id or Label"
$skipped++
continue
}
if ($WhatIf) {
Write-Info "Would apply label '$label' to PR #$id"
$applied++
continue
}
try {
gh pr edit $id --repo $Repo --add-label $label 2>&1 | Out-Null
Write-OK "Applied '$label' to PR #$id"
$applied++
} catch {
Write-Warn "Failed to apply label to PR #${id}: $_"
$failed++
}
}
Write-Info ""
Write-Info "Summary: Applied=$applied Skipped=$skipped Failed=$failed"

View File

@@ -0,0 +1,172 @@
<#
.SYNOPSIS
Collect existing PR milestones or (optionally) assign/apply a milestone to missing PRs in one script.
.DESCRIPTION
This unified script merges the behaviors of the previous add-milestone-column (collector) and
set-milestones-missing (remote updater) scripts.
Modes (controlled by switches):
1. Collect (default): For each PR Id in the input CSV, queries GitHub for the current milestone and
outputs a two-column CSV (Id,Milestone), leaving blanks where none are set.
2. LocalAssign: Same as Collect, but rows that end up blank are assigned the value of -DefaultMilestone
in memory (does NOT touch GitHub). Useful for quickly preparing a fully populated CSV.
3. ApplyMissing: After determining which PRs have no milestone, calls the GitHub API to set their milestone
to -DefaultMilestone. Requires the milestone to already exist (open). Network + write.
You can combine LocalAssign and ApplyMissing: the remote update uses the existing live state; LocalAssign only
affects the output CSV/pipeline objects.
.PARAMETER InputCsv
Source CSV with at least an Id column. Default: sorted_prs.csv
.PARAMETER OutputCsv
Destination CSV for collected (and optionally locally assigned) milestones. Default: prs_with_milestone.csv
.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys
.PARAMETER DefaultMilestone
Milestone title used when -LocalAssign or -ApplyMissing is specified. Default: 'PowerToys 0.97'
.PARAMETER Offline
Skip ALL GitHub lookups / updates. Implies Collect-only with all Milestone cells blank (unless LocalAssign).
.PARAMETER LocalAssign
Populate empty Milestone cells in the output with -DefaultMilestone (does not modify GitHub).
.PARAMETER ApplyMissing
For PRs which currently have no milestone (live on GitHub), set them to -DefaultMilestone via Issues API.
.PARAMETER WhatIf
Dry run for ApplyMissing: show intended remote changes without performing PATCH requests.
.EXAMPLE
# Collect only
pwsh ./collect-or-apply-milestones.ps1
.EXAMPLE
# Collect and fill blanks locally in the output only
pwsh ./collect-or-apply-milestones.ps1 -LocalAssign
.EXAMPLE
# Collect and remotely apply milestone to missing PRs
pwsh ./collect-or-apply-milestones.ps1 -ApplyMissing
.EXAMPLE
# Dry run remote application
pwsh ./collect-or-apply-milestones.ps1 -ApplyMissing -WhatIf
.EXAMPLE
# Offline local assignment
pwsh ./collect-or-apply-milestones.ps1 -Offline -LocalAssign -DefaultMilestone 'PowerToys 0.96'
.NOTES
Requires the gh CLI unless -Offline is specified; -ApplyMissing cannot be combined with -Offline.
Remote apply path queries milestones to resolve numeric ID.
#>
[CmdletBinding()] param(
[Parameter(Mandatory=$false)][string]$InputCsv = 'sorted_prs.csv',
[Parameter(Mandatory=$false)][string]$OutputCsv = 'prs_with_milestone.csv',
[Parameter(Mandatory=$false)][string]$Repo = 'microsoft/PowerToys',
[Parameter(Mandatory=$false)][string]$DefaultMilestone = 'PowerToys 0.97',
[switch]$Offline,
[switch]$LocalAssign,
[switch]$ApplyMissing,
[switch]$WhatIf
)
$ErrorActionPreference = 'Stop'
function Write-Info($m){ Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m){ Write-Host "[warn] $m" -ForegroundColor Yellow }
function Write-Err($m){ Write-Host "[error] $m" -ForegroundColor Red }
if (-not (Test-Path -LiteralPath $InputCsv)) { Write-Err "Input CSV not found: $InputCsv"; exit 1 }
$rows = Import-Csv -LiteralPath $InputCsv
if (-not $rows) { Write-Warn "Input CSV has no rows."; @() | Export-Csv -NoTypeInformation -LiteralPath $OutputCsv; exit 0 }
if (-not ($rows[0].PSObject.Properties.Name -contains 'Id')) { Write-Err "Input CSV missing 'Id' column."; exit 1 }
$needGh = -not $Offline # gh is required for any online operation (collect or apply)
if ($needGh -and -not (Get-Command gh -ErrorAction SilentlyContinue)) { Write-Err "GitHub CLI 'gh' not found. Use -Offline or install gh."; exit 1 }
# Step 1: Collect current milestone titles
$milestoneCache = @{}
$collected = New-Object System.Collections.Generic.List[object]
$idx = 0
foreach ($row in $rows) {
$idx++
$id = $row.Id
if (-not $id) { Write-Warn "Row $idx missing Id; skipping"; continue }
$ms = ''
if (-not $Offline) {
if ($milestoneCache.ContainsKey($id)) { $ms = $milestoneCache[$id] }
else {
try {
$json = gh pr view $id --repo $Repo --json milestone 2>$null | ConvertFrom-Json
if ($json -and $json.milestone -and $json.milestone.title) { $ms = $json.milestone.title }
} catch {
Write-Warn "Failed to fetch PR #$id milestone: $_"
}
$milestoneCache[$id] = $ms
}
}
$collected.Add([PSCustomObject]@{ Id = $id; Milestone = $ms }) | Out-Null
}
# Step 2: Remote apply (if requested)
$applySummary = @()
if ($ApplyMissing) {
if ($Offline) { Write-Err "Cannot use -ApplyMissing with -Offline."; exit 1 }
Write-Info "Resolving milestone id for '$DefaultMilestone' ..."
$milestonesRaw = gh api repos/$Repo/milestones --paginate --jq '.[] | {number,title,state}'
$msObj = $milestonesRaw | ConvertFrom-Json | Where-Object { $_.title -eq $DefaultMilestone -and $_.state -eq 'open' } | Select-Object -First 1
if (-not $msObj) { Write-Err "Milestone '$DefaultMilestone' not found/open."; exit 1 }
$msNumber = $msObj.number
$targets = $collected | Where-Object { [string]::IsNullOrWhiteSpace($_.Milestone) }
Write-Info ("ApplyMissing: {0} PR(s) without milestone." -f $targets.Count)
foreach ($t in $targets) {
$id = $t.Id
try {
# Verify still missing live
$current = gh pr view $id --repo $Repo --json milestone --jq '.milestone.title // ""'
if ($current) {
$applySummary += [PSCustomObject]@{ Id=$id; Action='Skip (already has)'; Milestone=$current; Status='OK' }
continue
}
if ($WhatIf) {
$applySummary += [PSCustomObject]@{ Id=$id; Action='Would set'; Milestone=$DefaultMilestone; Status='DRY RUN' }
continue
}
gh api -X PATCH -H 'Accept: application/vnd.github+json' repos/$Repo/issues/$id -F milestone=$msNumber | Out-Null # -F sends milestone as a number, which the API expects
$applySummary += [PSCustomObject]@{ Id=$id; Action='Set'; Milestone=$DefaultMilestone; Status='OK' }
# Reflect in collected object for CSV output if not LocalAssign already doing so
$t.Milestone = $DefaultMilestone
} catch {
$errText = $_ | Out-String
$applySummary += [PSCustomObject]@{ Id=$id; Action='Failed'; Milestone=$DefaultMilestone; Status=$errText.Trim() }
Write-Warn ("Failed to set milestone for PR #{0}: {1}" -f $id, ($errText.Trim()))
}
}
}
# Step 3: Local assignment (purely for output) AFTER remote so remote actual result not overwritten accidentally
if ($LocalAssign) {
foreach ($item in $collected) {
if ([string]::IsNullOrWhiteSpace($item.Milestone)) { $item.Milestone = $DefaultMilestone }
}
}
# Step 4: Export CSV
$collected | Export-Csv -LiteralPath $OutputCsv -NoTypeInformation -Encoding UTF8
Write-Info ("Wrote collected CSV -> {0}" -f (Resolve-Path -LiteralPath $OutputCsv))
# Step 5: Summaries
if ($ApplyMissing) {
$updated = ($applySummary | Where-Object { $_.Action -eq 'Set' }).Count
$skipped = ($applySummary | Where-Object { $_.Action -like 'Skip*' }).Count
$failed = ($applySummary | Where-Object { $_.Action -eq 'Failed' }).Count
Write-Info ("ApplyMissing summary: Updated={0} Skipped={1} Failed={2}" -f $updated, $skipped, $failed)
}
# Emit objects (final collected set)
return $collected

View File

@@ -0,0 +1,100 @@
<#
.SYNOPSIS
Produce an incremental PR CSV containing rows present in a newer full export but absent from a baseline export.
.DESCRIPTION
Compares two previously generated sorted PR CSV files (same schema). Any row whose key column value
(defaults to 'Number') does not exist in the baseline file is emitted to a new incremental CSV, preserving
the original column order. If no new rows are found, an empty CSV (with headers when determinable) is written.
.PARAMETER BaseCsv
Path to the baseline (earlier) PR CSV.
.PARAMETER AllCsv
Path to the newer full PR CSV containing a superset (or equal set) of the baseline rows.
.PARAMETER OutCsv
Path to write the incremental CSV containing only new rows.
.PARAMETER Key
Column name used as unique identifier (defaults to 'Number'). Must exist in both CSVs.
.EXAMPLE
pwsh ./diff_prs.ps1 -BaseCsv sorted_prs_prev.csv -AllCsv sorted_prs.csv -OutCsv sorted_prs_incremental.csv
.NOTES
Requires: PowerShell 7+, both CSVs with identical column schemas.
Exit code 0 on success (even if zero incremental rows). Throws on missing files.
#>
[CmdletBinding()] param(
[Parameter(Mandatory=$false)][string]$BaseCsv = "./sorted_prs_93_round1.csv",
[Parameter(Mandatory=$false)][string]$AllCsv = "./sorted_prs.csv",
[Parameter(Mandatory=$false)][string]$OutCsv = "./sorted_prs_93_incremental.csv",
[Parameter(Mandatory=$false)][string]$Key = "Number"
)
Set-StrictMode -Version Latest
$ErrorActionPreference = 'Stop'
function Write-Info($m) { Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m) { Write-Host "[warn] $m" -ForegroundColor Yellow }
if (-not (Test-Path -LiteralPath $BaseCsv)) { throw "Base CSV not found: $BaseCsv" }
if (-not (Test-Path -LiteralPath $AllCsv)) { throw "All CSV not found: $AllCsv" }
# Load CSVs
$baseRows = Import-Csv -LiteralPath $BaseCsv
$allRows = Import-Csv -LiteralPath $AllCsv
if (-not $baseRows) { Write-Warn "Base CSV has no rows." }
if (-not $allRows) { Write-Warn "All CSV has no rows." }
# Validate key presence
if ($baseRows -and -not ($baseRows[0].PSObject.Properties.Name -contains $Key)) { throw "Key column '$Key' not found in base CSV." }
if ($allRows -and -not ($allRows[0].PSObject.Properties.Name -contains $Key)) { throw "Key column '$Key' not found in all CSV." }
# Build a set of existing keys from base
$set = New-Object 'System.Collections.Generic.HashSet[string]'
foreach ($row in $baseRows) {
$val = [string]($row.$Key)
if ($null -ne $val) { [void]$set.Add($val) }
}
# Filter rows in AllCsv whose key is not in base (these are the new / incremental rows)
$incremental = @()
foreach ($row in $allRows) {
$val = [string]($row.$Key)
if (-not $set.Contains($val)) { $incremental += $row }
}
# Preserve column order from the All CSV
$columns = @()
if ($allRows.Count -gt 0) {
$columns = $allRows[0].PSObject.Properties.Name
}
try {
if ($incremental.Count -gt 0) {
if ($columns.Count -gt 0) {
$incremental | Select-Object -Property $columns | Export-Csv -LiteralPath $OutCsv -NoTypeInformation -Encoding UTF8
} else {
$incremental | Export-Csv -LiteralPath $OutCsv -NoTypeInformation -Encoding UTF8
}
} else {
# Write an empty CSV with headers if we know them (facilitates downstream tooling expecting header row)
if ($columns.Count -gt 0) {
# Emit the header row only; exporting a placeholder object would also write a blank data row
($columns -join ',') | Out-File -LiteralPath $OutCsv -Encoding UTF8
} else {
'' | Out-File -LiteralPath $OutCsv -Encoding UTF8
}
}
Write-Info ("Incremental rows: {0}" -f $incremental.Count)
Write-Info ("Output: {0}" -f (Resolve-Path -LiteralPath $OutCsv))
}
catch {
Write-Host "[error] Failed writing output CSV: $_" -ForegroundColor Red
exit 1
}

View File

@@ -0,0 +1,344 @@
<#
.SYNOPSIS
Export merged PR metadata between two commits (exclusive start, inclusive end) to JSON and CSV.
.DESCRIPTION
Identifies merge/squash commits reachable from EndCommit but not StartCommit, extracts PR numbers,
queries GitHub for metadata plus (optionally) Copilot review/comment summaries, filters labels, then
emits a JSON artifact and a sorted CSV (first label alphabetical).
.PARAMETER StartCommit
Exclusive starting commit (SHA, tag, or ref). Commits AFTER this one are considered.
.PARAMETER EndCommit
Inclusive ending commit (SHA, tag, or ref). If not provided, uses origin/<Branch> when Branch is set; otherwise uses HEAD.
.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys.
.PARAMETER OutputCsv
Destination CSV path. Default: sorted_prs.csv.
.PARAMETER OutputJson
Destination JSON path containing raw PR objects. Default: milestone_prs.json.
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -Branch stable
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -EndCommit 89ef7654 -OutputCsv delta.csv
.NOTES
Requires: git, gh (authenticated). No Set-StrictMode to keep parity with existing release scripts.
#>
[CmdletBinding()]
param(
[Parameter(Mandatory = $true)][string]$StartCommit, # exclusive start (commits AFTER this one)
[string]$EndCommit,
[string]$Branch,
[string]$Repo = "microsoft/PowerToys",
[string]$OutputDir,
[string]$OutputCsv = "sorted_prs.csv",
[string]$OutputJson = "milestone_prs.json"
)
<#
.SYNOPSIS
Dump merged PR information whose merge commits are reachable from EndCommit but not from StartCommit.
.DESCRIPTION
Uses git rev-list to compute commits in the (StartCommit, EndCommit] range, extracts PR numbers from merge commit messages,
queries GitHub (gh CLI) for details, then outputs a CSV.
PR merge commit messages in PowerToys generally contain patterns like:
Merge pull request #12345 from ...
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -Branch stable
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -EndCommit 89ef7654 -OutputCsv changes.csv
.NOTES
Requires: gh CLI authenticated; git available in working directory (must be inside PowerToys repo clone).
CopilotSummary behavior:
- Attempts to locate the latest GitHub Copilot authored review (preferred).
- If no review is found, lazily fetches PR comments to look for a Copilot-authored comment.
- Normalizes whitespace and strips newlines. Empty when no Copilot activity detected.
- Run with -Verbose to see whether the summary came from a 'review' or 'comment' source.
#>
function Write-Info($msg) { Write-Host $msg -ForegroundColor Cyan }
function Write-Warn($msg) { Write-Host $msg -ForegroundColor Yellow }
function Write-Err($msg) { Write-Host $msg -ForegroundColor Red }
function Write-DebugMsg($msg) { if ($PSBoundParameters.ContainsKey('Verbose') -or $VerbosePreference -eq 'Continue') { Write-Host "[VERBOSE] $msg" -ForegroundColor DarkGray } }
# Load member list from Generated Files/ReleaseNotes/MemberList.md (internal team - no thanks needed)
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$repoRoot = Resolve-Path (Join-Path $scriptDir "..\..\..\..")
$defaultMemberListPath = Join-Path $repoRoot "Generated Files\ReleaseNotes\MemberList.md"
$memberListPath = $defaultMemberListPath
if ($OutputDir) {
$memberListFromOutputDir = Join-Path $OutputDir "MemberList.md"
if (Test-Path $memberListFromOutputDir) {
$memberListPath = $memberListFromOutputDir
}
}
$memberList = @()
if (Test-Path $memberListPath) {
$memberListContent = Get-Content $memberListPath -Raw
# Extract usernames - skip markdown code fence lines, get all non-empty lines
$memberList = ($memberListContent -split "`n") | Where-Object { $_ -notmatch '^\s*```' -and $_.Trim() -ne '' } | ForEach-Object { $_.Trim() }
if (-not $memberList -or $memberList.Count -eq 0) {
Write-Err "MemberList.md is empty at $memberListPath"
exit 1
}
Write-DebugMsg "Loaded $($memberList.Count) members from MemberList.md"
} else {
Write-Err "MemberList.md not found at $memberListPath"
exit 1
}
# Validate we are in a git repo
#if (-not (Test-Path .git)) {
# Write-Err "Current directory does not appear to be the root of a git repository."
# exit 1
#}
# Resolve output directory (if specified)
if ($OutputDir) {
if (-not (Test-Path $OutputDir)) {
New-Item -ItemType Directory -Path $OutputDir -Force | Out-Null
}
if (-not [System.IO.Path]::IsPathRooted($OutputCsv)) {
$OutputCsv = Join-Path $OutputDir $OutputCsv
}
if (-not [System.IO.Path]::IsPathRooted($OutputJson)) {
$OutputJson = Join-Path $OutputDir $OutputJson
}
}
# Resolve commits
try {
if ($Branch) {
Write-Info "Fetching latest '$Branch' from origin (with tags)..."
git fetch origin $Branch --tags | Out-Null
if ($LASTEXITCODE -ne 0) { throw "git fetch origin $Branch --tags failed" }
}
$startSha = (git rev-parse --verify $StartCommit) 2>$null
if (-not $startSha) { throw "StartCommit '$StartCommit' not found" }
if ($Branch) {
$branchRef = $Branch
$branchSha = (git rev-parse --verify $branchRef) 2>$null
if (-not $branchSha) {
$branchRef = "origin/$Branch"
$branchSha = (git rev-parse --verify $branchRef) 2>$null
}
if (-not $branchSha) { throw "Branch '$Branch' not found" }
if (-not $PSBoundParameters.ContainsKey('EndCommit') -or [string]::IsNullOrWhiteSpace($EndCommit)) {
$EndCommit = $branchRef
}
}
if (-not $PSBoundParameters.ContainsKey('EndCommit') -or [string]::IsNullOrWhiteSpace($EndCommit)) {
$EndCommit = "HEAD"
}
$endSha = (git rev-parse --verify $EndCommit) 2>$null
if (-not $endSha) { throw "EndCommit '$EndCommit' not found" }
}
catch {
Write-Err $_
exit 1
}
Write-Info "Collecting commits between $startSha..$endSha (excluding start, including end)."
# Get list of commits reachable from end but not from start.
# IMPORTANT: In PowerShell, the .. operator creates a numeric/char range. If $startSha and $endSha look like hex strings,
# `$startSha..$endSha` must be passed as a single string argument.
$rangeArg = "$startSha..$endSha"
$commitList = git rev-list $rangeArg
# Normalize list (filter out empty strings)
$normalizedCommits = $commitList | Where-Object { $_ -and $_.Trim() -ne '' }
$commitCount = ($normalizedCommits | Measure-Object).Count
Write-DebugMsg ("Raw commitList length (including blanks): {0}" -f (($commitList | Measure-Object).Count))
Write-DebugMsg ("Normalized commit count: {0}" -f $commitCount)
if ($commitCount -eq 0) {
Write-Warn "No commits found in specified range ($startSha..$endSha)."; exit 0
}
Write-DebugMsg ("First 5 commits: {0}" -f (($normalizedCommits | Select-Object -First 5) -join ', '))
<#
Extract PR numbers from commits.
Patterns handled:
1. Merge commits: 'Merge pull request #12345 from ...'
2. Squash commits: 'Some feature change (#12345)' (GitHub default squash format)
We collect both. If a commit matches both (unlikely), it's deduped later.
#>
# Extract PR numbers from merge or squash commits
$mergeCommits = @()
foreach ($c in $normalizedCommits) {
$subject = git show -s --format=%s $c
$matched = $false
# Pattern 1: Traditional merge commit
if ($subject -match 'Merge pull request #([0-9]+) ') {
$prNumber = [int]$matches[1]
$mergeCommits += [PSCustomObject]@{ Sha = $c; Pr = $prNumber; Subject = $subject; Pattern = 'merge' }
Write-DebugMsg "Matched merge PR #$prNumber in commit $c"
$matched = $true
}
# Pattern 2: Squash merge subject line with ' (#12345)' at end (allow possible whitespace before paren)
if ($subject -match '\(#([0-9]+)\)$') {
$prNumber2 = [int]$matches[1]
# Avoid duplicate object if pattern 1 already captured same number for same commit
if (-not ($mergeCommits | Where-Object { $_.Sha -eq $c -and $_.Pr -eq $prNumber2 })) {
$mergeCommits += [PSCustomObject]@{ Sha = $c; Pr = $prNumber2; Subject = $subject; Pattern = 'squash' }
Write-DebugMsg "Matched squash PR #$prNumber2 in commit $c"
}
$matched = $true
}
if (-not $matched) {
Write-DebugMsg "No PR pattern in commit $c : $subject"
}
}
if (-not $mergeCommits -or $mergeCommits.Count -eq 0) {
Write-Warn "No merge commits with PR numbers found in range."; exit 0
}
# Deduplicate PR numbers (in case of revert or merges across branches)
$prNumbers = $mergeCommits | Select-Object -ExpandProperty Pr -Unique | Sort-Object
Write-Info ("Found {0} unique PRs: {1}" -f $prNumbers.Count, ($prNumbers -join ', '))
Write-DebugMsg ("Total merge commits examined: {0}" -f $mergeCommits.Count)
# Query GitHub for each PR
$prDetails = @()
function Get-CopilotSummaryFromPrJson {
param(
[Parameter(Mandatory=$true)]$PrJson,
[switch]$VerboseMode
)
# Returns a hashtable with Summary and Source keys.
$result = @{ Summary = ""; Source = "" }
if (-not $PrJson) { return $result }
$candidateAuthors = @(
'github-copilot[bot]', 'github-copilot', 'copilot'
)
# 1. Reviews (preferred): pick the LONGEST valid Copilot body, not the most recent
$reviews = $PrJson.reviews
if ($reviews) {
$copilotReviews = $reviews | Where-Object {
($candidateAuthors -contains $_.author.login -or $_.author.login -like '*copilot*') -and $_.body -and $_.body.Trim() -ne ''
}
if ($copilotReviews) {
$longest = $copilotReviews | Sort-Object { $_.body.Length } -Descending | Select-Object -First 1
if ($longest) {
$body = $longest.body
$norm = ($body -replace "`r", '') -replace "`n", ' '
$norm = $norm -replace '\s+', ' '
$result.Summary = $norm
$result.Source = 'review'
if ($VerboseMode) { Write-DebugMsg "Selected Copilot review length=$($body.Length) (longest)." }
return $result
}
}
}
# 2. Comments fallback (some repos surface Copilot summaries as PR comments rather than review objects)
if ($null -eq $PrJson.comments) {
try {
# Lazy fetch comments only if needed
$commentsJson = gh pr view $PrJson.number --repo $Repo --json comments 2>$null | ConvertFrom-Json
if ($commentsJson -and $commentsJson.comments) {
$PrJson | Add-Member -NotePropertyName comments -NotePropertyValue $commentsJson.comments -Force
}
} catch {
if ($VerboseMode) { Write-DebugMsg "Failed to fetch comments for PR #$($PrJson.number): $_" }
}
}
if ($PrJson.comments) {
$copilotComments = $PrJson.comments | Where-Object {
($candidateAuthors -contains $_.author.login -or $_.author.login -like '*copilot*') -and $_.body -and $_.body.Trim() -ne ''
}
if ($copilotComments) {
$longestC = $copilotComments | Sort-Object { $_.body.Length } -Descending | Select-Object -First 1
if ($longestC) {
$body = $longestC.body
$norm = ($body -replace "`r", '') -replace "`n", ' '
$norm = $norm -replace '\s+', ' '
$result.Summary = $norm
$result.Source = 'comment'
if ($VerboseMode) { Write-DebugMsg "Selected Copilot comment length=$($body.Length) (longest)." }
return $result
}
}
}
return $result
}
foreach ($pr in $prNumbers) {
Write-Info "Fetching PR #$pr ..."
try {
# Include comments only if Verbose asked; if not, we lazily pull when reviews are missing
$fields = 'number,title,labels,author,url,body,reviews'
if ($PSBoundParameters.ContainsKey('Verbose')) { $fields += ',comments' }
$json = gh pr view $pr --repo $Repo --json $fields 2>$null | ConvertFrom-Json
if ($null -eq $json) { throw "Empty response" }
$copilot = Get-CopilotSummaryFromPrJson -PrJson $json -VerboseMode:($PSBoundParameters.ContainsKey('Verbose'))
if ($copilot.Summary -and $copilot.Source -and $PSBoundParameters.ContainsKey('Verbose')) {
Write-DebugMsg "Copilot summary source=$($copilot.Source) chars=$($copilot.Summary.Length)"
} elseif (-not $copilot.Summary) {
Write-DebugMsg "No Copilot summary found for PR #$pr"
}
# Filter labels
$filteredLabels = $json.labels | Where-Object {
($_.name -like "Product-*") -or
($_.name -like "Area-*") -or
($_.name -like "GitHub*") -or
($_.name -like "*Plugin") -or
($_.name -like "Issue-*")
}
$labelNames = ($filteredLabels | ForEach-Object { $_.name }) -join ", "
$bodyValue = if ($json.body) { ($json.body -replace "`r", '') -replace "`n", ' ' } else { '' }
$bodyValue = $bodyValue -replace '\s+', ' '
# Determine if author needs thanks (not in member list)
$authorLogin = $json.author.login
$needThanks = $true
if ($memberList.Count -gt 0 -and $authorLogin) {
$needThanks = -not ($memberList -contains $authorLogin)
}
$prDetails += [PSCustomObject]@{
Id = $json.number
Title = $json.title
Labels = $labelNames
Author = $authorLogin
Url = $json.url
Body = $bodyValue
CopilotSummary = $copilot.Summary
NeedThanks = $needThanks
}
}
catch {
$err = $_
Write-Warn ("Failed to fetch PR #{0}: {1}" -f $pr, $err)
}
}
if (-not $prDetails) { Write-Warn "No PR details fetched."; exit 0 }
# Sort by Labels like original script (first label alphabetical)
$sorted = $prDetails | Sort-Object { ($_.Labels -split ',')[0] }
# Output JSON raw (optional)
$sorted | ConvertTo-Json -Depth 6 | Out-File -Encoding UTF8 $OutputJson
Write-Info "Saving CSV to $OutputCsv ..."
$sorted | Export-Csv $OutputCsv -NoTypeInformation
Write-Host "✅ Done. Generated $($prDetails.Count) PR rows." -ForegroundColor Green

View File

@@ -0,0 +1,80 @@
<#
.SYNOPSIS
Find a commit on a branch that has the same subject line as a reference commit.
.DESCRIPTION
Given a commit SHA (often from a release tag) and a branch name, this script
resolves the reference commit's subject, then searches the branch history for
commits with the exact same subject line. Useful when the release tag commit
is not reachable from your current branch history.
.PARAMETER Commit
The reference commit SHA or ref (e.g., v0.96.1 or a full SHA).
.PARAMETER Branch
The branch to search (e.g., stable or main). Defaults to stable.
.PARAMETER RepoPath
Path to the local repo. Defaults to current directory.
.EXAMPLE
pwsh ./find-commit-by-title.ps1 -Commit b62f6421845f7e5c92b8186868d98f46720db442 -Branch stable
#>
[CmdletBinding()]
param(
[Parameter(Mandatory = $true)][string]$Commit,
[string]$Branch = "stable",
[string]$RepoPath = "."
)
function Write-Info($msg) { Write-Host $msg -ForegroundColor Cyan }
function Write-Err($msg) { Write-Host $msg -ForegroundColor Red }
Push-Location $RepoPath
try {
Write-Info "Fetching latest '$Branch' from origin (with tags)..."
git fetch origin $Branch --tags | Out-Null
if ($LASTEXITCODE -ne 0) { throw "git fetch origin $Branch --tags failed" }
$commitSha = (git rev-parse --verify $Commit) 2>$null
if (-not $commitSha) { throw "Commit '$Commit' not found" }
$subject = (git show -s --format=%s $commitSha) 2>$null
if (-not $subject) { throw "Unable to read subject for '$commitSha'" }
$branchRef = $Branch
$branchSha = (git rev-parse --verify $branchRef) 2>$null
if (-not $branchSha) {
$branchRef = "origin/$Branch"
$branchSha = (git rev-parse --verify $branchRef) 2>$null
}
if (-not $branchSha) { throw "Branch '$Branch' not found" }
Write-Info "Reference commit: $commitSha"
Write-Info "Reference title: $subject"
Write-Info "Searching branch: $branchRef"
# Avoid assigning to $matches; it is the automatic variable populated by -match
$logLines = git log $branchRef --format="%H|%s" | Where-Object { $_ -match '\|' }
$results = @()
foreach ($line in $logLines) {
$parts = $line -split '\|', 2
if ($parts.Count -eq 2 -and $parts[1] -eq $subject) {
$results += [PSCustomObject]@{ Sha = $parts[0]; Title = $parts[1] }
}
}
if (-not $results -or $results.Count -eq 0) {
Write-Info "No matching commit found on $branchRef for the given title."
exit 0
}
Write-Info ("Found {0} matching commit(s):" -f $results.Count)
$results | ForEach-Object { Write-Host ("{0} {1}" -f $_.Sha, $_.Title) }
}
catch {
Write-Err $_
exit 1
}
finally {
Pop-Location
}

View File

@@ -0,0 +1,85 @@
<#
.SYNOPSIS
Group PR rows by their Labels column and emit per-label CSV files.
.DESCRIPTION
Reads a milestone PR CSV (usually produced by dump-prs-information / dump-prs-since-commit scripts),
splits rows by label list, normalizes/sorts individual labels, and writes one CSV per unique label combination.
Each output preserves the original row ordering within that subset and column order from the source.
.PARAMETER CsvPath
Input CSV containing PR rows with a 'Labels' column (comma-separated list).
.PARAMETER OutDir
Output directory to place grouped CSVs (created if missing). Default: 'grouped_csv'.
.NOTES
Label combinations are joined using ' | ' when multiple labels present. Filenames are sanitized (invalid characters,
whitespace collapsed) and truncated to <= 120 characters.
#>
param(
[string]$CsvPath = "sorted_prs.csv",
[string]$OutDir = "grouped_csv"
)
$ErrorActionPreference = 'Stop'
function Write-Info($msg) { Write-Host "[info] $msg" -ForegroundColor Cyan }
function Write-Warn($msg) { Write-Host "[warn] $msg" -ForegroundColor Yellow }
if (-not (Test-Path -LiteralPath $CsvPath)) { throw "CSV not found: $CsvPath" }
Write-Info "Reading CSV: $CsvPath"
$rows = Import-Csv -LiteralPath $CsvPath
Write-Info ("Loaded {0} rows" -f $rows.Count)
function ConvertTo-SafeFileName {
[CmdletBinding()]
param(
[Parameter(Mandatory=$true)][string]$Name
)
if ([string]::IsNullOrWhiteSpace($Name)) { return 'Unnamed' }
$s = $Name -replace '[<>:"/\\|?*]', '-' # invalid path chars
$s = $s -replace '\s+', '-' # spaces to dashes
$s = $s -replace '-{2,}', '-' # collapse dashes
$s = $s.Trim('-')
if ($s.Length -gt 120) { $s = $s.Substring(0,120).Trim('-') }
if ([string]::IsNullOrWhiteSpace($s)) { return 'Unnamed' }
return $s
}
# Build groups keyed by normalized, sorted label combinations. Preserve original CSV row order.
$groups = @{}
foreach ($row in $rows) {
$labelsRaw = $row.Labels
if ([string]::IsNullOrWhiteSpace($labelsRaw)) {
$labelParts = @('Unlabeled')
} else {
$parts = $labelsRaw -split ',' | ForEach-Object { $_.Trim() } | Where-Object { $_ }
if (-not $parts -or $parts.Count -eq 0) { $labelParts = @('Unlabeled') }
else { $labelParts = $parts | Sort-Object }
}
$key = ($labelParts -join ' | ')
if (-not $groups.ContainsKey($key)) { $groups[$key] = New-Object System.Collections.ArrayList }
[void]$groups[$key].Add($row)
}
if (-not (Test-Path -LiteralPath $OutDir)) {
Write-Info "Creating output directory: $OutDir"
New-Item -ItemType Directory -Path $OutDir | Out-Null
}
Write-Info ("Generating {0} grouped CSV file(s) into: {1}" -f $groups.Count, $OutDir)
foreach ($key in $groups.Keys) {
$labelParts = if ($key -eq 'Unlabeled') { @('Unlabeled') } else { $key -split '\s\|\s' }
$safeName = ($labelParts | ForEach-Object { ConvertTo-SafeFileName -Name $_ }) -join '-'
$filePath = Join-Path $OutDir ("$safeName.csv")
# Keep same columns and order
$groups[$key] | Export-Csv -LiteralPath $filePath -NoTypeInformation -Encoding UTF8
}
Write-Info "Done. Sample output files:"
Get-ChildItem -LiteralPath $OutDir | Select-Object -First 10 Name | Format-Table -HideTableHeaders

13
.vscode/mcp.json vendored Normal file
View File

@@ -0,0 +1,13 @@
{
"servers": {
"github-artifacts": {
"command": "node",
"args": [
"tools/mcp/github-artifacts/launch.js"
],
"env": {
"GITHUB_TOKEN": "${env:GITHUB_TOKEN}"
}
}
}
}

165
AGENTS.md Normal file
View File

@@ -0,0 +1,165 @@
---
description: 'Top-level AI contributor guidance for developing PowerToys - a collection of Windows productivity utilities'
applyTo: '**'
---
# PowerToys AI Contributor Guide
This is the top-level guidance for AI contributions to PowerToys. Keep changes atomic, follow existing patterns, and cite exact paths in PRs.
## Overview
PowerToys is a set of utilities for power users to tune and streamline their Windows experience.
| Area | Location | Description |
|------|----------|-------------|
| Runner | `src/runner/` | Main executable, tray icon, module loader, hotkey management |
| Settings UI | `src/settings-ui/` | WinUI/WPF configuration app communicating via named pipes |
| Modules | `src/modules/` | Individual PowerToys utilities (each in its own subfolder) |
| Common Libraries | `src/common/` | Shared code: logging, IPC, settings, DPI, telemetry, utilities |
| Build Tools | `tools/build/` | Build scripts and automation |
| Documentation | `doc/devdocs/` | Developer documentation |
| Installer | `installer/` | WiX-based installer projects |
For architecture details and module types, see [Architecture Overview](doc/devdocs/core/architecture.md).
## Conventions
For detailed coding conventions, see:
- [Coding Guidelines](doc/devdocs/development/guidelines.md) - Dependencies, testing, PR management
- [Coding Style](doc/devdocs/development/style.md) - Formatting, C++/C#/XAML style rules
- [Logging](doc/devdocs/development/logging.md) - C++ spdlog and C# Logger usage
### Component-Specific Instructions
These instruction files are automatically applied when working in their respective areas:
- [Runner & Settings UI](.github/instructions/runner-settings-ui.instructions.md) - IPC contracts, schema migrations
- [Common Libraries](.github/instructions/common-libraries.instructions.md) - ABI stability, shared code guidelines
## Build
### Prerequisites
- Visual Studio 2022 17.4+
- Windows 10 1803+ (April 2018 Update or newer)
- Initialize submodules once: `git submodule update --init --recursive`
### Build Commands
| Task | Command |
|------|---------|
| First build / NuGet restore | `tools\build\build-essentials.cmd` |
| Build current folder | `tools\build\build.cmd` |
| Build with options | `build.ps1 -Platform x64 -Configuration Release` |
### Build Discipline
1. One terminal per operation (build → test). Do not switch or open new ones mid-flow
2. After making changes, `cd` to the project folder that changed (`.csproj`/`.vcxproj`)
3. Use scripts to build: `tools/build/build.ps1` or `tools/build/build.cmd`
4. For first build or missing NuGet packages, run `build-essentials.cmd` first
5. **Exit code 0 = success; non-zero = failure** - treat this as absolute
6. On failure, read the errors log: `build.<config>.<platform>.errors.log`
7. Do not start tests or launch Runner until the build succeeds
### Build Logs
Located next to the solution/project being built:
- `build.<configuration>.<platform>.errors.log` - errors only (check this first)
- `build.<configuration>.<platform>.all.log` - full log
- `build.<configuration>.<platform>.trace.binlog` - for MSBuild Structured Log Viewer
For complete details, see [Build Guidelines](tools/build/BUILD-GUIDELINES.md).
## Tests
### Test Discovery
- Find test projects by product code prefix (e.g., `FancyZones`, `AdvancedPaste`)
- Look for sibling folders or 1-2 levels up named `<Product>*UnitTests` or `<Product>*UITests`
### Running Tests
1. **Build the test project first**, wait for exit code 0
2. Run via VS Test Explorer (`Ctrl+E, T`) or `vstest.console.exe` with filters (see the sketch after this list)
3. **Avoid `dotnet test`** in this repo use VS Test Explorer or vstest.console.exe
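For the `vstest.console.exe` route, a sketch (the DLL path and filter below are illustrative; adjust them to your build output and test project):
```powershell
# Run one test project's tests after a successful Release|x64 build
& vstest.console.exe '.\x64\Release\FancyZones.UnitTests.dll' /TestCaseFilter:"FullyQualifiedName~FancyZones"
```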
### Test Types
| Type | Requirements | Setup |
|------|--------------|-------|
| Unit Tests | Standard dev environment | None |
| UI Tests | WinAppDriver v1.2.1, Developer Mode | Install from [WinAppDriver releases](https://github.com/microsoft/WinAppDriver/releases/tag/v1.2.1) |
| Fuzz Tests | OneFuzz, .NET 8 | See [Fuzzing Tests](doc/devdocs/tools/fuzzingtesting.md) |
### Test Discipline
1. Add or adjust tests when changing behavior
2. If tests are skipped, state why (e.g., comment-only change, string rename)
3. New modules handling file I/O or user input **must** implement fuzzing tests
### Special Requirements
- **Mouse Without Borders**: Requires 2+ physical computers (not VMs)
- **Multi-monitor utilities**: Test with 2+ monitors, different DPI settings
For UI test setup details, see [UI Tests](doc/devdocs/development/ui-tests.md).
## Boundaries
### Ask for Clarification When
- Ambiguous spec after scanning relevant docs
- Cross-module impact (shared enum/struct) is unclear
- Security, elevation, or installer changes involved
- GPO or policy handling modifications needed
### Areas Requiring Extra Care
| Area | Concern | Reference |
|------|---------|-----------|
| `src/common/` | ABI breaks | [Common Libraries Instructions](.github/instructions/common-libraries.instructions.md) |
| `src/runner/`, `src/settings-ui/` | IPC contracts, schema | [Runner & Settings UI Instructions](.github/instructions/runner-settings-ui.instructions.md) |
| Installer files | Release impact | Careful review required |
| Elevation/GPO logic | Security | Confirm no regression in policy handling |
### What NOT to Do
- Don't merge incomplete features into main (use feature branches)
- Don't break IPC/JSON contracts without updating both runner and settings-ui
- Don't add noisy logs in hot paths
- Don't introduce third-party deps without PM approval and `NOTICE.md` update
## Validation Checklist
Before finishing, verify:
- [ ] Build clean with exit code 0
- [ ] Tests updated and passing locally
- [ ] No unintended ABI breaks or schema changes
- [ ] IPC contracts consistent between runner and settings-ui
- [ ] New dependencies added to `NOTICE.md`
- [ ] PR is atomic (one logical change), with issue linked
## Documentation Index
### Core Architecture
- [Architecture Overview](doc/devdocs/core/architecture.md)
- [Runner](doc/devdocs/core/runner.md)
- [Settings System](doc/devdocs/core/settings/readme.md)
- [Module Interface](doc/devdocs/modules/interface.md)
### Development
- [Coding Guidelines](doc/devdocs/development/guidelines.md)
- [Coding Style](doc/devdocs/development/style.md)
- [Logging](doc/devdocs/development/logging.md)
- [UI Tests](doc/devdocs/development/ui-tests.md)
- [Fuzzing Tests](doc/devdocs/tools/fuzzingtesting.md)
### Build & Tools
- [Build Guidelines](tools/build/BUILD-GUIDELINES.md)
- [Tools Overview](doc/devdocs/tools/readme.md)
### Instructions (Auto-Applied)
- [Runner & Settings UI](.github/instructions/runner-settings-ui.instructions.md)
- [Common Libraries](.github/instructions/common-libraries.instructions.md)

View File

@@ -1,244 +0,0 @@
# Windows App SDK Semantic Search API Summary
## 1. Environment and Dependencies
| Item | Version/Value |
|------|---------|
| **Windows App SDK** | `2.0.0-experimental3` |
| **.NET** | `net9.0-windows10.0.26100.0` |
| **AI Search NuGet** | `Microsoft.WindowsAppSDK.AI` (2.0.57-experimental) |
| **Namespace** | `Microsoft.Windows.AI.Search.Experimental.AppContentIndex` |
| **App type** | WinUI 3 packaged MSIX app |
---
## 2. Core APIs
### 2.1 Index Management
```csharp
// Create or open an index
var result = AppContentIndexer.GetOrCreateIndex("indexName");
if (result.Succeeded) {
    _indexer = result.Indexer;
    // result.Status: CreatedNew | OpenedExisting
}
// Wait for the index capabilities to become ready
await _indexer.WaitForIndexCapabilitiesAsync();
// Wait for the index to go idle (indexing finished)
await _indexer.WaitForIndexingIdleAsync(TimeSpan.FromSeconds(120));
// Cleanup
_indexer.RemoveAll(); // remove all indexed content
_indexer.Remove(id);  // remove a single item
_indexer.Dispose();
```
### 2.2 Adding Content to the Index
```csharp
// Index text → automatically builds TextLexical + TextSemantic indexes
IndexableAppContent textContent = AppManagedIndexableAppContent.CreateFromString(id, text);
_indexer.AddOrUpdate(textContent);
// Index an image → automatically builds ImageSemantic + ImageOcr indexes
IndexableAppContent imageContent = AppManagedIndexableAppContent.CreateFromBitmap(id, softwareBitmap);
_indexer.AddOrUpdate(imageContent);
```
### 2.3 Queries
```csharp
// Text query
TextQueryOptions options = new TextQueryOptions {
    Language = "en-US", // optional
    MatchScope = QueryMatchScope.Unconstrained, // match scope
    TextMatchType = TextLexicalMatchType.Fuzzy // Fuzzy | Exact
};
AppIndexTextQuery query = _indexer.CreateTextQuery(searchText, options);
IReadOnlyList<TextQueryMatch> matches = query.GetNextMatches(5);
// Image query
ImageQueryOptions imgOptions = new ImageQueryOptions {
    MatchScope = QueryMatchScope.Unconstrained,
    ImageOcrTextMatchType = TextLexicalMatchType.Fuzzy
};
AppIndexImageQuery imgQuery = _indexer.CreateImageQuery(searchText, imgOptions);
IReadOnlyList<ImageQueryMatch> imgMatches = imgQuery.GetNextMatches(5);
```
### 2.4 Capability Checks (Read-Only)
```csharp
IndexCapabilities capabilities = _indexer.GetIndexCapabilities();
bool textLexicalOK = capabilities.GetCapabilityState(IndexCapability.TextLexical)
.InitializationStatus == IndexCapabilityInitializationStatus.Initialized;
bool textSemanticOK = capabilities.GetCapabilityState(IndexCapability.TextSemantic)
.InitializationStatus == IndexCapabilityInitializationStatus.Initialized;
bool imageSemanticOK = capabilities.GetCapabilityState(IndexCapability.ImageSemantic)
.InitializationStatus == IndexCapabilityInitializationStatus.Initialized;
bool imageOcrOK = capabilities.GetCapabilityState(IndexCapability.ImageOcr)
.InitializationStatus == IndexCapabilityInitializationStatus.Initialized;
```
---
## 3. The Four Index Capabilities
| Capability | Description | Triggered by |
|------|------|----------|
| `TextLexical` | Lexical/keyword search | Automatic via CreateFromString() |
| `TextSemantic` | AI semantic search (embeddings) | Automatic via CreateFromString() |
| `ImageSemantic` | Semantic image search | Automatic via CreateFromBitmap() |
| `ImageOcr` | OCR text search over images | Automatic via CreateFromBitmap() |
---
## 4. Configurable Options (Limited)
### TextQueryOptions
| Property | Type | Description |
|------|------|------|
| `Language` | string | Query language (optional, e.g. "en-US") |
| `MatchScope` | QueryMatchScope | Unconstrained / Region / ContentItem |
| `TextMatchType` | TextLexicalMatchType | **Fuzzy** / Exact (affects Lexical only) |
### ImageQueryOptions
| Property | Type | Description |
|------|------|------|
| `MatchScope` | QueryMatchScope | Unconstrained / Region / ContentItem |
| `ImageOcrTextMatchType` | TextLexicalMatchType | **Fuzzy** / Exact (affects OCR only) |
### Enum Values
**QueryMatchScope:**
- `Unconstrained` - No constraint; uses Lexical + Semantic together
- `Region` - Restrict matching to a specific region
- `ContentItem` - Restrict matching to a single content item
**TextLexicalMatchType:**
- `Fuzzy` - Fuzzy matching; tolerates typos and near matches
- `Exact` - Exact matching; the text must match exactly
---
## 5. Key Limitations ⚠️
| Limitation | Description |
|------|------|
| **Cannot select Semantic/Lexical individually** | The system automatically uses all available capabilities together |
| **Fuzzy/Exact affects Lexical only** | Has no effect on Semantic search |
| **Capability checks are read-only** | `GetIndexCapabilities()` can inspect, but not control, capabilities |
| **No similarity threshold** | Cannot set a threshold for Semantic matches |
| **No result-ordering control** | Cannot sort by relevance or any other criterion |
| **Language must be passed manually** | Not auto-detected; the developer must specify it |
| **No relevance scores** | Query results do not return match scores |
---
## 6. Typical Usage Flow
```
┌─────────────────────────────────────────────────────────────┐
│ At app startup                                              │
├─────────────────────────────────────────────────────────────┤
│ 1. GetOrCreateIndex("name")         // create/open the index│
│ 2. WaitForIndexCapabilitiesAsync()  // wait until ready     │
│ 3. GetIndexCapabilities()           // check capabilities   │
│ 4. IndexAll()                       // index all data       │
│    ├─ CreateFromString() × N                                │
│    └─ CreateFromBitmap() × N                                │
│ 5. WaitForIndexingIdleAsync()       // wait for completion  │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ Query at runtime                                            │
├─────────────────────────────────────────────────────────────┤
│ 1. CreateTextQuery(text, options)   // create the query     │
│ 2. query.GetNextMatches(N)          // fetch results        │
│ 3. Handle TextQueryMatch / ImageQueryMatch                  │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ At app exit                                                 │
├─────────────────────────────────────────────────────────────┤
│ 1. _indexer.RemoveAll()             // clear the index      │
│ 2. _indexer.Dispose()               // release resources    │
└─────────────────────────────────────────────────────────────┘
```
---
## 7. Processing Query Results
```csharp
// Text query results
foreach (var match in textMatches)
{
    if (match.ContentKind == QueryMatchContentKind.AppManagedText)
    {
        AppManagedTextQueryMatch textResult = (AppManagedTextQueryMatch)match;
        string contentId = match.ContentId;  // content ID
        int offset = textResult.TextOffset;  // offset of the matched text
        int length = textResult.TextLength;  // length of the matched text
    }
}
// Image query results
foreach (var match in imageMatches)
{
    if (match.ContentKind == QueryMatchContentKind.AppManagedImage)
    {
        AppManagedImageQueryMatch imageResult = (AppManagedImageQueryMatch)match;
        string contentId = imageResult.ContentId; // image ID
    }
}
```
---
## 8. Listening for Capability Changes
```csharp
// Listen for index capability changes
_indexer.Listener.IndexCapabilitiesChanged += (indexer, capabilities) =>
{
    // Re-check capability state and update the UI
    LoadAppIndexCapabilities();
};
```
---
## 9. Conclusion
This is a **highly encapsulated black-box API**:
### Pros ✅
- Simple to use; search works with just a few lines of code
- Handles Lexical + Semantic automatically
- Supports multimodal search across text and images
- System-level integration; no extra model deployment needed
### Cons ❌
- No fine-grained control over the search type
- Cannot use Semantic Search on its own
- Limited options; lacks advanced configuration
- Experimental API; subject to change
### Alternatives
**If you need pure Semantic Search (vector search)**, consider:
- Generating vectors directly with an embedding model
- Pairing them with a vector database (Azure Cosmos DB, FAISS, Qdrant, etc.)
---
## 10. Related NuGet Packages
```xml
<PackageReference Include="Microsoft.WindowsAppSDK" Version="2.0.0-experimental3" />
<PackageReference Include="Microsoft.WindowsAppSDK.AI" Version="2.0.57-experimental" />
```
---
*Document generated: 2026-01-21*

View File

@@ -1,496 +0,0 @@
# Common.Search Library Specification
## Overview
This document describes the refactored design of the `Common.Search` library. The goal is to provide a generic, pluggable search framework that supports multiple search engine implementations (Fuzzy Match, Semantic Search, etc.).
## Goals
1. **Decoupled** - search engines are fully decoupled from data sources
2. **Pluggable** - different search engine implementations can be swapped in
3. **Generic** - not bound to a specific business type (such as SettingEntry)
4. **Composable** - multiple engines can be combined (instant Fuzzy + deferred Semantic)
5. **Reusable** - usable by multiple PowerToys modules
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│                  Consumer (e.g., Settings.UI)                   │
├─────────────────────────────────────────────────────────────────┤
│ SettingsDataProvider          ← business-specific data loading  │
│ SettingsSearchService         ← business-specific search service│
│ SettingEntry : ISearchable    ← implements the search contract  │
└─────────────────────────────────────────────────────────────────┘
                                │ uses
┌─────────────────────────────────────────────────────────────────┐
│                     Common.Search (Library)                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Core Abstractions                                          │ │
│ ├────────────────────────────────────────────────────────────┤ │
│ │ ISearchable       ← contract for searchable content        │ │
│ │ ISearchEngine<T>  ← search engine interface                │ │
│ │ SearchResult<T>   ← unified result model                   │ │
│ │ SearchOptions     ← search options                         │ │
│ └────────────────────────────────────────────────────────────┘ │
│                                                                 │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Implementations                                            │ │
│ ├────────────────────────────────────────────────────────────┤ │
│ │ FuzzSearch/                                                │ │
│ │ ├── FuzzSearchEngine<T>   ← in-memory fuzzy search         │ │
│ │ ├── StringMatcher         ← existing fuzzy-match algorithm │ │
│ │ └── MatchResult           ← fuzzy match result             │ │
│ │                                                            │ │
│ │ SemanticSearch/                                            │ │
│ │ ├── SemanticSearchEngine  ← Windows AI Search wrapper      │ │
│ │ └── SemanticSearchCapabilities                             │ │
│ │                                                            │ │
│ │ CompositeSearch/                                           │ │
│ │ └── CompositeSearchEngine<T> ← multi-engine composition    │ │
│ └────────────────────────────────────────────────────────────┘ │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
## Core Interfaces
### ISearchable
Defines the minimal contract for searchable content.
```csharp
namespace Common.Search;
/// <summary>
/// Defines a searchable item that can be indexed and searched.
/// </summary>
public interface ISearchable
{
/// <summary>
/// Gets the unique identifier for this item.
/// </summary>
string Id { get; }
/// <summary>
/// Gets the primary searchable text (e.g., title, header).
/// </summary>
string SearchableText { get; }
/// <summary>
/// Gets optional secondary searchable text (e.g., description).
/// Returns null if not available.
/// </summary>
string? SecondarySearchableText { get; }
}
```
### ISearchEngine&lt;T&gt;
The core search engine interface.
```csharp
namespace Common.Search;
/// <summary>
/// Defines a pluggable search engine that can index and search items.
/// </summary>
/// <typeparam name="T">The type of items to search, must implement ISearchable.</typeparam>
public interface ISearchEngine<T> : IDisposable
where T : ISearchable
{
/// <summary>
/// Gets a value indicating whether the engine is ready to search.
/// </summary>
bool IsReady { get; }
/// <summary>
/// Gets the engine capabilities.
/// </summary>
SearchEngineCapabilities Capabilities { get; }
/// <summary>
/// Initializes the search engine.
/// </summary>
Task InitializeAsync(CancellationToken cancellationToken = default);
/// <summary>
/// Indexes a single item.
/// </summary>
Task IndexAsync(T item, CancellationToken cancellationToken = default);
/// <summary>
/// Indexes multiple items in batch.
/// </summary>
Task IndexBatchAsync(IEnumerable<T> items, CancellationToken cancellationToken = default);
/// <summary>
/// Removes an item from the index by its ID.
/// </summary>
Task RemoveAsync(string id, CancellationToken cancellationToken = default);
/// <summary>
/// Clears all indexed items.
/// </summary>
Task ClearAsync(CancellationToken cancellationToken = default);
/// <summary>
/// Searches for items matching the query.
/// </summary>
Task<IReadOnlyList<SearchResult<T>>> SearchAsync(
string query,
SearchOptions? options = null,
CancellationToken cancellationToken = default);
}
```
### SearchResult&lt;T&gt;
The unified search result model.
```csharp
namespace Common.Search;
/// <summary>
/// Represents a search result with the matched item and scoring information.
/// </summary>
public sealed class SearchResult<T>
where T : ISearchable
{
/// <summary>
/// Gets the matched item.
/// </summary>
public required T Item { get; init; }
/// <summary>
/// Gets the relevance score (higher is more relevant).
/// </summary>
public required double Score { get; init; }
/// <summary>
/// Gets the type of match that produced this result.
/// </summary>
public required SearchMatchKind MatchKind { get; init; }
/// <summary>
/// Gets the match details for highlighting (optional).
/// </summary>
public IReadOnlyList<MatchSpan>? MatchSpans { get; init; }
}
/// <summary>
/// Represents a span of matched text for highlighting.
/// </summary>
public readonly record struct MatchSpan(int Start, int Length);
/// <summary>
/// Specifies the kind of match that produced a search result.
/// </summary>
public enum SearchMatchKind
{
/// <summary>Exact text match.</summary>
Exact,
/// <summary>Fuzzy/approximate text match.</summary>
Fuzzy,
/// <summary>Semantic/AI-based match.</summary>
Semantic,
/// <summary>Combined match from multiple engines.</summary>
Composite,
}
```
### SearchOptions
Options that configure search behavior.
```csharp
namespace Common.Search;
/// <summary>
/// Options for configuring search behavior.
/// </summary>
public sealed class SearchOptions
{
/// <summary>
/// Gets or sets the maximum number of results to return.
/// Default is 20.
/// </summary>
public int MaxResults { get; set; } = 20;
/// <summary>
/// Gets or sets the minimum score threshold (0.0 to 1.0).
/// Results below this score are filtered out.
/// Default is 0.0 (no filtering).
/// </summary>
public double MinScore { get; set; } = 0.0;
/// <summary>
/// Gets or sets the language hint for the search (e.g., "en-US").
/// </summary>
public string? Language { get; set; }
/// <summary>
/// Gets or sets a value indicating whether to include match spans for highlighting.
/// Default is false.
/// </summary>
public bool IncludeMatchSpans { get; set; } = false;
}
```
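A short usage sketch, illustrative only: `engine` is assumed to be any `ISearchEngine<T>` from this spec, and the query string is arbitrary.
```csharp
var options = new SearchOptions
{
    MaxResults = 10,
    MinScore = 0.3,           // drop weak matches
    IncludeMatchSpans = true, // request spans for highlighting
    Language = "en-US",
};

var results = await engine.SearchAsync("clipboard history", options);
foreach (var result in results)
{
    Console.WriteLine($"{result.Item.Id}: {result.Score:F2} ({result.MatchKind})");
}
```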
### SearchEngineCapabilities
Describes what a search engine can do.
```csharp
namespace Common.Search;
/// <summary>
/// Describes the capabilities of a search engine.
/// </summary>
public sealed class SearchEngineCapabilities
{
/// <summary>
/// Gets a value indicating whether the engine supports fuzzy matching.
/// </summary>
public bool SupportsFuzzyMatch { get; init; }
/// <summary>
/// Gets a value indicating whether the engine supports semantic search.
/// </summary>
public bool SupportsSemanticSearch { get; init; }
/// <summary>
/// Gets a value indicating whether the engine persists the index to disk.
/// </summary>
public bool PersistsIndex { get; init; }
/// <summary>
/// Gets a value indicating whether the engine supports incremental indexing.
/// </summary>
public bool SupportsIncrementalIndex { get; init; }
/// <summary>
/// Gets a value indicating whether the engine supports match span highlighting.
/// </summary>
public bool SupportsMatchSpans { get; init; }
}
```
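One plausible consumer pattern (not prescribed by this spec) is to gate features on these flags: prefer a semantic-capable engine when one is ready, and only request match spans from engines that can produce them.
```csharp
// Pick the first ready engine that offers semantic search; otherwise fall back to fuzzy.
static ISearchEngine<SettingEntry> ChooseEngine(IReadOnlyList<ISearchEngine<SettingEntry>> engines)
{
    return engines.FirstOrDefault(e => e.IsReady && e.Capabilities.SupportsSemanticSearch)
        ?? engines.First(e => e.Capabilities.SupportsFuzzyMatch);
}

static SearchOptions BuildOptions(ISearchEngine<SettingEntry> engine) => new()
{
    // Only ask for spans when the engine can actually produce them.
    IncludeMatchSpans = engine.Capabilities.SupportsMatchSpans,
};
```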
## Implementations
### FuzzSearchEngine&lt;T&gt;
An in-memory search engine built on the existing StringMatcher.
**Characteristics:**
- Purely in-memory; no persistence
- Immediate response (millisecond-level)
- Supports match spans for highlighting
- Character-based fuzzy matching
**Capabilities:**
```csharp
new SearchEngineCapabilities
{
SupportsFuzzyMatch = true,
SupportsSemanticSearch = false,
PersistsIndex = false,
SupportsIncrementalIndex = true,
SupportsMatchSpans = true,
}
```
### SemanticSearchEngine
A semantic search engine built on the Windows App SDK AI Search API.
**Characteristics:**
- System-managed persistent index
- AI-driven semantic understanding
- Requires model initialization (can be slow)
- May be unavailable (depends on system support)
**Capabilities:**
```csharp
new SearchEngineCapabilities
{
    SupportsFuzzyMatch = true, // the API provides both lexical and semantic matching
SupportsSemanticSearch = true,
PersistsIndex = true,
SupportsIncrementalIndex = true,
    SupportsMatchSpans = false, // the API does not return detailed match positions
}
```
**Note:** SemanticSearchEngine is not generic, because it must convert content to strings before storing them in the system index. The implementation extracts that text through the `ISearchable` interface.
### CompositeSearchEngine&lt;T&gt;
Combines multiple search engines, with support for fallback and result merging.
```csharp
namespace Common.Search;
/// <summary>
/// A search engine that combines results from multiple engines.
/// </summary>
public sealed class CompositeSearchEngine<T> : ISearchEngine<T>
where T : ISearchable
{
/// <summary>
/// Strategy for combining results from multiple engines.
/// </summary>
public enum CombineStrategy
{
/// <summary>Use first ready engine only.</summary>
FirstReady,
/// <summary>Merge results from all ready engines.</summary>
MergeAll,
/// <summary>Use primary, fallback to secondary if primary not ready.</summary>
PrimaryWithFallback,
}
}
```
**Typical usage:** fuzzy search serves immediate responses, and semantic search enhances the results once it is ready; see the dispatch sketch below.
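A minimal sketch of how `SearchAsync` might dispatch on the strategy (assuming private fields `_primary`, `_fallback`, and `_strategy`; the spec intentionally leaves the exact merge behavior open):
```csharp
public async Task<IReadOnlyList<SearchResult<T>>> SearchAsync(
    string query, SearchOptions? options = null, CancellationToken cancellationToken = default)
{
    switch (_strategy)
    {
        case CombineStrategy.PrimaryWithFallback:
            // Prefer the primary engine (e.g., semantic) once it is ready.
            var engine = _primary.IsReady ? _primary : _fallback;
            return await engine.SearchAsync(query, options, cancellationToken);

        case CombineStrategy.FirstReady:
            foreach (var candidate in new[] { _primary, _fallback })
            {
                if (candidate.IsReady)
                {
                    return await candidate.SearchAsync(query, options, cancellationToken);
                }
            }

            return Array.Empty<SearchResult<T>>();

        case CombineStrategy.MergeAll:
        default:
            // Collect results from every ready engine, then rank by score.
            var merged = new List<SearchResult<T>>();
            foreach (var candidate in new[] { _primary, _fallback })
            {
                if (candidate.IsReady)
                {
                    merged.AddRange(await candidate.SearchAsync(query, options, cancellationToken));
                }
            }

            return merged.OrderByDescending(r => r.Score).ToList();
    }
}
```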
## Directory Structure
```
src/common/Common.Search/
├── Common.Search.csproj
├── GlobalSuppressions.cs
├── ISearchable.cs
├── ISearchEngine.cs
├── SearchResult.cs
├── SearchOptions.cs
├── SearchEngineCapabilities.cs
├── SearchMatchKind.cs
├── MatchSpan.cs
├── FuzzSearch/
│ ├── FuzzSearchEngine.cs
│ ├── StringMatcher.cs (existing)
│ ├── MatchOption.cs (existing)
│ ├── MatchResult.cs (existing)
│ └── SearchPrecisionScore.cs (existing)
├── SemanticSearch/
│ ├── SemanticSearchEngine.cs
│ ├── SemanticSearchCapabilities.cs
│ └── SemanticSearchAdapter.cs (adapts ISearchable to Windows API)
└── CompositeSearch/
└── CompositeSearchEngine.cs
```
## Consumer Usage (Settings.UI)
### SettingEntry Implements ISearchable
```csharp
// Settings.UI.Library/SettingEntry.cs
public struct SettingEntry : ISearchable
{
// Existing properties...
// ISearchable implementation
public string Id => ElementUid ?? $"{PageTypeName}|{ElementName}";
public string SearchableText => Header ?? string.Empty;
public string? SecondarySearchableText => Description;
}
```
### SettingsSearchService
```csharp
// Settings.UI/Services/SettingsSearchService.cs
public sealed class SettingsSearchService : IDisposable
{
private readonly ISearchEngine<SettingEntry> _engine;
public SettingsSearchService(ISearchEngine<SettingEntry> engine)
{
_engine = engine;
}
public async Task InitializeAsync(IEnumerable<SettingEntry> entries)
{
await _engine.InitializeAsync();
await _engine.IndexBatchAsync(entries);
}
public async Task<List<SettingEntry>> SearchAsync(string query, CancellationToken ct = default)
{
var results = await _engine.SearchAsync(query, cancellationToken: ct);
return results.Select(r => r.Item).ToList();
}
}
```
### Startup Configuration
```csharp
// Option 1: Fuzzy only (default, immediate)
var engine = new FuzzSearchEngine<SettingEntry>();
// Option 2: Semantic only (requires Windows AI)
var engine = new SemanticSearchAdapter<SettingEntry>("PowerToysSettings");
// Option 3: Composite (best of both worlds)
var engine = new CompositeSearchEngine<SettingEntry>(
primary: new SemanticSearchAdapter<SettingEntry>("PowerToysSettings"),
fallback: new FuzzSearchEngine<SettingEntry>(),
strategy: CombineStrategy.PrimaryWithFallback
);
var searchService = new SettingsSearchService(engine);
await searchService.InitializeAsync(settingEntries);
```
## Migration Plan
### Phase 1: Core Abstractions
1. Create the core abstractions: `ISearchable`, `ISearchEngine<T>`, `SearchResult<T>`, etc.
2. Keep the existing FuzzSearch code unchanged
### Phase 2: FuzzSearchEngine&lt;T&gt;
1. Create the generic `FuzzSearchEngine<T>` implementation
2. Reuse the existing `StringMatcher` internally
### Phase 3: SemanticSearchEngine
1. Complete the existing `SemanticSearchEngine` implementation
2. Create `SemanticSearchAdapter<T>` to bridge the generic interface
### Phase 4: Settings.UI Migration
1. Implement `ISearchable` on `SettingEntry`
2. Create `SettingsSearchService`
3. Migrate `SearchIndexService` to the new architecture
4. Keep the API backward compatible and deprecate old methods gradually
### Phase 5: CompositeSearchEngine (Optional)
1. Implement the composite engine
2. Support hybrid fuzzy + semantic search
## Open Questions
1. **Do we need image search?** The current SemanticSearchEngine supports `IndexImage`, but `ISearchable` only covers text. If images are needed, an `IImageSearchable` extension may be required.
2. **Result deduplication strategy?** When CompositeEngine merges results, the same item may be matched by multiple engines. How should duplicates be removed and scores combined? (One possible approach is sketched after this list.)
3. **Async vs. sync?** FuzzSearch could run entirely synchronously; is it appropriate for the interface to use `Task` uniformly? Should synchronous overloads be provided?
4. **Index update strategy?** When the Settings content changes (e.g., the user switches languages), how can the index be updated efficiently?
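For question 2, one plausible policy (a sketch, not a decision): group by `Item.Id`, keep the highest-scoring hit, and mark items matched by more than one engine as `Composite`.
```csharp
static IReadOnlyList<SearchResult<T>> MergeAndDeduplicate<T>(IEnumerable<SearchResult<T>> results)
    where T : ISearchable
{
    return results
        .GroupBy(r => r.Item.Id)
        .Select(group =>
        {
            var best = group.OrderByDescending(r => r.Score).First();
            return group.Count() == 1 ? best : new SearchResult<T>
            {
                Item = best.Item,
                Score = best.Score, // alternative: sum or weighted-average the scores
                MatchKind = SearchMatchKind.Composite,
                MatchSpans = best.MatchSpans,
            };
        })
        .OrderByDescending(r => r.Score)
        .ToList();
}
```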
---
*Document Version: 1.0*
*Last Updated: 2026-01-21*

View File

@@ -4,11 +4,5 @@
<PropertyGroup>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>
<!-- Exclude SemanticSearch until experimental SDK is available -->
<ItemGroup>
<Compile Remove="SemanticSearch\**" />
</ItemGroup>
</Project>

View File

@@ -1,306 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
using System.Collections.Concurrent;
using System.Globalization;
using System.Text;
namespace Common.Search.FuzzSearch;
/// <summary>
/// A search engine that uses fuzzy string matching for search.
/// </summary>
/// <typeparam name="T">The type of items to search.</typeparam>
public sealed class FuzzSearchEngine<T> : ISearchEngine<T>
where T : ISearchable
{
private readonly object _lockObject = new();
private readonly Dictionary<string, T> _itemsById = new();
private readonly Dictionary<string, (string PrimaryNorm, string? SecondaryNorm)> _normalizedCache = new();
private bool _isReady;
private bool _disposed;
/// <inheritdoc/>
public bool IsReady
{
get
{
lock (_lockObject)
{
return _isReady;
}
}
}
/// <inheritdoc/>
public SearchEngineCapabilities Capabilities { get; } = new()
{
SupportsFuzzyMatch = true,
SupportsSemanticSearch = false,
PersistsIndex = false,
SupportsIncrementalIndex = true,
SupportsMatchSpans = true,
};
/// <inheritdoc/>
public Task InitializeAsync(CancellationToken cancellationToken = default)
{
lock (_lockObject)
{
_isReady = true;
}
return Task.CompletedTask;
}
/// <inheritdoc/>
public Task IndexAsync(T item, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(item);
ThrowIfDisposed();
lock (_lockObject)
{
_itemsById[item.Id] = item;
_normalizedCache[item.Id] = (
NormalizeString(item.SearchableText),
item.SecondarySearchableText != null ? NormalizeString(item.SecondarySearchableText) : null
);
}
return Task.CompletedTask;
}
/// <inheritdoc/>
public Task IndexBatchAsync(IEnumerable<T> items, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(items);
ThrowIfDisposed();
lock (_lockObject)
{
foreach (var item in items)
{
if (cancellationToken.IsCancellationRequested)
{
break;
}
_itemsById[item.Id] = item;
_normalizedCache[item.Id] = (
NormalizeString(item.SearchableText),
item.SecondarySearchableText != null ? NormalizeString(item.SecondarySearchableText) : null
);
}
}
return Task.CompletedTask;
}
/// <inheritdoc/>
public Task RemoveAsync(string id, CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(id);
ThrowIfDisposed();
lock (_lockObject)
{
_itemsById.Remove(id);
_normalizedCache.Remove(id);
}
return Task.CompletedTask;
}
/// <inheritdoc/>
public Task ClearAsync(CancellationToken cancellationToken = default)
{
ThrowIfDisposed();
lock (_lockObject)
{
_itemsById.Clear();
_normalizedCache.Clear();
}
return Task.CompletedTask;
}
/// <inheritdoc/>
public Task<IReadOnlyList<SearchResult<T>>> SearchAsync(
string query,
SearchOptions? options = null,
CancellationToken cancellationToken = default)
{
ThrowIfDisposed();
if (string.IsNullOrWhiteSpace(query))
{
return Task.FromResult<IReadOnlyList<SearchResult<T>>>(Array.Empty<SearchResult<T>>());
}
options ??= new SearchOptions();
var normalizedQuery = NormalizeString(query);
List<KeyValuePair<string, T>> snapshot;
lock (_lockObject)
{
if (_itemsById.Count == 0)
{
return Task.FromResult<IReadOnlyList<SearchResult<T>>>(Array.Empty<SearchResult<T>>());
}
snapshot = _itemsById.ToList();
}
var bag = new ConcurrentBag<SearchResult<T>>();
var po = new ParallelOptions
{
CancellationToken = cancellationToken,
MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount - 1),
};
try
{
Parallel.ForEach(snapshot, po, kvp =>
{
var (primaryNorm, secondaryNorm) = GetNormalizedTexts(kvp.Key);
var primaryResult = StringMatcher.FuzzyMatch(normalizedQuery, primaryNorm);
double score = primaryResult.Score;
List<int>? matchData = primaryResult.MatchData;
if (!string.IsNullOrEmpty(secondaryNorm))
{
var secondaryResult = StringMatcher.FuzzyMatch(normalizedQuery, secondaryNorm);
if (secondaryResult.Success && secondaryResult.Score * 0.8 > score)
{
score = secondaryResult.Score * 0.8;
matchData = null; // Secondary matches don't have primary text spans
}
}
if (score > options.MinScore)
{
var result = new SearchResult<T>
{
Item = kvp.Value,
Score = score,
MatchKind = SearchMatchKind.Fuzzy,
MatchSpans = options.IncludeMatchSpans && matchData != null
? ConvertToMatchSpans(matchData)
: null,
};
bag.Add(result);
}
});
}
catch (OperationCanceledException)
{
return Task.FromResult<IReadOnlyList<SearchResult<T>>>(Array.Empty<SearchResult<T>>());
}
var results = bag
.OrderByDescending(r => r.Score)
.Take(options.MaxResults)
.ToList();
return Task.FromResult<IReadOnlyList<SearchResult<T>>>(results);
}
/// <inheritdoc/>
public void Dispose()
{
if (_disposed)
{
return;
}
lock (_lockObject)
{
_itemsById.Clear();
_normalizedCache.Clear();
_isReady = false;
}
_disposed = true;
}
private (string PrimaryNorm, string? SecondaryNorm) GetNormalizedTexts(string id)
{
lock (_lockObject)
{
if (_normalizedCache.TryGetValue(id, out var cached))
{
return cached;
}
}
return (string.Empty, null);
}
private static IReadOnlyList<MatchSpan> ConvertToMatchSpans(List<int> matchData)
{
if (matchData == null || matchData.Count == 0)
{
return Array.Empty<MatchSpan>();
}
// Convert individual match indices to spans
var spans = new List<MatchSpan>();
var sortedIndices = matchData.OrderBy(i => i).ToList();
int start = sortedIndices[0];
int length = 1;
for (int i = 1; i < sortedIndices.Count; i++)
{
if (sortedIndices[i] == sortedIndices[i - 1] + 1)
{
// Consecutive index, extend the span
length++;
}
else
{
// Gap found, save current span and start new one
spans.Add(new MatchSpan(start, length));
start = sortedIndices[i];
length = 1;
}
}
// Add the last span
spans.Add(new MatchSpan(start, length));
return spans;
}
private static string NormalizeString(string? input)
{
if (string.IsNullOrEmpty(input))
{
return string.Empty;
}
var normalized = input.ToLowerInvariant().Normalize(NormalizationForm.FormKD);
var sb = new StringBuilder(normalized.Length);
foreach (var c in normalized)
{
var category = CharUnicodeInfo.GetUnicodeCategory(c);
if (category != UnicodeCategory.NonSpacingMark)
{
sb.Append(c);
}
}
return sb.ToString();
}
private void ThrowIfDisposed()
{
ObjectDisposedException.ThrowIf(_disposed, this);
}
}

View File

@@ -25,9 +25,9 @@ public class StringMatcher
/// 6. Move onto the next substring's characters until all substrings are checked.
/// 7. Consider success and move onto scoring if every char or substring without whitespaces matched
/// </summary>
public static MatchResult FuzzyMatch(string query, string stringToCompare, MatchOption? opt = null)
public static MatchResult FuzzyMatch(string query, string stringToCompare, MatchOption opt = null)
{
opt ??= new MatchOption();
opt = opt ?? new MatchOption();
if (string.IsNullOrEmpty(stringToCompare))
{

View File

@@ -11,10 +11,6 @@ using System.Diagnostics.CodeAnalysis;
[assembly: SuppressMessage("StyleCop.CSharp.NamingRules", "SA1309:Field names should not begin with underscore", Justification = "coding style", Scope = "member", Target = "~F:Common.Search.MatchResult._rawScore")]
[assembly: SuppressMessage("StyleCop.CSharp.NamingRules", "SA1309:Field names should not begin with underscore", Justification = "coding style", Scope = "member", Target = "~F:Common.Search.StringMatcher._defaultMatchOption")]
[assembly: SuppressMessage("StyleCop.CSharp.NamingRules", "SA1309:Field names should not begin with underscore", Justification = "coding style", Scope = "member", Target = "~F:Common.Search.StringMatcher._instance")]
[assembly: SuppressMessage("StyleCop.CSharp.NamingRules", "SA1309:Field names should not begin with underscore", Justification = "coding style", Scope = "member", Target = "~F:Common.Search.SemanticSearch.SemanticSearchEngine._indexName")]
[assembly: SuppressMessage("StyleCop.CSharp.NamingRules", "SA1309:Field names should not begin with underscore", Justification = "coding style", Scope = "member", Target = "~F:Common.Search.SemanticSearch.SemanticSearchEngine._indexer")]
[assembly: SuppressMessage("StyleCop.CSharp.NamingRules", "SA1309:Field names should not begin with underscore", Justification = "coding style", Scope = "member", Target = "~F:Common.Search.SemanticSearch.SemanticSearchEngine._disposed")]
[assembly: SuppressMessage("StyleCop.CSharp.NamingRules", "SA1309:Field names should not begin with underscore", Justification = "coding style", Scope = "member", Target = "~F:Common.Search.SemanticSearch.SemanticSearchEngine._capabilities")]
[assembly: SuppressMessage("StyleCop.CSharp.ReadabilityRules", "SA1101:Prefix local calls with this", Justification = "coding style", Scope = "member", Target = "~M:Common.Search.MatchResult.#ctor(System.Boolean,Common.Search.SearchPrecisionScore)")]
[assembly: SuppressMessage("StyleCop.CSharp.ReadabilityRules", "SA1101:Prefix local calls with this", Justification = "coding style", Scope = "member", Target = "~M:Common.Search.MatchResult.#ctor(System.Boolean,Common.Search.SearchPrecisionScore,System.Collections.Generic.List{System.Int32},System.Int32)")]
[assembly: SuppressMessage("Compiler", "CS8618:Non-nullable field must contain a non-null value when exiting constructor. Consider adding the 'required' modifier or declaring as nullable.", Justification = "Coding style", Scope = "member", Target = "~F:Common.Search.StringMatcher._instance")]

View File

@@ -1,73 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search;
/// <summary>
/// Defines a pluggable search engine that can index and search items.
/// </summary>
/// <typeparam name="T">The type of items to search, must implement ISearchable.</typeparam>
public interface ISearchEngine<T> : IDisposable
where T : ISearchable
{
/// <summary>
/// Gets a value indicating whether the engine is ready to search.
/// </summary>
bool IsReady { get; }
/// <summary>
/// Gets the engine capabilities.
/// </summary>
SearchEngineCapabilities Capabilities { get; }
/// <summary>
/// Initializes the search engine.
/// </summary>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A task representing the async operation.</returns>
Task InitializeAsync(CancellationToken cancellationToken = default);
/// <summary>
/// Indexes a single item.
/// </summary>
/// <param name="item">The item to index.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A task representing the async operation.</returns>
Task IndexAsync(T item, CancellationToken cancellationToken = default);
/// <summary>
/// Indexes multiple items in batch.
/// </summary>
/// <param name="items">The items to index.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A task representing the async operation.</returns>
Task IndexBatchAsync(IEnumerable<T> items, CancellationToken cancellationToken = default);
/// <summary>
/// Removes an item from the index by its ID.
/// </summary>
/// <param name="id">The ID of the item to remove.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A task representing the async operation.</returns>
Task RemoveAsync(string id, CancellationToken cancellationToken = default);
/// <summary>
/// Clears all indexed items.
/// </summary>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A task representing the async operation.</returns>
Task ClearAsync(CancellationToken cancellationToken = default);
/// <summary>
/// Searches for items matching the query.
/// </summary>
/// <param name="query">The search query.</param>
/// <param name="options">Optional search options.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A list of search results ordered by relevance.</returns>
Task<IReadOnlyList<SearchResult<T>>> SearchAsync(
string query,
SearchOptions? options = null,
CancellationToken cancellationToken = default);
}

View File

@@ -1,27 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search;
/// <summary>
/// Defines a searchable item that can be indexed and searched.
/// </summary>
public interface ISearchable
{
/// <summary>
/// Gets the unique identifier for this item.
/// </summary>
string Id { get; }
/// <summary>
/// Gets the primary searchable text (e.g., title, header).
/// </summary>
string SearchableText { get; }
/// <summary>
/// Gets optional secondary searchable text (e.g., description).
/// Returns null if not available.
/// </summary>
string? SecondarySearchableText { get; }
}

View File

@@ -1,12 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search;
/// <summary>
/// Represents a span of matched text for highlighting.
/// </summary>
/// <param name="Start">The starting index of the match.</param>
/// <param name="Length">The length of the match.</param>
public readonly record struct MatchSpan(int Start, int Length);

View File

@@ -1,36 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search;
/// <summary>
/// Describes the capabilities of a search engine.
/// </summary>
public sealed class SearchEngineCapabilities
{
/// <summary>
/// Gets a value indicating whether the engine supports fuzzy matching.
/// </summary>
public bool SupportsFuzzyMatch { get; init; }
/// <summary>
/// Gets a value indicating whether the engine supports semantic search.
/// </summary>
public bool SupportsSemanticSearch { get; init; }
/// <summary>
/// Gets a value indicating whether the engine persists the index to disk.
/// </summary>
public bool PersistsIndex { get; init; }
/// <summary>
/// Gets a value indicating whether the engine supports incremental indexing.
/// </summary>
public bool SupportsIncrementalIndex { get; init; }
/// <summary>
/// Gets a value indicating whether the engine supports match span highlighting.
/// </summary>
public bool SupportsMatchSpans { get; init; }
}

View File

@@ -1,31 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search;
/// <summary>
/// Specifies the kind of match that produced a search result.
/// </summary>
public enum SearchMatchKind
{
/// <summary>
/// Exact text match.
/// </summary>
Exact,
/// <summary>
/// Fuzzy/approximate text match.
/// </summary>
Fuzzy,
/// <summary>
/// Semantic/AI-based match.
/// </summary>
Semantic,
/// <summary>
/// Combined match from multiple engines.
/// </summary>
Composite,
}

View File

@@ -1,35 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search;
/// <summary>
/// Options for configuring search behavior.
/// </summary>
public sealed class SearchOptions
{
/// <summary>
/// Gets or sets the maximum number of results to return.
/// Default is 20.
/// </summary>
public int MaxResults { get; set; } = 20;
/// <summary>
/// Gets or sets the minimum score threshold.
/// Results below this score are filtered out.
/// Default is 0.0 (no filtering).
/// </summary>
public double MinScore { get; set; }
/// <summary>
/// Gets or sets the language hint for the search (e.g., "en-US").
/// </summary>
public string? Language { get; set; }
/// <summary>
/// Gets or sets a value indicating whether to include match spans for highlighting.
/// Default is false.
/// </summary>
public bool IncludeMatchSpans { get; set; }
}

View File

@@ -1,33 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search;
/// <summary>
/// Represents a search result with the matched item and scoring information.
/// </summary>
/// <typeparam name="T">The type of the matched item.</typeparam>
public sealed class SearchResult<T>
where T : ISearchable
{
/// <summary>
/// Gets the matched item.
/// </summary>
public required T Item { get; init; }
/// <summary>
/// Gets the relevance score (higher is more relevant).
/// </summary>
public required double Score { get; init; }
/// <summary>
/// Gets the type of match that produced this result.
/// </summary>
public required SearchMatchKind MatchKind { get; init; }
/// <summary>
/// Gets the match details for highlighting (optional).
/// </summary>
public IReadOnlyList<MatchSpan>? MatchSpans { get; init; }
}

View File

@@ -1,234 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search.SemanticSearch;
/// <summary>
/// Adapts the SemanticSearchEngine to the generic ISearchEngine interface.
/// </summary>
/// <typeparam name="T">The type of items to search.</typeparam>
public sealed class SemanticSearchAdapter<T> : ISearchEngine<T>
where T : ISearchable
{
private readonly SemanticSearchEngine _engine;
private readonly Dictionary<string, T> _itemsById = new();
private readonly object _lockObject = new();
private bool _disposed;
/// <summary>
/// Initializes a new instance of the <see cref="SemanticSearchAdapter{T}"/> class.
/// </summary>
/// <param name="indexName">The name of the search index.</param>
public SemanticSearchAdapter(string indexName)
{
_engine = new SemanticSearchEngine(indexName);
}
/// <inheritdoc/>
public bool IsReady => _engine.IsInitialized;
/// <inheritdoc/>
public SearchEngineCapabilities Capabilities { get; } = new()
{
SupportsFuzzyMatch = true,
SupportsSemanticSearch = true,
PersistsIndex = true,
SupportsIncrementalIndex = true,
SupportsMatchSpans = false,
};
/// <summary>
/// Gets the underlying semantic search capabilities.
/// </summary>
public SemanticSearchCapabilities? SemanticCapabilities => _engine.Capabilities;
/// <summary>
/// Occurs when the semantic search capabilities change.
/// </summary>
public event EventHandler<SemanticSearchCapabilities>? CapabilitiesChanged
{
add => _engine.CapabilitiesChanged += value;
remove => _engine.CapabilitiesChanged -= value;
}
/// <inheritdoc/>
public async Task InitializeAsync(CancellationToken cancellationToken = default)
{
ThrowIfDisposed();
await _engine.InitializeAsync();
}
/// <inheritdoc/>
public Task IndexAsync(T item, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(item);
ThrowIfDisposed();
var text = BuildSearchableText(item);
if (string.IsNullOrWhiteSpace(text))
{
return Task.CompletedTask;
}
lock (_lockObject)
{
_itemsById[item.Id] = item;
}
_engine.IndexText(item.Id, text);
return Task.CompletedTask;
}
/// <inheritdoc/>
public Task IndexBatchAsync(IEnumerable<T> items, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(items);
ThrowIfDisposed();
var batch = new List<(string Id, string Text)>();
lock (_lockObject)
{
foreach (var item in items)
{
if (cancellationToken.IsCancellationRequested)
{
break;
}
var text = BuildSearchableText(item);
if (!string.IsNullOrWhiteSpace(text))
{
_itemsById[item.Id] = item;
batch.Add((item.Id, text));
}
}
}
_engine.IndexTextBatch(batch);
return Task.CompletedTask;
}
/// <inheritdoc/>
public Task RemoveAsync(string id, CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(id);
ThrowIfDisposed();
lock (_lockObject)
{
_itemsById.Remove(id);
}
_engine.Remove(id);
return Task.CompletedTask;
}
/// <inheritdoc/>
public Task ClearAsync(CancellationToken cancellationToken = default)
{
ThrowIfDisposed();
lock (_lockObject)
{
_itemsById.Clear();
}
_engine.RemoveAll();
return Task.CompletedTask;
}
/// <inheritdoc/>
public Task<IReadOnlyList<SearchResult<T>>> SearchAsync(
string query,
SearchOptions? options = null,
CancellationToken cancellationToken = default)
{
ThrowIfDisposed();
if (string.IsNullOrWhiteSpace(query))
{
return Task.FromResult<IReadOnlyList<SearchResult<T>>>(Array.Empty<SearchResult<T>>());
}
options ??= new SearchOptions();
var semanticOptions = new SemanticSearchOptions
{
MaxResults = options.MaxResults,
Language = options.Language,
MatchScope = SemanticSearchMatchScope.Unconstrained,
TextMatchType = SemanticSearchTextMatchType.Fuzzy,
};
var matches = _engine.SearchText(query, semanticOptions);
var results = new List<SearchResult<T>>();
lock (_lockObject)
{
foreach (var match in matches)
{
if (_itemsById.TryGetValue(match.ContentId, out var item))
{
results.Add(new SearchResult<T>
{
Item = item,
Score = 100.0, // Semantic search doesn't return scores, use fixed value
MatchKind = SearchMatchKind.Semantic,
MatchSpans = null,
});
}
}
}
return Task.FromResult<IReadOnlyList<SearchResult<T>>>(results);
}
/// <summary>
/// Waits for the indexing process to complete.
/// </summary>
/// <param name="timeout">The maximum time to wait.</param>
/// <returns>A task representing the async operation.</returns>
public async Task WaitForIndexingCompleteAsync(TimeSpan timeout)
{
ThrowIfDisposed();
await _engine.WaitForIndexingCompleteAsync(timeout);
}
/// <inheritdoc/>
public void Dispose()
{
if (_disposed)
{
return;
}
_engine.Dispose();
lock (_lockObject)
{
_itemsById.Clear();
}
_disposed = true;
}
private static string BuildSearchableText(T item)
{
var primary = item.SearchableText ?? string.Empty;
var secondary = item.SecondarySearchableText;
if (string.IsNullOrWhiteSpace(secondary))
{
return primary;
}
return $"{primary} {secondary}";
}
private void ThrowIfDisposed()
{
ObjectDisposedException.ThrowIf(_disposed, this);
}
}

View File

@@ -1,46 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search.SemanticSearch;
/// <summary>
/// Represents the capabilities of the semantic search index.
/// </summary>
public class SemanticSearchCapabilities
{
/// <summary>
/// Gets or sets a value indicating whether text lexical (keyword) search is available.
/// </summary>
public bool TextLexicalAvailable { get; set; }
/// <summary>
/// Gets or sets a value indicating whether text semantic (AI embedding) search is available.
/// </summary>
public bool TextSemanticAvailable { get; set; }
/// <summary>
/// Gets or sets a value indicating whether image semantic search is available.
/// </summary>
public bool ImageSemanticAvailable { get; set; }
/// <summary>
/// Gets or sets a value indicating whether image OCR search is available.
/// </summary>
public bool ImageOcrAvailable { get; set; }
/// <summary>
/// Gets a value indicating whether any search capability is available.
/// </summary>
public bool AnyAvailable => TextLexicalAvailable || TextSemanticAvailable || ImageSemanticAvailable || ImageOcrAvailable;
/// <summary>
/// Gets a value indicating whether text search (lexical or semantic) is available.
/// </summary>
public bool TextSearchAvailable => TextLexicalAvailable || TextSemanticAvailable;
/// <summary>
/// Gets a value indicating whether image search (semantic or OCR) is available.
/// </summary>
public bool ImageSearchAvailable => ImageSemanticAvailable || ImageOcrAvailable;
}

View File

@@ -1,21 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search.SemanticSearch;
/// <summary>
/// Specifies the kind of content in a semantic search result.
/// </summary>
public enum SemanticSearchContentKind
{
/// <summary>
/// Text content.
/// </summary>
Text,
/// <summary>
/// Image content.
/// </summary>
Image,
}

View File

@@ -1,347 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
using Microsoft.Windows.AI.Search.Experimental;
using Windows.Graphics.Imaging;
namespace Common.Search.SemanticSearch;
/// <summary>
/// A semantic search engine powered by Windows App SDK AI Search APIs.
/// Provides text and image indexing with lexical and semantic search capabilities.
/// </summary>
public sealed class SemanticSearchEngine : IDisposable
{
private readonly string _indexName;
private AppContentIndexer? _indexer;
private bool _disposed;
private SemanticSearchCapabilities? _capabilities;
/// <summary>
/// Initializes a new instance of the <see cref="SemanticSearchEngine"/> class.
/// </summary>
/// <param name="indexName">The name of the search index.</param>
public SemanticSearchEngine(string indexName)
{
ArgumentException.ThrowIfNullOrWhiteSpace(indexName);
_indexName = indexName;
}
/// <summary>
/// Occurs when the index capabilities change.
/// </summary>
public event EventHandler<SemanticSearchCapabilities>? CapabilitiesChanged;
/// <summary>
/// Gets a value indicating whether the search engine is initialized.
/// </summary>
public bool IsInitialized => _indexer != null;
/// <summary>
/// Gets the current index capabilities, or null if not initialized.
/// </summary>
public SemanticSearchCapabilities? Capabilities => _capabilities;
/// <summary>
/// Initializes the search engine and creates or opens the index.
/// </summary>
/// <returns>A task that represents the asynchronous operation. Returns true if initialization succeeded.</returns>
public async Task<bool> InitializeAsync()
{
ThrowIfDisposed();
if (_indexer != null)
{
return true;
}
var result = AppContentIndexer.GetOrCreateIndex(_indexName);
if (!result.Succeeded)
{
return false;
}
_indexer = result.Indexer;
// Wait for index capabilities to be ready
await _indexer.WaitForIndexCapabilitiesAsync();
// Load capabilities
_capabilities = LoadCapabilities();
// Subscribe to capability changes
_indexer.Listener.IndexCapabilitiesChanged += OnIndexCapabilitiesChanged;
return true;
}
/// <summary>
/// Waits for the indexing process to complete.
/// </summary>
/// <param name="timeout">The maximum time to wait for indexing to complete.</param>
/// <returns>A task that represents the asynchronous operation.</returns>
public async Task WaitForIndexingCompleteAsync(TimeSpan timeout)
{
ThrowIfDisposed();
ThrowIfNotInitialized();
await _indexer!.WaitForIndexingIdleAsync(timeout);
}
/// <summary>
/// Gets the current index capabilities.
/// </summary>
/// <returns>The current capabilities of the search index.</returns>
public SemanticSearchCapabilities GetCapabilities()
{
ThrowIfDisposed();
ThrowIfNotInitialized();
return _capabilities ?? LoadCapabilities();
}
/// <summary>
/// Adds or updates text content in the index.
/// </summary>
/// <param name="id">The unique identifier for the content.</param>
/// <param name="text">The text content to index.</param>
public void IndexText(string id, string text)
{
ThrowIfDisposed();
ThrowIfNotInitialized();
ArgumentException.ThrowIfNullOrWhiteSpace(id);
ArgumentException.ThrowIfNullOrWhiteSpace(text);
var content = AppManagedIndexableAppContent.CreateFromString(id, text);
_indexer!.AddOrUpdate(content);
}
/// <summary>
/// Adds or updates multiple text contents in the index.
/// </summary>
/// <param name="items">A collection of id-text pairs to index.</param>
public void IndexTextBatch(IEnumerable<(string Id, string Text)> items)
{
ThrowIfDisposed();
ThrowIfNotInitialized();
ArgumentNullException.ThrowIfNull(items);
foreach (var (id, text) in items)
{
if (!string.IsNullOrWhiteSpace(id) && !string.IsNullOrWhiteSpace(text))
{
var content = AppManagedIndexableAppContent.CreateFromString(id, text);
_indexer!.AddOrUpdate(content);
}
}
}
/// <summary>
/// Adds or updates image content in the index.
/// </summary>
/// <param name="id">The unique identifier for the image.</param>
/// <param name="bitmap">The image bitmap to index.</param>
public void IndexImage(string id, SoftwareBitmap bitmap)
{
ThrowIfDisposed();
ThrowIfNotInitialized();
ArgumentException.ThrowIfNullOrWhiteSpace(id);
ArgumentNullException.ThrowIfNull(bitmap);
var content = AppManagedIndexableAppContent.CreateFromBitmap(id, bitmap);
_indexer!.AddOrUpdate(content);
}
/// <summary>
/// Removes content from the index by its identifier.
/// </summary>
/// <param name="id">The unique identifier of the content to remove.</param>
public void Remove(string id)
{
ThrowIfDisposed();
ThrowIfNotInitialized();
ArgumentException.ThrowIfNullOrWhiteSpace(id);
_indexer!.Remove(id);
}
/// <summary>
/// Removes all content from the index.
/// </summary>
public void RemoveAll()
{
ThrowIfDisposed();
ThrowIfNotInitialized();
_indexer!.RemoveAll();
}
/// <summary>
/// Searches for text content in the index.
/// </summary>
/// <param name="searchText">The text to search for.</param>
/// <param name="options">Optional search options.</param>
/// <returns>A list of search results.</returns>
public IReadOnlyList<SemanticSearchResult> SearchText(string searchText, SemanticSearchOptions? options = null)
{
ThrowIfDisposed();
ThrowIfNotInitialized();
ArgumentException.ThrowIfNullOrWhiteSpace(searchText);
options ??= new SemanticSearchOptions();
var queryOptions = new TextQueryOptions
{
MatchScope = ConvertMatchScope(options.MatchScope),
TextMatchType = ConvertTextMatchType(options.TextMatchType),
};
if (!string.IsNullOrEmpty(options.Language))
{
queryOptions.Language = options.Language;
}
var query = _indexer!.CreateTextQuery(searchText, queryOptions);
var matches = query.GetNextMatches((uint)options.MaxResults);
return ConvertTextMatches(matches);
}
/// <summary>
/// Searches for image content in the index using text.
/// </summary>
/// <param name="searchText">The text to search for in images.</param>
/// <param name="options">Optional search options.</param>
/// <returns>A list of search results.</returns>
public IReadOnlyList<SemanticSearchResult> SearchImages(string searchText, SemanticSearchOptions? options = null)
{
ThrowIfDisposed();
ThrowIfNotInitialized();
ArgumentException.ThrowIfNullOrWhiteSpace(searchText);
options ??= new SemanticSearchOptions();
var queryOptions = new ImageQueryOptions
{
MatchScope = ConvertMatchScope(options.MatchScope),
ImageOcrTextMatchType = ConvertTextMatchType(options.TextMatchType),
};
var query = _indexer!.CreateImageQuery(searchText, queryOptions);
var matches = query.GetNextMatches((uint)options.MaxResults);
return ConvertImageMatches(matches);
}
/// <inheritdoc/>
public void Dispose()
{
if (_disposed)
{
return;
}
if (_indexer != null)
{
_indexer.Listener.IndexCapabilitiesChanged -= OnIndexCapabilitiesChanged;
_indexer.Dispose();
_indexer = null;
}
_disposed = true;
}
private SemanticSearchCapabilities LoadCapabilities()
{
var capabilities = _indexer!.GetIndexCapabilities();
return new SemanticSearchCapabilities
{
TextLexicalAvailable = IsCapabilityInitialized(capabilities, IndexCapability.TextLexical),
TextSemanticAvailable = IsCapabilityInitialized(capabilities, IndexCapability.TextSemantic),
ImageSemanticAvailable = IsCapabilityInitialized(capabilities, IndexCapability.ImageSemantic),
ImageOcrAvailable = IsCapabilityInitialized(capabilities, IndexCapability.ImageOcr),
};
}
private static bool IsCapabilityInitialized(IndexCapabilities capabilities, IndexCapability capability)
{
var state = capabilities.GetCapabilityState(capability);
return state.InitializationStatus == IndexCapabilityInitializationStatus.Initialized;
}
private void OnIndexCapabilitiesChanged(AppContentIndexer indexer, IndexCapabilities capabilities)
{
_capabilities = LoadCapabilities();
CapabilitiesChanged?.Invoke(this, _capabilities);
}
private static QueryMatchScope ConvertMatchScope(SemanticSearchMatchScope scope)
{
return scope switch
{
SemanticSearchMatchScope.Unconstrained => QueryMatchScope.Unconstrained,
SemanticSearchMatchScope.Region => QueryMatchScope.Region,
SemanticSearchMatchScope.ContentItem => QueryMatchScope.ContentItem,
_ => QueryMatchScope.Unconstrained,
};
}
private static TextLexicalMatchType ConvertTextMatchType(SemanticSearchTextMatchType matchType)
{
return matchType switch
{
SemanticSearchTextMatchType.Fuzzy => TextLexicalMatchType.Fuzzy,
SemanticSearchTextMatchType.Exact => TextLexicalMatchType.Exact,
_ => TextLexicalMatchType.Fuzzy,
};
}
private static IReadOnlyList<SemanticSearchResult> ConvertTextMatches(IReadOnlyList<TextQueryMatch> matches)
{
var results = new List<SemanticSearchResult>();
foreach (var match in matches)
{
var result = new SemanticSearchResult(match.ContentId, SemanticSearchContentKind.Text);
if (match.ContentKind == QueryMatchContentKind.AppManagedText &&
match is AppManagedTextQueryMatch textMatch)
{
result.TextOffset = textMatch.TextOffset;
result.TextLength = textMatch.TextLength;
}
results.Add(result);
}
return results;
}
private static IReadOnlyList<SemanticSearchResult> ConvertImageMatches(IReadOnlyList<ImageQueryMatch> matches)
{
var results = new List<SemanticSearchResult>();
foreach (var match in matches)
{
var result = new SemanticSearchResult(match.ContentId, SemanticSearchContentKind.Image);
results.Add(result);
}
return results;
}
private void ThrowIfDisposed()
{
ObjectDisposedException.ThrowIf(_disposed, this);
}
private void ThrowIfNotInitialized()
{
if (_indexer == null)
{
throw new InvalidOperationException("Search engine is not initialized. Call InitializeAsync() first.");
}
}
}

View File

@@ -1,26 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search.SemanticSearch;
/// <summary>
/// Specifies the scope for semantic search matching.
/// </summary>
public enum SemanticSearchMatchScope
{
/// <summary>
/// No constraints, uses both Lexical and Semantic matching.
/// </summary>
Unconstrained,
/// <summary>
/// Restrict matching to a specific region.
/// </summary>
Region,
/// <summary>
/// Restrict matching to a single content item.
/// </summary>
ContentItem,
}

View File

@@ -1,31 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search.SemanticSearch;
/// <summary>
/// Options for configuring semantic search queries.
/// </summary>
public class SemanticSearchOptions
{
/// <summary>
/// Gets or sets the language for the search query (e.g., "en-US").
/// </summary>
public string? Language { get; set; }
/// <summary>
/// Gets or sets the match scope for the search.
/// </summary>
public SemanticSearchMatchScope MatchScope { get; set; } = SemanticSearchMatchScope.Unconstrained;
/// <summary>
/// Gets or sets the text match type for lexical matching.
/// </summary>
public SemanticSearchTextMatchType TextMatchType { get; set; } = SemanticSearchTextMatchType.Fuzzy;
/// <summary>
/// Gets or sets the maximum number of results to return.
/// </summary>
public int MaxResults { get; set; } = 10;
}

View File

@@ -1,42 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search.SemanticSearch;
/// <summary>
/// Represents a search result from the semantic search engine.
/// </summary>
public class SemanticSearchResult
{
/// <summary>
/// Initializes a new instance of the <see cref="SemanticSearchResult"/> class.
/// </summary>
/// <param name="contentId">The unique identifier of the matched content.</param>
/// <param name="contentKind">The kind of content matched (text or image).</param>
public SemanticSearchResult(string contentId, SemanticSearchContentKind contentKind)
{
ContentId = contentId;
ContentKind = contentKind;
}
/// <summary>
/// Gets the unique identifier of the matched content.
/// </summary>
public string ContentId { get; }
/// <summary>
/// Gets the kind of content that was matched.
/// </summary>
public SemanticSearchContentKind ContentKind { get; }
/// <summary>
/// Gets or sets the text offset where the match was found (for text matches only).
/// </summary>
public int TextOffset { get; set; }
/// <summary>
/// Gets or sets the length of the matched text (for text matches only).
/// </summary>
public int TextLength { get; set; }
}

View File

@@ -1,21 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
namespace Common.Search.SemanticSearch;
/// <summary>
/// Specifies the type of text matching for lexical searches.
/// </summary>
public enum SemanticSearchTextMatchType
{
/// <summary>
/// Fuzzy matching allows spelling errors and approximate words.
/// </summary>
Fuzzy,
/// <summary>
/// Exact matching requires exact text matches.
/// </summary>
Exact,
}

View File

@@ -2,10 +2,6 @@
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
#nullable enable
using Common.Search;
namespace Settings.UI.Library
{
public enum EntryType
@@ -15,7 +11,7 @@ namespace Settings.UI.Library
SettingsExpander,
}
public struct SettingEntry : ISearchable
public struct SettingEntry
{
public EntryType Type { get; set; }
@@ -33,23 +29,16 @@ namespace Settings.UI.Library
public string Icon { get; set; }
public SettingEntry(EntryType type, string header, string pageTypeName, string elementName, string elementUid, string? parentElementName = null, string? description = null, string? icon = null)
public SettingEntry(EntryType type, string header, string pageTypeName, string elementName, string elementUid, string parentElementName = null, string description = null, string icon = null)
{
Type = type;
Header = header;
PageTypeName = pageTypeName;
ElementName = elementName;
ElementUid = elementUid;
ParentElementName = parentElementName ?? string.Empty;
Description = description ?? string.Empty;
Icon = icon ?? string.Empty;
ParentElementName = parentElementName;
Description = description;
Icon = icon;
}
// ISearchable implementation
public readonly string Id => ElementUid ?? $"{PageTypeName}|{ElementName}";
public readonly string SearchableText => Header ?? string.Empty;
public readonly string? SecondarySearchableText => Description;
}
}

View File

@@ -23,7 +23,6 @@
<ProjectReference Include="..\..\common\ManagedCommon\ManagedCommon.csproj" />
<ProjectReference Include="..\..\common\ManagedTelemetry\Telemetry\ManagedTelemetry.csproj" />
<ProjectReference Include="..\..\modules\MouseUtils\MouseJump.Common\MouseJump.Common.csproj" />
<ProjectReference Include="..\..\common\Common.Search\Common.Search.csproj" />
</ItemGroup>
<ItemGroup>

View File

@@ -1,174 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Common.Search.FuzzSearch;
using Settings.UI.Library;
namespace Microsoft.PowerToys.Settings.UI.Services.Search
{
/// <summary>
/// A search provider that uses fuzzy string matching for settings search.
/// </summary>
public sealed class FuzzSearchProvider : ISearchProvider
{
private readonly object _lockObject = new();
private readonly Dictionary<string, (string HeaderNorm, string DescNorm)> _normalizedTextCache = new();
private IReadOnlyList<SettingEntry> _entries = Array.Empty<SettingEntry>();
private bool _isReady;
/// <inheritdoc/>
public bool IsReady
{
get
{
lock (_lockObject)
{
return _isReady;
}
}
}
/// <inheritdoc/>
public Task InitializeAsync(IReadOnlyList<SettingEntry> entries, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(entries);
lock (_lockObject)
{
_normalizedTextCache.Clear();
_entries = entries;
_isReady = true;
}
return Task.CompletedTask;
}
/// <inheritdoc/>
public List<SettingEntry> Search(string query, CancellationToken cancellationToken = default)
{
if (string.IsNullOrWhiteSpace(query))
{
return [];
}
IReadOnlyList<SettingEntry> currentEntries;
lock (_lockObject)
{
if (!_isReady || _entries.Count == 0)
{
return [];
}
currentEntries = _entries;
}
var normalizedQuery = NormalizeString(query);
var bag = new ConcurrentBag<(SettingEntry Hit, double Score)>();
var po = new ParallelOptions
{
CancellationToken = cancellationToken,
MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount - 1),
};
try
{
Parallel.ForEach(currentEntries, po, entry =>
{
var (headerNorm, descNorm) = GetNormalizedTexts(entry);
var captionScoreResult = StringMatcher.FuzzyMatch(normalizedQuery, headerNorm);
double score = captionScoreResult.Score;
if (!string.IsNullOrEmpty(descNorm))
{
var descriptionScoreResult = StringMatcher.FuzzyMatch(normalizedQuery, descNorm);
if (descriptionScoreResult.Success)
{
score = Math.Max(score, descriptionScoreResult.Score * 0.8);
}
}
if (score > 0)
{
bag.Add((entry, score));
}
});
}
catch (OperationCanceledException)
{
return [];
}
return bag
.OrderByDescending(r => r.Score)
.Select(r => r.Hit)
.ToList();
}
/// <inheritdoc/>
public void Clear()
{
lock (_lockObject)
{
_normalizedTextCache.Clear();
_entries = Array.Empty<SettingEntry>();
_isReady = false;
}
}
private (string HeaderNorm, string DescNorm) GetNormalizedTexts(SettingEntry entry)
{
if (entry.ElementUid == null && entry.Header == null)
{
return (NormalizeString(entry.Header), NormalizeString(entry.Description));
}
var key = entry.ElementUid ?? $"{entry.PageTypeName}|{entry.ElementName}";
lock (_lockObject)
{
if (_normalizedTextCache.TryGetValue(key, out var cached))
{
return cached;
}
}
var headerNorm = NormalizeString(entry.Header);
var descNorm = NormalizeString(entry.Description);
lock (_lockObject)
{
_normalizedTextCache[key] = (headerNorm, descNorm);
}
return (headerNorm, descNorm);
}
private static string NormalizeString(string input)
{
if (string.IsNullOrEmpty(input))
{
return string.Empty;
}
var normalized = input.ToLowerInvariant().Normalize(NormalizationForm.FormKD);
var stringBuilder = new StringBuilder();
foreach (var c in normalized)
{
var unicodeCategory = CharUnicodeInfo.GetUnicodeCategory(c);
if (unicodeCategory != UnicodeCategory.NonSpacingMark)
{
stringBuilder.Append(c);
}
}
return stringBuilder.ToString();
}
}
}

View File

@@ -1,43 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Settings.UI.Library;
namespace Microsoft.PowerToys.Settings.UI.Services.Search
{
/// <summary>
/// Defines a pluggable search provider interface for settings search.
/// </summary>
public interface ISearchProvider
{
/// <summary>
/// Gets a value indicating whether the provider is ready to search.
/// </summary>
bool IsReady { get; }
/// <summary>
/// Initializes the search provider with the given index entries.
/// </summary>
/// <param name="entries">The setting entries to index.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A task representing the async operation.</returns>
Task InitializeAsync(IReadOnlyList<SettingEntry> entries, CancellationToken cancellationToken = default);
/// <summary>
/// Searches for settings matching the query.
/// </summary>
/// <param name="query">The search query.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A list of matching setting entries ordered by relevance.</returns>
List<SettingEntry> Search(string query, CancellationToken cancellationToken = default);
/// <summary>
/// Clears the index and releases resources.
/// </summary>
void Clear();
}
}

View File

@@ -3,15 +3,20 @@
// See the LICENSE file in the project root for more information.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Diagnostics;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Text;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Common.Search.FuzzSearch;
using Microsoft.PowerToys.Settings.UI.Helpers;
using Microsoft.PowerToys.Settings.UI.Services.Search;
using Microsoft.PowerToys.Settings.UI.Views;
using Microsoft.Windows.ApplicationModel.Resources;
using Settings.UI.Library;
@@ -22,43 +27,14 @@ namespace Microsoft.PowerToys.Settings.UI.Services
{
private static readonly object _lockObject = new();
private static readonly Dictionary<string, string> _pageNameCache = [];
private static readonly Dictionary<string, (string HeaderNorm, string DescNorm)> _normalizedTextCache = new();
private static readonly Dictionary<string, Type> _pageTypeCache = new();
private static ImmutableArray<SettingEntry> _index = [];
private static ISearchProvider _searchProvider;
private static bool _isIndexBuilt;
private static bool _isIndexBuilding;
private const string PrebuiltIndexResourceName = "Microsoft.PowerToys.Settings.UI.Assets.search.index.json";
private static JsonSerializerOptions _serializerOptions = new() { PropertyNameCaseInsensitive = true };
/// <summary>
/// Gets or sets the search provider. Defaults to FuzzSearchProvider if not set.
/// </summary>
public static ISearchProvider SearchProvider
{
get
{
lock (_lockObject)
{
_searchProvider ??= new FuzzSearchProvider();
return _searchProvider;
}
}
set
{
lock (_lockObject)
{
_searchProvider = value;
// If index is already built, reinitialize the new provider
if (_isIndexBuilt && _searchProvider != null)
{
_searchProvider.InitializeAsync(_index).ConfigureAwait(false);
}
}
}
}
public static ImmutableArray<SettingEntry> Index
{
get
@@ -76,7 +52,7 @@ namespace Microsoft.PowerToys.Settings.UI.Services
{
lock (_lockObject)
{
return _isIndexBuilt && SearchProvider.IsReady;
return _isIndexBuilt;
}
}
}
@@ -93,6 +69,7 @@ namespace Microsoft.PowerToys.Settings.UI.Services
_isIndexBuilding = true;
// Clear caches on rebuild
_normalizedTextCache.Clear();
_pageTypeCache.Clear();
}
@@ -101,17 +78,12 @@ namespace Microsoft.PowerToys.Settings.UI.Services
var builder = ImmutableArray.CreateBuilder<SettingEntry>();
LoadIndexFromPrebuiltData(builder);
ImmutableArray<SettingEntry> builtIndex;
lock (_lockObject)
{
_index = builder.ToImmutable();
builtIndex = _index;
_isIndexBuilt = true;
_isIndexBuilding = false;
}
// Initialize the search provider with the index
SearchProvider.InitializeAsync(builtIndex).ConfigureAwait(false);
}
catch (Exception ex)
{
@@ -237,32 +209,57 @@ namespace Microsoft.PowerToys.Settings.UI.Services
return [];
}
if (!IsIndexReady)
var currentIndex = Index;
if (currentIndex.IsEmpty)
{
Debug.WriteLine("[SearchIndexService] Search called but index is not ready.");
Debug.WriteLine("[SearchIndexService] Search called but index is empty.");
return [];
}
// Delegate search to the pluggable search provider
var results = SearchProvider.Search(query, token);
// Filter results to ensure page types are valid
return FilterValidPageTypes(results);
}
private static List<SettingEntry> FilterValidPageTypes(List<SettingEntry> results)
{
var filtered = new List<SettingEntry>();
foreach (var entry in results)
var normalizedQuery = NormalizeString(query);
var bag = new ConcurrentBag<(SettingEntry Hit, double Score)>();
var po = new ParallelOptions
{
var pageType = GetPageTypeFromName(entry.PageTypeName);
if (pageType != null)
CancellationToken = token,
MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount - 1),
};
try
{
Parallel.ForEach(currentIndex, po, entry =>
{
filtered.Add(entry);
}
var (headerNorm, descNorm) = GetNormalizedTexts(entry);
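// Header matches outrank description matches: a description hit below is discounted to 80% of its raw score before taking the max.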
var captionScoreResult = StringMatcher.FuzzyMatch(normalizedQuery, headerNorm);
double score = captionScoreResult.Score;
if (!string.IsNullOrEmpty(descNorm))
{
var descriptionScoreResult = StringMatcher.FuzzyMatch(normalizedQuery, descNorm);
if (descriptionScoreResult.Success)
{
score = Math.Max(score, descriptionScoreResult.Score * 0.8);
}
}
if (score > 0)
{
var pageType = GetPageTypeFromName(entry.PageTypeName);
if (pageType != null)
{
bag.Add((entry, score));
}
}
});
}
catch (OperationCanceledException)
{
return [];
}
return filtered;
return bag
.OrderByDescending(r => r.Score)
.Select(r => r.Hit)
.ToList();
}
private static Type GetPageTypeFromName(string pageTypeName)
@@ -286,6 +283,53 @@ namespace Microsoft.PowerToys.Settings.UI.Services
}
}
private static (string HeaderNorm, string DescNorm) GetNormalizedTexts(SettingEntry entry)
{
if (entry.ElementUid == null && entry.Header == null)
{
return (NormalizeString(entry.Header), NormalizeString(entry.Description));
}
var key = entry.ElementUid ?? $"{entry.PageTypeName}|{entry.ElementName}";
lock (_lockObject)
{
if (_normalizedTextCache.TryGetValue(key, out var cached))
{
return cached;
}
}
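// Cache miss: normalize outside the lock (the work is deterministic), then publish under the lock; a racing thread simply writes the same value.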
var headerNorm = NormalizeString(entry.Header);
var descNorm = NormalizeString(entry.Description);
lock (_lockObject)
{
_normalizedTextCache[key] = (headerNorm, descNorm);
}
return (headerNorm, descNorm);
}
private static string NormalizeString(string input)
{
if (string.IsNullOrEmpty(input))
{
return string.Empty;
}
var normalized = input.ToLowerInvariant().Normalize(NormalizationForm.FormKD);
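// FormKD decomposes accented characters (e.g. "é" into "e" plus a combining acute, a NonSpacingMark), so the filter below strips diacritics and "Éclair" indexes as "eclair".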
var stringBuilder = new StringBuilder();
foreach (var c in normalized)
{
var unicodeCategory = CharUnicodeInfo.GetUnicodeCategory(c);
if (unicodeCategory != UnicodeCategory.NonSpacingMark)
{
stringBuilder.Append(c);
}
}
return stringBuilder.ToString();
}
public static string GetLocalizedPageName(string pageTypeName)
{
return _pageNameCache.TryGetValue(pageTypeName, out string cachedName) ? cachedName : string.Empty;

SettingsSearchService.cs

@@ -1,140 +0,0 @@
// Copyright (c) Microsoft Corporation
// The Microsoft Corporation licenses this file to you under the MIT license.
// See the LICENSE file in the project root for more information.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Common.Search;
using Common.Search.FuzzSearch;
using Settings.UI.Library;
namespace Microsoft.PowerToys.Settings.UI.Services
{
/// <summary>
/// A service that provides search functionality for settings using pluggable search engines.
/// </summary>
public sealed class SettingsSearchService : IDisposable
{
private readonly ISearchEngine<SettingEntry> _searchEngine;
private bool _disposed;
/// <summary>
/// Initializes a new instance of the <see cref="SettingsSearchService"/> class with the default FuzzSearchEngine.
/// </summary>
public SettingsSearchService()
: this(new FuzzSearchEngine<SettingEntry>())
{
}
/// <summary>
/// Initializes a new instance of the <see cref="SettingsSearchService"/> class with a custom search engine.
/// </summary>
/// <param name="searchEngine">The search engine to use.</param>
public SettingsSearchService(ISearchEngine<SettingEntry> searchEngine)
{
ArgumentNullException.ThrowIfNull(searchEngine);
_searchEngine = searchEngine;
}
/// <summary>
/// Gets a value indicating whether the search service is ready.
/// </summary>
public bool IsReady => _searchEngine.IsReady;
/// <summary>
/// Gets the search engine capabilities.
/// </summary>
public SearchEngineCapabilities Capabilities => _searchEngine.Capabilities;
/// <summary>
/// Initializes the search service with the given setting entries.
/// </summary>
/// <param name="entries">The setting entries to index.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A task representing the async operation.</returns>
public async Task InitializeAsync(IEnumerable<SettingEntry> entries, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(entries);
ThrowIfDisposed();
await _searchEngine.InitializeAsync(cancellationToken);
await _searchEngine.IndexBatchAsync(entries, cancellationToken);
}
/// <summary>
/// Rebuilds the index with new entries.
/// </summary>
/// <param name="entries">The new setting entries to index.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A task representing the async operation.</returns>
public async Task RebuildIndexAsync(IEnumerable<SettingEntry> entries, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(entries);
ThrowIfDisposed();
await _searchEngine.ClearAsync(cancellationToken);
await _searchEngine.IndexBatchAsync(entries, cancellationToken);
}
/// <summary>
/// Searches for settings matching the query.
/// </summary>
/// <param name="query">The search query.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A list of matching setting entries.</returns>
public async Task<List<SettingEntry>> SearchAsync(string query, CancellationToken cancellationToken = default)
{
ThrowIfDisposed();
if (string.IsNullOrWhiteSpace(query))
{
return [];
}
var results = await _searchEngine.SearchAsync(query, cancellationToken: cancellationToken);
return results.Select(r => r.Item).ToList();
}
/// <summary>
/// Searches for settings matching the query with detailed results.
/// </summary>
/// <param name="query">The search query.</param>
/// <param name="options">Search options.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>A list of search results with scoring information.</returns>
public async Task<IReadOnlyList<SearchResult<SettingEntry>>> SearchWithScoresAsync(
string query,
SearchOptions? options = null,
CancellationToken cancellationToken = default)
{
ThrowIfDisposed();
if (string.IsNullOrWhiteSpace(query))
{
return Array.Empty<SearchResult<SettingEntry>>();
}
return await _searchEngine.SearchAsync(query, options, cancellationToken);
}
/// <inheritdoc/>
public void Dispose()
{
if (_disposed)
{
return;
}
_searchEngine.Dispose();
_disposed = true;
}
private void ThrowIfDisposed()
{
ObjectDisposedException.ThrowIf(_disposed, this);
}
}
}

tools/mcp/github-artifacts/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
node_modules/
package-lock.json

tools/mcp/github-artifacts (bootstrap script, filename not captured)

@@ -0,0 +1,14 @@
import { existsSync } from "fs";
import { execSync } from "child_process";
import { fileURLToPath } from "url";
import { dirname } from "path";
const __dirname = dirname(fileURLToPath(import.meta.url));
process.chdir(__dirname);
if (!existsSync("node_modules")) {
// Log to stderr and keep npm's output off stdout: the MCP stdio transport
// treats stdout as the JSON-RPC channel, so stray text there corrupts framing.
console.error("[MCP] Installing dependencies...");
execSync("npm install", { stdio: ["ignore", "ignore", "inherit"] });
}
import("./server.js");

tools/mcp/github-artifacts/package.json

@@ -0,0 +1,19 @@
{
"name": "github-artifacts",
"version": "1.0.0",
"description": "",
"main": "server.js",
"scripts": {
"test": "node test-github_issue_images.js && node test-github_issue_attachments.js",
"start": "node server.js"
},
"keywords": [],
"author": "",
"license": "ISC",
"type": "module",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.25.2",
"jszip": "^3.10.1",
"zod": "^3.25.76"
}
}

tools/mcp/github-artifacts/server.js

@@ -0,0 +1,385 @@
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import fs from "fs/promises";
import path from "path";
import { execFile } from "child_process";
const server = new McpServer({
name: "issue-images",
version: "0.1.0"
});
const GH_API_PER_PAGE = 100;
const MAX_TEXT_FILE_BYTES = 100000;
// Limit to common text files to avoid binary blobs, huge payloads, and non-UTF8 noise.
const TEXT_EXTENSIONS = new Set([
".txt", ".log", ".json", ".xml", ".yaml", ".yml", ".md", ".csv",
".ini", ".config", ".conf", ".bat", ".ps1", ".sh", ".reg", ".etl"
]);
function extractImageUrls(markdownOrHtml) {
const urls = new Set();
// Markdown images: ![alt](url)
for (const m of markdownOrHtml.matchAll(/!\[[^\]]*?\]\((https?:\/\/[^\s)]+)\)/g)) {
urls.add(m[1]);
}
// HTML <img src="...">
for (const m of markdownOrHtml.matchAll(/<img[^>]+src="(https?:\/\/[^">]+)"/g)) {
urls.add(m[1]);
}
return [...urls];
}
function extractZipUrls(markdownOrHtml) {
const urls = new Set();
// Markdown links to .zip files: [text](url.zip)
for (const m of markdownOrHtml.matchAll(/\[[^\]]*?\]\((https?:\/\/[^\s)]+\.zip)\)/gi)) {
urls.add(m[1]);
}
// Plain URLs ending in .zip
for (const m of markdownOrHtml.matchAll(/(https?:\/\/[^\s<>"]+\.zip)/gi)) {
urls.add(m[1]);
}
return [...urls];
}
async function fetchJson(url, token) {
const res = await fetch(url, {
headers: {
"Accept": "application/vnd.github+json",
...(token ? { "Authorization": `Bearer ${token}` } : {}),
"X-GitHub-Api-Version": "2022-11-28",
"User-Agent": "issue-images-mcp"
}
});
if (!res.ok) throw new Error(`GitHub API failed: ${res.status} ${res.statusText}`);
return await res.json();
}
async function downloadBytes(url, token) {
const res = await fetch(url, {
headers: {
...(token ? { "Authorization": `Bearer ${token}` } : {}),
"User-Agent": "issue-images-mcp"
}
});
if (!res.ok) throw new Error(`Image download failed: ${res.status} ${res.statusText}`);
const buf = new Uint8Array(await res.arrayBuffer());
const ct = res.headers.get("content-type") || "image/png";
return { buf, mimeType: ct };
}
async function downloadZipBytes(url, token) {
const zipUrl = url.includes("?") ? url : `${url}?download=1`;
const tryFetch = async (useAuth) => {
const res = await fetch(zipUrl, {
headers: {
"Accept": "application/octet-stream",
...(useAuth && token ? { "Authorization": `Bearer ${token}` } : {}),
"User-Agent": "issue-images-mcp"
},
redirect: "follow"
});
if (!res.ok) throw new Error(`ZIP download failed: ${res.status} ${res.statusText}`);
const contentType = (res.headers.get("content-type") || "").toLowerCase();
const buf = new Uint8Array(await res.arrayBuffer());
return { buf, contentType };
};
let { buf, contentType } = await tryFetch(true);
const isZip = buf.length >= 4 && buf[0] === 0x50 && buf[1] === 0x4b;
if (!isZip) {
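// Pre-signed storage URLs (where GitHub redirects attachment downloads) can reject requests that also carry an Authorization header, so retry anonymously.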
({ buf, contentType } = await tryFetch(false));
}
const isZipRetry = buf.length >= 4 && buf[0] === 0x50 && buf[1] === 0x4b;
if (!isZipRetry || contentType.includes("text/html") || buf.length < 100) {
throw new Error("ZIP download returned HTML or invalid data. Check permissions or rate limits.");
}
return buf;
}
function execFileAsync(file, args) {
return new Promise((resolve, reject) => {
execFile(file, args, (error, stdout, stderr) => {
if (error) {
reject(new Error(stderr || error.message));
} else {
resolve(stdout);
}
});
});
}
async function extractZipToFolder(zipPath, extractPath) {
if (process.platform === "win32") {
await execFileAsync("powershell", [
"-NoProfile",
"-NonInteractive",
"-Command",
`$ProgressPreference='SilentlyContinue'; Expand-Archive -Path \"${zipPath}\" -DestinationPath \"${extractPath}\" -Force -ErrorAction Stop | Out-Null`
]);
return;
}
await execFileAsync("unzip", ["-o", zipPath, "-d", extractPath]);
}
async function listFilesRecursively(dir) {
const entries = await fs.readdir(dir, { withFileTypes: true });
const files = [];
for (const entry of entries) {
const fullPath = path.join(dir, entry.name);
if (entry.isDirectory()) {
files.push(...await listFilesRecursively(fullPath));
} else {
files.push(fullPath);
}
}
return files;
}
async function pathExists(p) {
try {
await fs.access(p);
return true;
} catch {
return false;
}
}
async function fetchAllComments(owner, repo, issueNumber, token) {
let comments = [];
let page = 1;
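// Comments are paged at GH_API_PER_PAGE per request; keep fetching until a short page signals the end.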
while (true) {
const pageComments = await fetchJson(
`https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}/comments?per_page=${GH_API_PER_PAGE}&page=${page}`,
token
);
comments = comments.concat(pageComments);
if (pageComments.length < GH_API_PER_PAGE) break;
page++;
}
return comments;
}
async function fetchIssueAndComments(owner, repo, issueNumber, token) {
const issue = await fetchJson(`https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}`, token);
const comments = issue.comments > 0 ? await fetchAllComments(owner, repo, issueNumber, token) : [];
return { issue, comments };
}
function buildBlobs(issue, comments) {
return [issue.body || "", ...comments.map(c => c.body || "")].join("\n\n---\n\n");
}
server.registerTool(
"github_issue_images",
{
title: "GitHub Issue Images",
description: `Download and return images from a GitHub issue or pull request.
USE THIS TOOL WHEN:
- User asks about a GitHub issue/PR that contains screenshots, images, or visual content
- User wants to understand a bug report with attached images
- User asks to analyze, describe, or review images in an issue/PR
- User references a GitHub issue/PR URL and the context suggests images are relevant
- User asks about UI bugs, visual glitches, design issues, or anything visual in nature
WHAT IT DOES:
- Fetches all images from the issue/PR body and all comments
- Returns actual image data (not just URLs) so the LLM can see and analyze the images
- Supports PNG, JPEG, GIF, and other common image formats
EXAMPLES OF WHEN TO USE:
- "What does the bug in issue #123 look like?"
- "Can you see the screenshot in this PR?"
- "Analyze the images in microsoft/PowerToys#25595"
- "What UI problem is shown in this issue?"`,
inputSchema: {
owner: z.string(),
repo: z.string(),
issueNumber: z.number(),
maxImages: z.number().min(1).max(20).optional()
},
outputSchema: {
images: z.number(),
comments: z.number()
}
},
async ({ owner, repo, issueNumber, maxImages = 20 }) => {
try {
const token = process.env.GITHUB_TOKEN;
const { issue, comments } = await fetchIssueAndComments(owner, repo, issueNumber, token);
const blobs = buildBlobs(issue, comments);
const urls = extractImageUrls(blobs).slice(0, maxImages);
const content = [
{ type: "text", text: `Found ${urls.length} image(s) in issue #${issueNumber} (from ${comments.length} comments). Returning as image parts.` }
];
for (const url of urls) {
const { buf, mimeType } = await downloadBytes(url, token);
const b64 = Buffer.from(buf).toString("base64");
content.push({ type: "image", data: b64, mimeType });
content.push({ type: "text", text: `Image source: ${url}` });
}
const output = { images: urls.length, comments: comments.length };
return { content, structuredContent: output };
} catch (err) {
const message = err instanceof Error ? err.message : String(err);
return { content: [{ type: "text", text: `Error: ${message}` }], isError: true };
}
}
);
server.registerTool(
"github_issue_attachments",
{
title: "GitHub Issue Attachments",
description: `Download and extract ZIP file attachments from a GitHub issue or pull request.
USE THIS TOOL WHEN:
- User asks about diagnostic logs, crash reports, or debug information in an issue
- Issue contains ZIP attachments like PowerToysReport_*.zip, logs.zip, debug.zip
- User wants to analyze log files, configuration files, or system info from an issue
- User asks about error logs, stack traces, or diagnostic data attached to an issue
- Issue mentions attached files that need to be examined
WHAT IT DOES:
- Finds all ZIP file attachments in the issue body and comments
- Downloads and extracts each ZIP to a local folder
- Returns file listing and contents of text files (logs, json, xml, txt, etc.)
- Each ZIP is extracted to: {extractFolder}/{zipFileName}/
EXAMPLES OF WHEN TO USE:
- "What's in the diagnostic report attached to issue #39476?"
- "Can you check the logs in the PowerToysReport zip?"
- "Analyze the crash dump attached to this issue"
- "What error is shown in the attached log files?"`,
inputSchema: {
owner: z.string(),
repo: z.string(),
issueNumber: z.number(),
extractFolder: z.string(),
maxFiles: z.number().min(1).optional()
},
outputSchema: {
zips: z.number(),
extracted: z.number(),
extractedTo: z.array(z.string())
}
},
async ({ owner, repo, issueNumber, extractFolder, maxFiles = 50 }) => {
try {
const token = process.env.GITHUB_TOKEN;
const { issue, comments } = await fetchIssueAndComments(owner, repo, issueNumber, token);
const blobs = buildBlobs(issue, comments);
const zipUrls = extractZipUrls(blobs);
if (zipUrls.length === 0) {
return {
content: [{ type: "text", text: `No ZIP attachments found in issue #${issueNumber}.` }],
structuredContent: { zips: 0, extracted: 0, extractedTo: [] }
};
}
await fs.mkdir(extractFolder, { recursive: true });
const content = [
{ type: "text", text: `Found ${zipUrls.length} ZIP attachment(s) in issue #${issueNumber}. Extracting to: ${extractFolder}` }
];
let totalFilesReturned = 0;
const extractedPaths = [];
let extractedCount = 0;
for (const zipUrl of zipUrls) {
try {
const urlPath = new URL(zipUrl).pathname;
const zipFileName = path.basename(urlPath, ".zip");
const extractPath = path.join(extractFolder, zipFileName);
const zipPath = path.join(extractFolder, `${zipFileName}.zip`);
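// Re-use earlier runs where possible: skip the download if the ZIP is already on disk, and skip extraction if the target folder is already populated.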
let extractedFiles = [];
const extractPathExists = await pathExists(extractPath);
if (extractPathExists) {
extractedFiles = await listFilesRecursively(extractPath);
}
if (!extractPathExists || extractedFiles.length === 0) {
const zipExists = await pathExists(zipPath);
if (!zipExists) {
const buf = await downloadZipBytes(zipUrl, token);
await fs.writeFile(zipPath, buf);
}
await fs.mkdir(extractPath, { recursive: true });
await extractZipToFolder(zipPath, extractPath);
extractedFiles = await listFilesRecursively(extractPath);
}
extractedPaths.push(extractPath);
extractedCount++;
const fileList = [];
const textContents = [];
for (const fullPath of extractedFiles) {
const relPath = path.relative(extractPath, fullPath).replace(/\\/g, "/");
const ext = path.extname(relPath).toLowerCase();
const stat = await fs.stat(fullPath);
const sizeKB = Math.round(stat.size / 1024);
fileList.push(` ${relPath} (${sizeKB} KB)`);
if (TEXT_EXTENSIONS.has(ext) && totalFilesReturned < maxFiles && stat.size < MAX_TEXT_FILE_BYTES) {
try {
const textContent = await fs.readFile(fullPath, "utf-8");
textContents.push({ path: relPath, content: textContent });
totalFilesReturned++;
} catch {
// Not valid UTF-8, skip
}
}
}
content.push({ type: "text", text: `\n📦 ${zipFileName}.zip extracted to: ${extractPath}\nFiles:\n${fileList.join("\n")}` });
for (const { path: fPath, content: fContent } of textContents) {
content.push({ type: "text", text: `\n--- ${fPath} ---\n${fContent}` });
}
} catch (err) {
const message = err instanceof Error ? err.message : String(err);
content.push({ type: "text", text: `❌ Failed to extract ${zipUrl}: ${message}` });
}
}
const output = { zips: zipUrls.length, extracted: extractedCount, extractedTo: extractedPaths };
return { content, structuredContent: output };
} catch (err) {
const message = err instanceof Error ? err.message : String(err);
return { content: [{ type: "text", text: `Error: ${message}` }], isError: true };
}
}
);
const transport = new StdioServerTransport();
await server.connect(transport);
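
The two test scripts that follow drive the server with hand-rolled JSON-RPC written to stdin. For comparison, a minimal sketch using the SDK's own client classes could look like the following; it assumes the Client and StdioClientTransport exports of the pinned @modelcontextprotocol/sdk release, so treat it as illustrative rather than part of the commit:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn server.js over stdio, the same way an MCP host would.
const transport = new StdioClientTransport({ command: "node", args: ["server.js"] });
const client = new Client({ name: "smoke-test", version: "0.0.1" });
await client.connect(transport);

// Fetch a single image from an issue known to contain screenshots.
const result = await client.callTool({
  name: "github_issue_images",
  arguments: { owner: "microsoft", repo: "PowerToys", issueNumber: 25595, maxImages: 1 }
});
for (const part of result.content) {
  if (part.type === "text") console.log(part.text);
}
await client.close();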

tools/mcp/github-artifacts/test-github_issue_attachments.js

@@ -0,0 +1,141 @@
// Test script for github_issue_attachments tool
// Run with: node test-github_issue_attachments.js
// Make sure GITHUB_TOKEN is set in environment
import { spawn } from "child_process";
import fs from "fs/promises";
import path from "path";
import { fileURLToPath } from "url";
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const extractFolder = path.join(__dirname, "test-extracts");
const server = spawn("node", ["server.js"], {
stdio: ["pipe", "pipe", "inherit"],
env: { ...process.env }
});
// Send initialize request
const initRequest = {
jsonrpc: "2.0",
id: 1,
method: "initialize",
params: {
protocolVersion: "2024-11-05",
capabilities: {},
clientInfo: { name: "test-client", version: "1.0.0" }
}
};
server.stdin.write(JSON.stringify(initRequest) + "\n");
// Send list tools request
setTimeout(() => {
const listToolsRequest = {
jsonrpc: "2.0",
id: 2,
method: "tools/list",
params: {}
};
server.stdin.write(JSON.stringify(listToolsRequest) + "\n");
}, 500);
// Send call tool request - test with PowerToys issue that has ZIP attachments
setTimeout(() => {
const callToolRequest = {
jsonrpc: "2.0",
id: 3,
method: "tools/call",
params: {
name: "github_issue_attachments",
arguments: {
owner: "microsoft",
repo: "PowerToys",
issueNumber: 39476, // Has PowerToysReport_*.zip attachment
extractFolder: extractFolder,
maxFiles: 20
}
}
};
server.stdin.write(JSON.stringify(callToolRequest) + "\n");
}, 1000);
// Track summary
let summary = { zips: 0, files: 0, extractPath: "" };
let buffer = "";
// Read responses
server.stdout.on("data", (data) => {
buffer += data.toString();
// Try to parse complete JSON objects from buffer
const lines = buffer.split("\n");
buffer = lines.pop() || ""; // Keep incomplete line in buffer
for (const line of lines) {
if (!line.trim()) continue;
try {
const response = JSON.parse(line);
console.log("\n=== Response ===");
console.log("ID:", response.id);
if (response.result?.tools) {
console.log("Tools:", response.result.tools.map(t => t.name));
} else if (response.result?.content) {
for (const item of response.result.content) {
if (item.type === "text") {
// Truncate long file contents for display
const text = item.text;
// Extracted-file parts begin with "\n--- <path> ---", so trim before matching.
if (text.trimStart().startsWith("---") && text.length > 500) {
console.log(text.substring(0, 500) + "\n... [truncated]");
} else {
console.log(text);
}
// Track stats
if (text.includes("ZIP attachment")) {
const match = text.match(/Found (\d+) ZIP/);
if (match) summary.zips = parseInt(match[1]);
}
if (text.includes("extracted to:")) {
summary.extractPath = text.match(/extracted to: (.+)/)?.[1] || "";
}
if (text.includes("Files:")) {
// Count one entry per "(<n> KB)" suffix in the file listing.
summary.files = (text.match(/ KB\)/g) || []).length;
}
}
}
} else {
console.log("Result:", JSON.stringify(response.result, null, 2));
}
} catch (e) {
// Likely incomplete JSON, will be in next chunk
}
}
});
// Exit after 60 seconds (ZIP download may take time)
setTimeout(() => {
console.log("\n" + "=".repeat(50));
console.log("=== Test Summary ===");
console.log("=".repeat(50));
console.log(`ZIP files found: ${summary.zips}`);
console.log(`Files extracted: ${summary.files}`);
if (summary.extractPath) {
console.log(`Extract location: ${summary.extractPath}`);
}
console.log("=".repeat(50));
console.log("Cleaning up extracted files...");
fs.rm(extractFolder, { recursive: true, force: true })
.then(() => {
console.log("Cleanup done.");
})
.catch((err) => {
console.log(`Cleanup failed: ${err.message}`);
})
.finally(() => {
console.log("Test complete!");
server.kill();
process.exit(0);
});
}, 60000);

tools/mcp/github-artifacts/test-github_issue_images.js

@@ -0,0 +1,118 @@
// Simple test script - run with: node test-github_issue_images.js
// Make sure GITHUB_TOKEN is set in environment
import { spawn } from "child_process";
const server = spawn("node", ["server.js"], {
stdio: ["pipe", "pipe", "inherit"],
env: { ...process.env }
});
// Send initialize request
const initRequest = {
jsonrpc: "2.0",
id: 1,
method: "initialize",
params: {
protocolVersion: "2024-11-05",
capabilities: {},
clientInfo: { name: "test-client", version: "1.0.0" }
}
};
server.stdin.write(JSON.stringify(initRequest) + "\n");
// Send list tools request
setTimeout(() => {
const listToolsRequest = {
jsonrpc: "2.0",
id: 2,
method: "tools/list",
params: {}
};
server.stdin.write(JSON.stringify(listToolsRequest) + "\n");
}, 500);
// Send call tool request (test with a real issue)
setTimeout(() => {
const callToolRequest = {
jsonrpc: "2.0",
id: 3,
method: "tools/call",
params: {
name: "github_issue_images",
arguments: {
owner: "microsoft",
repo: "PowerToys",
issueNumber: 25595, // 315 comments, many images - tests pagination!
maxImages: 5
}
}
};
server.stdin.write(JSON.stringify(callToolRequest) + "\n");
}, 1000);
// Track summary
let summary = { images: 0, totalKB: 0, text: "" };
let gotToolResponse = false;
let buffer = "";
// Read responses
server.stdout.on("data", (data) => {
buffer += data.toString();
// Try to parse complete JSON objects from buffer
const lines = buffer.split("\n");
buffer = lines.pop() || ""; // Keep incomplete line in buffer
for (const line of lines) {
if (!line.trim()) continue;
try {
const response = JSON.parse(line);
console.log("\n=== Response ===");
console.log("ID:", response.id);
if (response.result?.tools) {
console.log("Tools:", response.result.tools.map(t => t.name));
} else if (response.result?.content) {
gotToolResponse = true;
let imageCount = 0;
for (const item of response.result.content) {
if (item.type === "text") {
console.log("Text:", item.text);
summary.text = item.text;
} else if (item.type === "image") {
imageCount++;
const sizeKB = Math.round(item.data.length * 0.75 / 1024); // base64 to actual size
console.log(` [Image ${imageCount}] ${item.mimeType} - ${sizeKB} KB downloaded`);
summary.images++;
summary.totalKB += sizeKB;
}
}
} else {
console.log("Result:", JSON.stringify(response.result, null, 2));
}
} catch (e) {
// Likely incomplete JSON, will be in next chunk
}
}
});
// Exit after 60 seconds (more time for downloads)
setTimeout(() => {
console.log("\n" + "=".repeat(50));
console.log("=== Test Summary ===");
console.log("=".repeat(50));
if (summary.text) {
console.log(summary.text);
}
if (summary.images > 0) {
console.log(`Total images downloaded: ${summary.images}`);
console.log(`Total size: ${summary.totalKB} KB`);
} else if (!gotToolResponse) {
console.log("No tool response received yet. The request may still be running or was rate-limited.");
}
console.log("=".repeat(50));
console.log("Test complete!");
server.kill();
process.exit(0);
}, 60000);