mirror of https://github.com/microsoft/PowerToys.git
synced 2026-01-22 03:57:13 +01:00

Compare commits
2 Commits
dev/vanzue ... user/yeela
| Author | SHA1 | Date |
|---|---|---|
| | 30bbcf32a4 | |
| | 5422bc31bb | |

6 .github/actions/spell-check/expect.txt vendored

@@ -32,6 +32,7 @@ advfirewall
AFeature
affordances
AFX
agentskills
AGGREGATABLE
AHK
AHybrid
@@ -219,6 +220,7 @@ CIELCh
cim
CImage
cla
claude
CLASSDC
classmethod
CLASSNOTAVAILABLE
@@ -1040,6 +1042,7 @@ mmi
mmsys
mobileredirect
mockapi
modelcontextprotocol
MODALFRAME
MODESPRUNED
MONITORENUMPROC
@@ -1737,6 +1740,7 @@ STICKYKEYS
sticpl
storelogo
stprintf
streamable
streamjsonrpc
STRINGIZE
stringtable
@@ -1756,6 +1760,7 @@ SUBMODULEUPDATE
subresource
Superbar
sut
swe
svchost
SVGIn
SVGIO
@@ -1962,6 +1967,7 @@ visualeffects
vkey
vmovl
VMs
vnd
vorrq
VOS
vpaddlq

92 .github/agents/FixIssue.agent.md vendored Normal file

@@ -0,0 +1,92 @@
---
description: 'Implements fixes for GitHub issues based on implementation plans'
name: 'FixIssue'
tools: ['read', 'edit', 'search', 'execute', 'agent', 'usages', 'problems', 'changes', 'testFailure', 'github/*', 'github.vscode-pull-request-github/*']
argument-hint: 'GitHub issue number (e.g., #12345)'
infer: true
---

# FixIssue Agent

You are an **IMPLEMENTATION AGENT** specialized in executing implementation plans to fix GitHub issues.

## Identity & Expertise

- Expert at translating plans into working code
- Deep knowledge of PowerToys codebase patterns and conventions
- Skilled at writing tests, handling edge cases, and validating builds
- You follow plans precisely while handling ambiguity gracefully

## Goal

For the given **issue_number**, execute the implementation plan and produce:

1. Working code changes applied directly to the repository
2. `Generated Files/issueFix/{{issue_number}}/pr-description.md` — PR-ready description
3. `Generated Files/issueFix/{{issue_number}}/manual-steps.md` — Only if human action is needed

## Core Directive

**Follow the implementation plan in `Generated Files/issueReview/{{issue_number}}/implementation-plan.md` as the single source of truth.**

If the plan doesn't exist, invoke the PlanIssue agent first via `runSubagent`.

## Working Principles

- **Plan First**: Read and understand the entire implementation plan before coding
- **Validate Always**: For each change: Edit → Build → Verify → Commit. Never proceed if the build fails.
- **Atomic Commits**: Each commit must be self-contained, buildable, and meaningful
- **Ask, Don't Guess**: When uncertain, insert `// TODO(Human input needed): <question>` and document it in manual-steps.md

## Strategy

**Core Loop** — For every unit of work:

1. **Edit**: Make focused changes to implement one logical piece
2. **Build**: Run `tools\build\build.cmd` and check for exit code 0
3. **Verify**: Use the `problems` tool for lint/compile errors; run relevant tests
4. **Commit**: Only after the build passes — use `.github/prompts/create-commit-title.prompt.md`

Never skip steps. Never commit broken code. Never proceed if the build fails.

**Feature-by-Feature E2E**: For big scenarios with multiple features, complete each feature end-to-end before moving to the next:

- Settings UI → Functionality → Logging → Tests (for Feature 1)
- Then repeat for Feature 2
- Benefits: each feature is self-contained, testable, easier to review, and can ship incrementally

**Large Changes** (3+ files or cross-module):

- Use `tools\build\New-WorktreeFromBranch.ps1` for isolated worktrees
- Create separate branches per feature (e.g., `issue/{{issue_number}}-export`, `issue/{{issue_number}}-import`)
- Merge feature branches back after each is validated

**Recovery**: If implementation goes wrong:

- Create a checkpoint branch before risky changes
- On failure: branch from the last known-good state, cherry-pick working changes, and abandon the broken branch
- For complex changes, consider multiple smaller PRs

## Guidelines

**DO**:

- Follow the plan exactly
- Validate the build before every commit — **NEVER commit broken code**
- Use `.github/prompts/create-commit-title.prompt.md` for commit messages
- Add comprehensive tests for changed behavior
- Use worktrees for large changes (3+ files or cross-module)
- Document deviations from the plan

**DON'T**:

- Implement everything in a single massive commit
- Continue after a failed build without fixing it
- Make drive-by refactors outside the issue's scope
- Skip tests for behavioral changes
- Add noisy logs in hot paths
- Break IPC/JSON contracts without updating both sides
- Introduce dependencies without documenting them in NOTICE.md

## References

- [Build Guidelines](../../tools/build/BUILD-GUIDELINES.md) — Build commands and validation
- [Coding Style](../../doc/devdocs/development/style.md) — Formatting and conventions
- [AGENTS.md](../../AGENTS.md) — Full contributor guide

## Parameter

- **issue_number**: Extract from `#123`, `issue 123`, or a plain number. If missing, ask the user.

65 .github/agents/PlanIssue.agent.md vendored Normal file

@@ -0,0 +1,65 @@
---
description: 'Analyzes GitHub issues to produce overview and implementation plans'
name: 'PlanIssue'
tools: ['execute', 'read', 'edit', 'search', 'web', 'github/*', 'agent', 'github-artifacts/*', 'todo']
argument-hint: 'GitHub issue number (e.g., #12345)'
handoffs:
  - label: Start Implementation
    agent: FixIssue
    prompt: 'Fix issue #{{issue_number}} using the implementation plan'
  - label: Open Plan in Editor
    agent: agent
    prompt: 'Open Generated Files/issueReview/{{issue_number}}/overview.md and implementation-plan.md'
    showContinueOn: false
    send: true
infer: true
---

# PlanIssue Agent

You are a **PLANNING AGENT** specialized in analyzing GitHub issues and producing comprehensive planning documentation.

## Identity & Expertise

- Expert at issue triage, priority scoring, and technical analysis
- Deep knowledge of PowerToys architecture and codebase patterns
- Skilled at breaking down problems into actionable implementation steps
- You research thoroughly before planning, gathering 80% confidence before drafting

## Goal

For the given **issue_number**, produce three deliverables:

1. `Generated Files/issueReview/{{issue_number}}/overview.md` — Issue analysis with scoring
2. `Generated Files/issueReview/{{issue_number}}/implementation-plan.md` — Technical implementation plan
3. `Generated Files/issueReview/{{issue_number}}/logs/**` — Logs of your root-cause diagnosis, research steps, and reasoning

These files are the core interaction with the end user; if you cannot produce them, you fail the task. Each time, check whether the files already exist or have been modified by the end user, without assuming you know their contents.

## Core Directive

**Follow the template in `.github/prompts/review-issue.prompt.md` exactly.** Read it first, then apply every section as specified.

- Fetch issue details: reactions, comments, linked PRs, images, logs
- Search related code and similar past fixes
- Ask clarifying questions when ambiguous
- Identify subject matter experts via git history

<stopping_rules>
You are a PLANNING agent, NOT an implementation agent.

STOP if you catch yourself:
- Writing code or editing source files outside `Generated Files/issueReview/`
- Making assumptions without researching
- Skipping the scoring/assessment phase

Plans describe what the USER or FixIssue agent will execute later.
</stopping_rules>

## References

- [Review Issue Prompt](../prompts/review-issue.prompt.md) — Template for plan structure
- [Architecture Overview](../../doc/devdocs/core/architecture.md) — System design context
- [AGENTS.md](../../AGENTS.md) — Full contributor guide

## Parameter

- **issue_number**: Extract from `#123`, `issue 123`, or a plain number. If missing, ask the user.

261 .github/instructions/agent-skills.instructions.md vendored Normal file

@@ -0,0 +1,261 @@
---
description: 'Guidelines for creating high-quality Agent Skills for GitHub Copilot'
applyTo: '**/.github/skills/**/SKILL.md, **/.claude/skills/**/SKILL.md'
---

# Agent Skills File Guidelines

Instructions for creating effective and portable Agent Skills that enhance GitHub Copilot with specialized capabilities, workflows, and bundled resources.

## What Are Agent Skills?

Agent Skills are self-contained folders with instructions and bundled resources that teach AI agents specialized capabilities. Unlike custom instructions (which define coding standards), skills enable task-specific workflows that can include scripts, examples, templates, and reference data.

Key characteristics:

- **Portable**: Works across VS Code, Copilot CLI, and the Copilot coding agent
- **Progressive loading**: Only loaded when relevant to the user's request
- **Resource-bundled**: Can include scripts, templates, and examples alongside instructions
- **On-demand**: Activated automatically based on prompt relevance

## Directory Structure

Skills are stored in specific locations:

| Location | Scope | Recommendation |
|----------|-------|----------------|
| `.github/skills/<skill-name>/` | Project/repository | Recommended for project skills |
| `.claude/skills/<skill-name>/` | Project/repository | Legacy, for backward compatibility |
| `~/.github/skills/<skill-name>/` | Personal (user-wide) | Recommended for personal skills |
| `~/.claude/skills/<skill-name>/` | Personal (user-wide) | Legacy, for backward compatibility |

Each skill **must** have its own subdirectory containing at minimum a `SKILL.md` file.

## Required SKILL.md Format

### Frontmatter (Required)

```yaml
---
name: webapp-testing
description: Toolkit for testing local web applications using Playwright. Use when asked to verify frontend functionality, debug UI behavior, capture browser screenshots, check for visual regressions, or view browser console logs. Supports Chrome, Firefox, and WebKit browsers.
license: Complete terms in LICENSE.txt
---
```

| Field | Required | Constraints |
|-------|----------|-------------|
| `name` | Yes | Lowercase, hyphens for spaces, max 64 characters (e.g., `webapp-testing`) |
| `description` | Yes | Clear description of capabilities AND use cases, max 1024 characters |
| `license` | No | Reference to LICENSE.txt (e.g., `Complete terms in LICENSE.txt`) or an SPDX identifier |

### Description Best Practices

**CRITICAL**: The `description` field is the PRIMARY mechanism for automatic skill discovery. Copilot reads ONLY the `name` and `description` to decide whether to load a skill. If your description is vague, the skill will never be activated.

**What to include in the description:**

1. **WHAT** the skill does (capabilities)
2. **WHEN** to use it (specific triggers, scenarios, file types, or user requests)
3. **Keywords** that users might mention in their prompts

**Good description:**

```yaml
description: Toolkit for testing local web applications using Playwright. Use when asked to verify frontend functionality, debug UI behavior, capture browser screenshots, check for visual regressions, or view browser console logs. Supports Chrome, Firefox, and WebKit browsers.
```

**Poor description:**

```yaml
description: Web testing helpers
```

The poor description fails because:

- No specific triggers (when should Copilot load this?)
- No keywords (what user prompts would match?)
- No capabilities (what can it actually do?)

### Body Content

The body contains detailed instructions that Copilot loads AFTER the skill is activated. Recommended sections:

| Section | Purpose |
|---------|---------|
| `# Title` | Brief overview of what this skill enables |
| `## When to Use This Skill` | List of scenarios (reinforces description triggers) |
| `## Prerequisites` | Required tools, dependencies, environment setup |
| `## Step-by-Step Workflows` | Numbered steps for common tasks |
| `## Troubleshooting` | Common issues and solutions table |
| `## References` | Links to bundled docs or external resources |

## Bundling Resources

Skills can include additional files that Copilot accesses on demand:

### Supported Resource Types

| Folder | Purpose | Loaded into Context? | Example Files |
|--------|---------|---------------------|---------------|
| `scripts/` | Executable automation that performs specific operations | When executed | `helper.py`, `validate.sh`, `build.ts` |
| `references/` | Documentation the AI agent reads to inform decisions | Yes, when referenced | `api_reference.md`, `schema.md`, `workflow_guide.md` |
| `assets/` | **Static files used AS-IS** in output (not modified by the AI agent) | No | `logo.png`, `brand-template.pptx`, `custom-font.ttf` |
| `templates/` | **Starter code/scaffolds that the AI agent MODIFIES** and builds upon | Yes, when referenced | `viewer.html` (insert algorithm), `hello-world/` (extend) |

### Directory Structure Example

```
.github/skills/my-skill/
├── SKILL.md               # Required: Main instructions
├── LICENSE.txt            # Recommended: License terms (Apache 2.0 typical)
├── scripts/               # Optional: Executable automation
│   ├── helper.py          # Python script
│   └── helper.ps1         # PowerShell script
├── references/            # Optional: Documentation loaded into context
│   ├── api_reference.md
│   ├── step1-setup.md     # Detailed workflow (>3 steps)
│   └── step2-deployment.md
├── assets/                # Optional: Static files used AS-IS in output
│   ├── baseline.png       # Reference image for comparison
│   └── report-template.html
└── templates/             # Optional: Starter code the AI agent modifies
    ├── scaffold.py        # Code scaffold the AI agent customizes
    └── config.template    # Config template the AI agent fills in
```

> **LICENSE.txt**: When creating a skill, download the Apache 2.0 license text from https://www.apache.org/licenses/LICENSE-2.0.txt and save it as `LICENSE.txt`. Update the copyright year and owner in the appendix section.

### Assets vs Templates: Key Distinction

**Assets** are static resources **consumed unchanged** in the output:

- A `logo.png` that gets embedded into a generated document
- A `report-template.html` copied as the output format
- A `custom-font.ttf` applied to text rendering

**Templates** are starter code/scaffolds that **the AI agent actively modifies**:

- A `scaffold.py` where the AI agent inserts logic
- A `config.template` where the AI agent fills in values based on user requirements
- A `hello-world/` project directory that the AI agent extends with new features

**Rule of thumb**: If the AI agent reads and builds upon the file content → `templates/`. If the file is used as-is in output → `assets/`.

### Referencing Resources in SKILL.md

Use relative paths to reference files within the skill directory:

```markdown
## Available Scripts

Run the [helper script](./scripts/helper.py) to automate common tasks.

See the [API reference](./references/api_reference.md) for detailed documentation.

Use the [scaffold](./templates/scaffold.py) as a starting point.
```

## Progressive Loading Architecture

Skills use three-level loading for efficiency:

| Level | What Loads | When |
|-------|------------|------|
| 1. Discovery | `name` and `description` only | Always (lightweight metadata) |
| 2. Instructions | Full `SKILL.md` body | When the request matches the description |
| 3. Resources | Scripts, examples, docs | Only when Copilot references them |

This means:

- Install many skills without consuming context
- Only relevant content loads per task
- Resources don't load until explicitly needed

## Content Guidelines

### Writing Style

- Use imperative mood: "Run", "Create", "Configure" (not "You should run")
- Be specific and actionable
- Include exact commands with parameters
- Show expected outputs where helpful
- Keep sections focused and scannable

### Script Requirements

When including scripts, prefer cross-platform languages:

| Language | Use Case |
|----------|----------|
| Python | Complex automation, data processing |
| pwsh | PowerShell Core scripting |
| Node.js | JavaScript-based tooling |
| Bash/Shell | Simple automation tasks |

Best practices (see the sketch after this list):

- Include help/usage documentation (`--help` flag)
- Handle errors gracefully with clear messages
- Avoid storing credentials or secrets
- Use relative paths where possible
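
As a rough illustration of those practices, here is a minimal Node.js/TypeScript skeleton. The file name, flag names, and messages are illustrative assumptions, not part of the skills specification:

```typescript
#!/usr/bin/env node
// scripts/helper.ts: hypothetical example; adapt names and flags to your skill.

function usage(): void {
  console.log('Usage: helper --input <file-or-url> [--verbose]');
}

const args = process.argv.slice(2);

// Help/usage documentation via a --help flag.
if (args.length === 0 || args.includes('--help')) {
  usage();
  process.exit(0);
}

try {
  const inputIdx = args.indexOf('--input');
  if (inputIdx === -1 || !args[inputIdx + 1]) {
    throw new Error('--input is required');
  }
  const verbose = args.includes('--verbose');
  // ... perform the skill's actual operation here ...
  if (verbose) console.log(`Processing ${args[inputIdx + 1]}`);
} catch (err) {
  // Handle errors gracefully with a clear message, then exit non-zero.
  console.error(`helper: ${(err as Error).message}`);
  usage();
  process.exit(1);
}
```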

### When to Bundle Scripts

Include scripts in your skill when:

- The same code would be rewritten repeatedly by the agent
- Deterministic reliability is critical (e.g., file manipulation, API calls)
- Complex logic benefits from being pre-tested rather than generated each time
- The operation has a self-contained purpose that can evolve independently
- Testability matters — scripts can be unit tested and validated
- Predictable behavior is preferred over dynamic generation

Scripts enable evolution: even simple operations benefit from being implemented as scripts when they may grow in complexity, need consistent behavior across invocations, or require future extensibility.

### Security Considerations

- Scripts rely on existing credential helpers (no credential storage)
- Include `--force` flags only for destructive operations
- Warn users before irreversible actions
- Document any network operations or external calls

## Common Patterns

### Parameter Table Pattern

Document parameters clearly:

```markdown
| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| `--input` | Yes | - | Input file or URL to process |
| `--action` | Yes | - | Action to perform |
| `--verbose` | No | `false` | Enable verbose output |
```

## Validation Checklist

Before publishing a skill (the mechanical checks here can be scripted; see the sketch after this list):

- [ ] `SKILL.md` has valid frontmatter with `name` and `description`
- [ ] `name` is lowercase with hyphens, ≤64 characters
- [ ] `description` clearly states **WHAT** it does, **WHEN** to use it, and relevant **KEYWORDS**
- [ ] Body includes when to use, prerequisites, and step-by-step workflows
- [ ] SKILL.md body kept under 500 lines (split large content into the `references/` folder)
- [ ] Large workflows (>5 steps) split into the `references/` folder with clear links from SKILL.md
- [ ] Scripts include help documentation and error handling
- [ ] Relative paths used for all resource references
- [ ] No hardcoded credentials or secrets
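
The sketch below automates the frontmatter checks in TypeScript. It is illustrative only: the file name is hypothetical, and it uses naive regex parsing where a real linter should use a YAML parser:

```typescript
// check-skill.ts: hypothetical frontmatter checks for the constraints above.
import { readFileSync } from 'node:fs';

function checkSkill(path: string): string[] {
  const problems: string[] = [];
  const text = readFileSync(path, 'utf8');
  const fm = /^---\r?\n([\s\S]*?)\r?\n---/.exec(text)?.[1];
  if (!fm) return ['missing frontmatter block'];

  const name = /^name:\s*(.+)$/m.exec(fm)?.[1]?.trim();
  const description = /^description:\s*(.+)$/m.exec(fm)?.[1]?.trim();

  if (!name) problems.push('missing required field: name');
  else if (name.length > 64 || !/^[a-z0-9]+(-[a-z0-9]+)*$/.test(name))
    problems.push('name must be lowercase, hyphenated, and at most 64 characters');

  if (!description) problems.push('missing required field: description');
  else if (description.length > 1024)
    problems.push('description must be at most 1024 characters');

  return problems;
}

console.log(checkSkill('.github/skills/my-skill/SKILL.md'));
```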

## Workflow Execution Pattern

When executing multi-step workflows, create a TODO list where each step references the relevant documentation:

```markdown
## TODO
- [ ] Step 1: Configure environment - see [workflow-setup.md](./references/workflow-setup.md#environment)
- [ ] Step 2: Build project - see [workflow-setup.md](./references/workflow-setup.md#build)
- [ ] Step 3: Deploy to staging - see [workflow-deployment.md](./references/workflow-deployment.md#staging)
- [ ] Step 4: Run validation - see [workflow-deployment.md](./references/workflow-deployment.md#validation)
- [ ] Step 5: Deploy to production - see [workflow-deployment.md](./references/workflow-deployment.md#production)
```

This ensures traceability and allows resuming workflows if interrupted.

## Related Resources

- [Agent Skills Specification](https://agentskills.io/)
- [VS Code Agent Skills Documentation](https://code.visualstudio.com/docs/copilot/customization/agent-skills)
- [Reference Skills Repository](https://github.com/anthropics/skills)
- [Awesome Copilot Skills](https://github.com/github/awesome-copilot/blob/main/docs/README.skills.md)

228 .github/instructions/typescript-mcp-server.instructions.md vendored Normal file

@@ -0,0 +1,228 @@
---
description: 'Instructions for building Model Context Protocol (MCP) servers using the TypeScript SDK'
applyTo: '**/*.ts, **/*.js, **/package.json'
---

# TypeScript MCP Server Development

## Instructions

- Use the **@modelcontextprotocol/sdk** npm package: `npm install @modelcontextprotocol/sdk`
- Import from specific paths: `@modelcontextprotocol/sdk/server/mcp.js`, `@modelcontextprotocol/sdk/server/stdio.js`, etc.
- Use the `McpServer` class for high-level server implementation with automatic protocol handling
- Use the `Server` class for low-level control with manual request handlers
- Use **zod** for input/output schema validation: `npm install zod@3`
- Always provide a `title` field for tools, resources, and prompts for better UI display
- Use the `registerTool()`, `registerResource()`, and `registerPrompt()` methods (recommended over older APIs)
- Define schemas using zod: `{ inputSchema: { param: z.string() }, outputSchema: { result: z.string() } }`
- Return both `content` (for display) and `structuredContent` (for structured data) from tools
- For HTTP servers, use `StreamableHTTPServerTransport` with Express or similar frameworks
- For local integrations, use `StdioServerTransport` for stdio-based communication
- Create new transport instances per request to prevent request ID collisions (stateless mode)
- Use session management with `sessionIdGenerator` for stateful servers (see the stateful pattern under Common Patterns)
- Enable DNS rebinding protection for local servers: `enableDnsRebindingProtection: true`
- Configure CORS headers and expose `Mcp-Session-Id` for browser-based clients
- Use `ResourceTemplate` for dynamic resources with URI parameters: `new ResourceTemplate('resource://{param}', { list: undefined })`
- Support completions for better UX using the `completable()` wrapper from `@modelcontextprotocol/sdk/server/completable.js`
- Implement sampling with `server.server.createMessage()` to request LLM completions from clients
- Use `server.server.elicitInput()` to request additional user input during tool execution
- Enable notification debouncing for bulk updates: `debouncedNotificationMethods: ['notifications/tools/list_changed']`
- Dynamic updates: call `.enable()`, `.disable()`, `.update()`, or `.remove()` on registered items to emit `listChanged` notifications (see the Dynamic Tool Updates pattern below)
- Use `getDisplayName()` from `@modelcontextprotocol/sdk/shared/metadataUtils.js` for UI display names
- Test servers with MCP Inspector: `npx @modelcontextprotocol/inspector`

## Best Practices

- Keep tool implementations focused on single responsibilities
- Provide clear, descriptive titles and descriptions for LLM understanding
- Use proper TypeScript types for all parameters and return values
- Implement comprehensive error handling with try-catch blocks
- Return `isError: true` in tool results for error conditions
- Use async/await for all asynchronous operations
- Close database connections and clean up resources properly
- Validate input parameters before processing
- Use structured logging for debugging without polluting stdout/stderr
- Consider security implications when exposing file system or network access
- Implement proper resource cleanup on transport close events
- Use environment variables for configuration (ports, API keys, etc.)
- Document tool capabilities and limitations clearly
- Test with multiple clients to ensure compatibility

## Common Patterns

### Basic Server Setup (HTTP)

```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import express from 'express';

const server = new McpServer({
  name: 'my-server',
  version: '1.0.0'
});

const app = express();
app.use(express.json());

app.post('/mcp', async (req, res) => {
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
    enableJsonResponse: true
  });

  res.on('close', () => transport.close());

  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```
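
### Basic Server Setup (HTTP, Stateful Sessions)

The stateless setup above creates a throwaway transport per request. When you need stateful sessions via `sessionIdGenerator`, the sketch below shows one way to wire that up, reusing `server` and `app` from the previous example. The in-memory `transports` map and the error response shape are illustrative assumptions, not SDK requirements:

```typescript
import { randomUUID } from 'node:crypto';
import { isInitializeRequest } from '@modelcontextprotocol/sdk/types.js';

// One transport per active session, keyed by the Mcp-Session-Id header.
const transports: Record<string, StreamableHTTPServerTransport> = {};

app.post('/mcp', async (req, res) => {
  const sessionId = req.headers['mcp-session-id'] as string | undefined;

  if (sessionId && transports[sessionId]) {
    // Existing session: route the request to its transport.
    await transports[sessionId].handleRequest(req, res, req.body);
    return;
  }

  if (isInitializeRequest(req.body)) {
    // New session: create a transport and remember it once the session id exists.
    const transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => randomUUID(),
      onsessioninitialized: (id) => { transports[id] = transport; },
      enableDnsRebindingProtection: true, // recommended for local servers
      allowedHosts: ['127.0.0.1']
    });
    transport.onclose = () => {
      if (transport.sessionId) delete transports[transport.sessionId];
    };
    await server.connect(transport);
    await transport.handleRequest(req, res, req.body);
    return;
  }

  res.status(400).json({ error: 'Bad request: no valid session ID provided' });
});
```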

### Basic Server Setup (stdio)

```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';

const server = new McpServer({
  name: 'my-server',
  version: '1.0.0'
});

// ... register tools, resources, prompts ...

const transport = new StdioServerTransport();
await server.connect(transport);
```

### Simple Tool

```typescript
import { z } from 'zod';

server.registerTool(
  'calculate',
  {
    title: 'Calculator',
    description: 'Perform basic calculations',
    inputSchema: { a: z.number(), b: z.number(), op: z.enum(['+', '-', '*', '/']) },
    outputSchema: { result: z.number() }
  },
  async ({ a, b, op }) => {
    const result = op === '+' ? a + b : op === '-' ? a - b :
                   op === '*' ? a * b : a / b;
    const output = { result };
    return {
      content: [{ type: 'text', text: JSON.stringify(output) }],
      structuredContent: output
    };
  }
);
```

### Dynamic Resource

```typescript
import { ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js';

server.registerResource(
  'user',
  new ResourceTemplate('users://{userId}', { list: undefined }),
  {
    title: 'User Profile',
    description: 'Fetch user profile data'
  },
  async (uri, { userId }) => ({
    contents: [{
      uri: uri.href,
      text: `User ${userId} data here`
    }]
  })
);
```

### Tool with Sampling

```typescript
server.registerTool(
  'summarize',
  {
    title: 'Text Summarizer',
    description: 'Summarize text using LLM',
    inputSchema: { text: z.string() },
    outputSchema: { summary: z.string() }
  },
  async ({ text }) => {
    const response = await server.server.createMessage({
      messages: [{
        role: 'user',
        content: { type: 'text', text: `Summarize: ${text}` }
      }],
      maxTokens: 500
    });

    const summary = response.content.type === 'text' ?
      response.content.text : 'Unable to summarize';
    const output = { summary };
    return {
      content: [{ type: 'text', text: JSON.stringify(output) }],
      structuredContent: output
    };
  }
);
```

### Prompt with Completion

```typescript
import { completable } from '@modelcontextprotocol/sdk/server/completable.js';

server.registerPrompt(
  'review',
  {
    title: 'Code Review',
    description: 'Review code with specific focus',
    argsSchema: {
      language: completable(z.string(), value =>
        ['typescript', 'python', 'javascript', 'java']
          .filter(l => l.startsWith(value))
      ),
      code: z.string()
    }
  },
  ({ language, code }) => ({
    messages: [{
      role: 'user',
      content: {
        type: 'text',
        text: `Review this ${language} code:\n\n${code}`
      }
    }]
  })
);
```

### Error Handling

```typescript
server.registerTool(
  'risky-operation',
  {
    title: 'Risky Operation',
    description: 'An operation that might fail',
    inputSchema: { input: z.string() },
    outputSchema: { result: z.string() }
  },
  async ({ input }) => {
    try {
      const result = await performRiskyOperation(input);
      const output = { result };
      return {
        content: [{ type: 'text', text: JSON.stringify(output) }],
        structuredContent: output
      };
    } catch (err: unknown) {
      const error = err as Error;
      return {
        content: [{ type: 'text', text: `Error: ${error.message}` }],
        isError: true
      };
    }
  }
);
```
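
### Dynamic Tool Updates

`registerTool()` (like `registerResource()` and `registerPrompt()`) returns a handle whose `.enable()`, `.disable()`, `.update()`, and `.remove()` methods emit `listChanged` notifications automatically. A minimal sketch, assuming the `server` and `z` imports from the patterns above; the tool itself is a placeholder:

```typescript
const betaTool = server.registerTool(
  'beta-feature',
  {
    title: 'Beta Feature',
    description: 'Experimental operation, hidden until enabled',
    inputSchema: { input: z.string() }
  },
  async ({ input }) => ({
    content: [{ type: 'text', text: `Processed: ${input}` }]
  })
);

// Hide the tool from tools/list; the SDK emits notifications/tools/list_changed.
betaTool.disable();

// Later, e.g. after a feature flag flips, make it visible again.
// This emits another list_changed notification; update() and remove()
// notify clients in the same way.
betaTool.enable();
```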

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Generate an 80-character git commit title for the local diff'
---

1 .github/prompts/create-pr-summary.prompt.md vendored

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Generate a PowerToys-ready pull request description from the local diff'
---

1 .github/prompts/fix-issue.prompt.md vendored

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Execute the fix for a GitHub issue using the previously generated implementation plan'
---

1 .github/prompts/fix-spelling.prompt.md vendored

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Resolve Code scanning / check-spelling comments on the active PR'
---

11 .github/prompts/review-issue.prompt.md vendored

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Review a GitHub issue, score it (0-100), and generate an implementation plan'
---

@@ -15,8 +14,14 @@ For **#{{issue_number}}** produce:
 Figure out required inputs {{issue_number}} from the invocation context; if anything is missing, ask for the value or note it as a gap.
 
 # CONTEXT (brief)
-Ground evidence using `gh issue view {{issue_number}} --json number,title,body,author,createdAt,updatedAt,state,labels,milestone,reactions,comments,linkedPullRequests`, and download images to better understand the issue context.
-Locate source code in the current workspace; feel free to use `rg`/`git grep`. Link related issues and PRs.
+Ground evidence using `gh issue view {{issue_number}} --json number,title,body,author,createdAt,updatedAt,state,labels,milestone,reactions,comments,linkedPullRequests`, and download images via MCP `github_issue_images` to better understand the issue context. Finally, use MCP `github_issue_attachments` to download logs with the parameter `extractFolder` set to `Generated Files/issueReview/{{issue_number}}/logs`, and analyze the downloaded logs, if available, to identify relevant issues. Locate the source code in the current workspace (use `rg`/`git grep` as needed). Link related issues and PRs.
+
+## When to call MCP tools
+If the following MCP "github-artifacts" tools are available in the environment, use them:
+- `github_issue_images`: use when the issue/PR likely contains screenshots or other visual evidence (UI bugs, glitches, design problems).
+- `github_issue_attachments`: use when the issue/PR mentions attached ZIPs (PowerToysReport_*.zip, logs.zip, debug.zip) or asks to analyze logs/diagnostics. Always provide `extractFolder` as `Generated Files/issueReview/{{issue_number}}/logs`.
+
+If these tools are not available (not listed by the runtime), start the MCP server "github-artifacts" first.
 
 # OVERVIEW.MD
 ## Summary

1 .github/prompts/review-pr.prompt.md vendored

@@ -1,6 +1,5 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Perform a comprehensive PR review with per-step Markdown and machine-readable outputs'
---

201 .github/skills/release-note-generation/LICENSE.txt vendored Normal file

@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2026 Microsoft Corporation

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

132 .github/skills/release-note-generation/SKILL.md vendored Normal file

@@ -0,0 +1,132 @@
---
name: release-note-generation
description: Toolkit for generating PowerToys release notes from GitHub milestone PRs or commit ranges. Use when asked to create release notes, summarize milestone PRs, generate a changelog, prepare release documentation, request Copilot reviews for PRs, update the README for a new release, manage PR milestones, or collect PRs between commits/tags. Supports PR collection by milestone or commit range, milestone assignment, grouping by label, summarization with external contributor attribution, and README version bumping.
license: Complete terms in LICENSE.txt
---

# Release Note Generation Skill

Generate professional release notes for PowerToys milestones by collecting merged PRs, requesting Copilot code reviews, grouping by label, and producing user-facing summaries.

## Output Directory

All generated artifacts are placed under `Generated Files/ReleaseNotes/` at the repository root (gitignored).

```
Generated Files/ReleaseNotes/
├── milestone_prs.json          # Raw PR data from GitHub
├── sorted_prs.csv              # Sorted PR list with Copilot summaries
├── prs_with_milestone.csv      # Milestone assignment tracking
├── grouped_csv/                # PRs grouped by label (one CSV per label)
├── grouped_md/                 # Generated markdown summaries per label
└── v{VERSION}-release-notes.md # Final consolidated release notes
```

## When to Use This Skill

- Generate release notes for a milestone
- Summarize PRs merged in a release
- Request Copilot reviews for milestone PRs
- Assign milestones to PRs missing them
- Collect PRs between two commits/tags
- Update README.md for a new version

## Prerequisites

- GitHub CLI (`gh`) installed and authenticated
- MCP server: github-mcp-server installed
- GitHub Copilot code review enabled for the org/repo

## Required Variables

⚠️ **Before starting**, confirm `{{ReleaseVersion}}` with the user. If not provided, **ASK**: "What release version are we generating notes for? (e.g., 0.98)"

| Variable | Description | Example |
|----------|-------------|---------|
| `{{ReleaseVersion}}` | Target release version | `0.98` |

## Workflow Overview

```
┌────────────────────────────────┐
│ 1.1 Collect PRs (stable range) │
└────────────────────────────────┘
                ↓
┌────────────────────────────────┐
│ 1.2 Assign Milestones          │
└────────────────────────────────┘
                ↓
┌────────────────────────────────┐
│ 2.1–2.4 Label PRs (auto+human) │
└────────────────────────────────┘
                ↓
┌────────────────────────────────┐
│ 3.1 Request Reviews (Copilot)  │
└────────────────────────────────┘
                ↓
┌────────────────────────────────┐
│ 3.2 Refresh PR data            │
│     (CopilotSummary)           │
└────────────────────────────────┘
                ↓
┌────────────────────────────────┐
│ 3.3 Group by label             │
│     (grouped_csv)              │
└────────────────────────────────┘
                ↓
┌────────────────────────────────┐
│ 4.1 Summarize (grouped_md)     │
└────────────────────────────────┘
                ↓
┌────────────────────────────────┐
│ 4.2 Final notes (v{VERSION}.md)│
└────────────────────────────────┘
```

| Step | Action | Details |
|------|--------|---------|
| 1.1 | Collect PRs | From the previous release tag on the `stable` branch → `sorted_prs.csv` |
| 1.2 | Assign Milestones | Ensure all PRs have the correct milestone |
| 2.1–2.4 | Label PRs | Auto-suggest, then have a human label low-confidence PRs |
| 3.1–3.3 | Reviews & Grouping | Request Copilot reviews → refresh → group by label |
| 4.1–4.2 | Summaries & Final | Generate grouped summaries, then consolidate |

## Detailed workflow docs

Do not read all steps at once—only read the step you are executing.

- [Step 1: Collection & Milestones](./references/step1-collection.md)
- [Step 2: Labeling PRs](./references/step2-labeling.md)
- [Step 3: Reviews & Grouping](./references/step3-review-grouping.md)
- [Step 4: Summarization](./references/step4-summarization.md)

## Available Scripts

| Script | Purpose |
|--------|---------|
| [dump-prs-since-commit.ps1](./scripts/dump-prs-since-commit.ps1) | Fetch PRs between commits/tags |
| [group-prs-by-label.ps1](./scripts/group-prs-by-label.ps1) | Group PRs into CSVs |
| [collect-or-apply-milestones.ps1](./scripts/collect-or-apply-milestones.ps1) | Assign milestones |
| [diff_prs.ps1](./scripts/diff_prs.ps1) | Incremental PR diff |

## References

- [Sample Output](./references/SampleOutput.md) - Example summary formatting
- [Detailed Instructions](./references/Instruction.md) - Legacy full documentation

## Conventions

- **Terminal usage**: Disabled by default; only run scripts when the user explicitly requests it
- **Batch generation**: Generate ALL grouped_md files in one pass, then a human reviews them
- **PR order**: Preserve the order from `sorted_prs.csv` in all outputs
- **Label filtering**: Keeps `Product-*`, `Area-*`, `GitHub*`, `*Plugin`, `Issue-*`

## Troubleshooting

| Issue | Solution |
|-------|----------|
| `gh` command not found | Install the GitHub CLI and add it to PATH |
| No PRs returned | Verify the milestone title matches exactly |
| Empty CopilotSummary | Request Copilot reviews first, then re-run the dump |
| Many unlabeled PRs | Return to the labeling step before grouping |

9 .github/skills/release-note-generation/references/SampleOutput.md vendored Normal file

@@ -0,0 +1,9 @@
- Added mouse button actions so you can choose what left, right, or middle click does. Thanks [@PesBandi](https://github.com/PesBandi)!

- Aligned window styling with the current Windows theme for a cleaner look. Thanks [@sadirano](https://github.com/sadirano)!

- Ensured screen readers are notified when the selected item in the list changes for better accessibility.

- Implemented a configurable UI test pipeline that can use pre-built official releases instead of building everything from scratch, reducing test execution time from 2+ hours.

- Fixed Alt+Left Arrow navigation not working when the search box contains text. Thanks [@jiripolasek](https://github.com/jiripolasek)!

143 .github/skills/release-note-generation/references/step1-collection.md vendored Normal file
@@ -0,0 +1,143 @@
# Step 1: Collection and Milestones

## 1.0 To-do
- 1.0.1 Generate MemberList.md (REQUIRED)
- 1.1 Collect PRs
- 1.2 Assign Milestones (REQUIRED)

## Required Variables

⚠️ **Before starting**, confirm these values with the user:

| Variable | Description | Example |
|----------|-------------|---------|
| `{{ReleaseVersion}}` | Target release version | `0.97` |
| `{{PreviousReleaseTag}}` | Previous release tag from releases page | `v0.96.1` |

**If user hasn't specified `{{ReleaseVersion}}`, ASK:** "What release version are we generating notes for? (e.g., 0.97)"

**`{{PreviousReleaseTag}}` is derived from the releases page, not user input.** Use the latest published release tag (top of the page). You will use its tag name and tag commit SHA in Step 1.

---

## 1.0.1 Generate MemberList.md (REQUIRED)

Create `Generated Files/ReleaseNotes/MemberList.md` from the **PowerToys core team** section in [COMMUNITY.md](../../../COMMUNITY.md).

Rules:
- One GitHub username per line, **no** `@` prefix.
- Use the usernames exactly as listed in the core team section.
- Do not include former team members or other sections.

Example (format only):

```
example-user
another-user
```

---

## 1.1 Collect PRs

### 1.1.1 Get the previous release commit

1. Open the [PowerToys releases page](https://github.com/microsoft/PowerToys/releases/)
2. Find the latest release (e.g., v0.96.1, which should be at the top)
3. Set `{{PreviousReleaseTag}}` to that tag name (e.g., `v0.96.1`)
4. Copy the full tag commit SHA as `{{SHALastRelease}}`

**If the release SHA is not in your branch history:** Use the helper script to find an equivalent commit on the target branch by matching the commit title:

```powershell
pwsh ./.github/skills/release-note-generation/scripts/find-commit-by-title.ps1 `
    -Commit '{{SHALastRelease}}' `
    -Branch 'stable'
```

### 1.1.2 Run collection script against stable branch

```powershell
# Collect PRs from previous release to current HEAD of stable branch
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
    -StartCommit '{{SHALastRelease}}' `
    -Branch 'stable' `
    -OutputDir 'Generated Files/ReleaseNotes'
```

**Parameters:**
- `-StartCommit` - Previous release tag or commit SHA (exclusive)
- `-Branch` - Always use the `stable` branch, not `main` (the script uses `origin/stable` as the end ref)
- `-EndCommit` - Optional override if you need a custom end ref
- `-OutputDir` - Output directory for generated files

**Reliability check:** If the script reports “No commits found”, the stable branch has not moved since the last release. In that case, either:
- Confirm this is expected and stop (no new release notes), or
- Re-run against `main` to gather pending changes for the next release cycle.

The script detects both merge commits (`Merge pull request #12345`) and squash commits (`Feature (#12345)`).
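
A minimal PowerShell sketch of those two subject-line patterns (the regexes mirror the ones in dump-prs-since-commit.ps1; the sample subject is hypothetical):

```powershell
# Hypothetical commit subject; the two regexes below mirror dump-prs-since-commit.ps1
$subject = 'Fix crash on startup (#12345)'
if ($subject -match 'Merge pull request #([0-9]+) ') {
    "merge commit for PR #$($matches[1])"
} elseif ($subject -match '\(#([0-9]+)\)$') {
    "squash commit for PR #$($matches[1])"
}
```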

**Output** (in `Generated Files/ReleaseNotes/`):
- `milestone_prs.json` - raw PR data
- `sorted_prs.csv` - sorted PR list with columns: Id, Title, Labels, Author, Url, Body, CopilotSummary, NeedThanks (example row below)
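
A format-only example row (all values are hypothetical):

```
Id,Title,Labels,Author,Url,Body,CopilotSummary,NeedThanks
12345,"Fix crash on startup","Product-Settings",octocat,https://github.com/microsoft/PowerToys/pull/12345,"Fixes a startup crash","Fixes a crash when the settings window opens",True
```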

---

## 1.2 Assign Milestones (REQUIRED)

**Before generating release notes**, ensure all collected PRs have the correct milestone assigned.

⚠️ **CRITICAL:** Do NOT proceed to labeling until all PRs have milestones assigned.

### 1.2.1 Check current milestone status (dry run)

```powershell
# Dry run first to see what would be changed:
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
    -InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
    -OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
    -DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
    -ApplyMissing -WhatIf
```

This queries GitHub for each PR's current milestone and shows which PRs would be updated.

### 1.2.2 Apply milestones to PRs missing them

```powershell
# Apply for real:
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
    -InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
    -OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
    -DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
    -ApplyMissing
```

**Script Behavior:**
- Queries each PR's current milestone from GitHub
- PRs that already have a milestone are **skipped** (not overwritten)
- PRs missing a milestone get the default milestone applied
- Outputs `prs_with_milestone.csv` with (Id, Milestone) columns
- Produces summary: `Updated=X Skipped=Y Failed=Z`

**Validation:** After assignment, all PRs in `prs_with_milestone.csv` should have the target milestone.

---

## Additional Commands

### Collect milestones only (no changes to GitHub)
```powershell
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
    -InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
    -OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv'
```

### Local assignment only (fill blanks in CSV, no GitHub changes)
```powershell
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
    -InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
    -OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
    -DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
    -LocalAssign
```
131
.github/skills/release-note-generation/references/step2-labeling.md
vendored
Normal file
@@ -0,0 +1,131 @@
# Step 2: Label Unlabeled PRs

## 2.0 To-do
- 2.1 Identify unlabeled PRs (Agent Mode)
- 2.2 Suggest labels (Agent Mode)
- 2.3 Human label low-confidence PRs
- 2.4 Recheck labels, delete Unlabeled.csv, and re-collect

**Before grouping**, ensure all PRs have appropriate labels for categorization.

⚠️ **CRITICAL:** Do NOT proceed to grouping until all PRs have labels assigned. PRs without labels will end up in `Unlabeled.csv` and won't appear in the correct release note sections.

## 2.1 Identify unlabeled PRs (Agent Mode)

Read `sorted_prs.csv` and identify PRs with an empty or missing `Labels` column.

For each unlabeled PR, analyze:
- **Title** - Often contains the module name or feature
- **Body** - PR description with context
- **CopilotSummary** - AI-generated summary of changes

## 2.2 Suggest labels (Agent Mode)

For each unlabeled PR, suggest an appropriate label based on the content analysis.

**Output:** Create `Generated Files/ReleaseNotes/prs_label_review.md` with the following format:

```markdown
# PR Label Review

Generated: YYYY-MM-DD HH:mm:ss

## Summary
- Total unlabeled PRs: X
- High confidence: X
- Medium confidence: X
- Low confidence: X

---

## PRs Needing Review (sorted by confidence, low first)

| PR | Title | Suggested Label | Confidence | Reason |
|----|-------|-----------------|------------|--------|
| [#12347](url) | Some generic fix | ??? | Low | Unclear from content |
| [#12346](url) | Update dependencies | `Area-Build` | Medium | Body mentions NuGet packages |
```

Sort by confidence (low first) so the human reviews the uncertain ones first.

After writing `prs_label_review.md`, **generate `prs_to_label.csv`, apply labels, and re-run collection** so the CSV and labels stay in sync:

```powershell
# Generate CSV from suggestions (agent)
# Apply labels
pwsh ./.github/skills/release-note-generation/scripts/apply-labels.ps1 `
    -InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv'

# Refresh collection
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
    -StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
    -OutputDir 'Generated Files/ReleaseNotes'
```

## 2.3 Human label low-confidence PRs

Ask the human to label **low-confidence** PRs directly (in GitHub). Skip any they decide not to label.

## 2.4 Recheck labels, delete Unlabeled.csv, and re-collect

Recheck that all PRs now have labels. Delete `Unlabeled.csv` (if present), then re-run the collection script to update `sorted_prs.csv`:

```powershell
# Remove stale unlabeled output if it exists
Remove-Item 'Generated Files/ReleaseNotes/Unlabeled.csv' -ErrorAction SilentlyContinue
```

```powershell
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
    -StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
    -OutputDir 'Generated Files/ReleaseNotes'
```

---

## Common Label Mappings

| Keywords/Patterns | Suggested Label |
| ----------------- | --------------- |
| Advanced Paste, AP, clipboard, paste | `Product-Advanced Paste` |
| CmdPal, Command Palette, cmdpal | `Product-Command Palette` |
| FancyZones, zones, layout | `Product-FancyZones` |
| ZoomIt, zoom, screen annotation | `Product-ZoomIt` |
| Settings, settings-ui, Quick Access, flyout | `Product-Settings` |
| Installer, setup, MSI, MSIX, WiX | `Area-Setup/Install` |
| Build, pipeline, CI/CD, msbuild | `Area-Build` |
| Test, unit test, UI test, fuzz | `Area-Tests` |
| Localization, loc, translation, resw | `Area-Localization` |
| Foundry, AI, LLM | `Product-Advanced Paste` (AI features) |
| Mouse Without Borders, MWB | `Product-Mouse Without Borders` |
| PowerRename, rename, regex | `Product-PowerRename` |
| Peek, preview, file preview | `Product-Peek` |
| Image Resizer, resize | `Product-Image Resizer` |
| LightSwitch, theme, dark mode | `Product-LightSwitch` |
| Quick Accent, accent, diacritics | `Product-Quick Accent` |
| Awake, keep awake, caffeine | `Product-Awake` |
| ColorPicker, color picker, eyedropper | `Product-ColorPicker` |
| Hosts, hosts file | `Product-Hosts` |
| Keyboard Manager, remap | `Product-Keyboard Manager` |
| Mouse Highlighter | `Product-Mouse Highlighter` |
| Mouse Jump | `Product-Mouse Jump` |
| Find My Mouse | `Product-Find My Mouse` |
| Mouse Pointer Crosshairs | `Product-Mouse Pointer Crosshairs` |
| Shortcut Guide | `Product-Shortcut Guide` |
| Text Extractor, OCR, PowerOCR | `Product-Text Extractor` |
| Workspaces | `Product-Workspaces` |
| File Locksmith | `Product-File Locksmith` |
| Crop And Lock | `Product-CropAndLock` |
| Environment Variables | `Product-Environment Variables` |
| New+ | `Product-New+` |

## Label Filtering Rules

The collection script keeps only labels matching these patterns, so grouping sees just these:
- `Product-*`
- `Area-*`
- `GitHub*`
- `*Plugin`
- `Issue-*`

Other labels are ignored for grouping purposes.
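
As an illustration, a minimal PowerShell sketch of that filter, mirroring the `Where-Object` clause in dump-prs-since-commit.ps1 (the `$labels` value is hypothetical):

```powershell
# Hypothetical label list for one PR
$labels = @('Product-FancyZones', 'Needs-Triage', 'Issue-Bug')
$kept = $labels | Where-Object {
    ($_ -like 'Product-*') -or ($_ -like 'Area-*') -or
    ($_ -like 'GitHub*') -or ($_ -like '*Plugin') -or ($_ -like 'Issue-*')
}
$kept   # -> Product-FancyZones, Issue-Bug
```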
37
.github/skills/release-note-generation/references/step3-review-grouping.md
vendored
Normal file
@@ -0,0 +1,37 @@
# Step 3: Copilot Reviews and Grouping

## 3.0 To-do
- 3.1 Request Copilot Reviews (Agent Mode)
- 3.2 Refresh PR Data
- 3.3 Group PRs by Label

## 3.1 Request Copilot Reviews (Agent Mode)

Use MCP tools to request Copilot reviews for all PRs in `Generated Files/ReleaseNotes/sorted_prs.csv`:

- Use `mcp_github_request_copilot_review` for each PR ID
- Do NOT generate or run scripts for this step

---

## 3.2 Refresh PR Data

Re-run the collection script to capture Copilot review summaries into the `CopilotSummary` column:

```powershell
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
    -StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
    -OutputDir 'Generated Files/ReleaseNotes'
```

---

## 3.3 Group PRs by Label

```powershell
pwsh ./.github/skills/release-note-generation/scripts/group-prs-by-label.ps1 `
    -CsvPath 'Generated Files/ReleaseNotes/sorted_prs.csv' `
    -OutDir 'Generated Files/ReleaseNotes/grouped_csv'
```

Creates `Generated Files/ReleaseNotes/grouped_csv/` with one CSV per label combination.
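
File names derive from the sanitized label combination (multiple labels are joined with `-`), so a quick spot-check of the output could look like:

```powershell
# List the grouped output; expect names like Product-FancyZones.csv or Unlabeled.csv
Get-ChildItem 'Generated Files/ReleaseNotes/grouped_csv' | Select-Object -ExpandProperty Name
```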

**Validation:** The `Unlabeled.csv` file should be minimal (ideally empty). If many PRs remain unlabeled, return to Step 2 (see [step2-labeling.md](./step2-labeling.md)).
88
.github/skills/release-note-generation/references/step4-summarization.md
vendored
Normal file
@@ -0,0 +1,88 @@
# Step 4: Summaries and Final Release Notes

## 4.0 To-do
- 4.1 Generate Summary Markdown (Agent Mode)
- 4.2 Produce Final Release Notes File

## 4.1 Generate Summary Markdown (Agent Mode)

For each CSV in `Generated Files/ReleaseNotes/grouped_csv/`, create a markdown file in `Generated Files/ReleaseNotes/grouped_md/`.

⚠️ **IMPORTANT:** Generate **ALL** markdown files first. Do NOT pause between files or ask for feedback during generation. Complete the entire batch; the human reviews afterwards.

### Structure per file

**1. Bullet list** - one concise, user-facing line per PR:
- Use the “Verb-ed + Scenario + Impact” sentence structure; aim to make readers think, “That’s exactly what I need” or “Yes, that’s an awesome fix.” The impact can be end-user focused (written to convey user excitement) or technical (performance/stability) when the user-facing impact is minimal.
- If the impact is unremarkable or unclear, mark the PR as needing a human summary
- Source from Title, Body, and CopilotSummary (prefer CopilotSummary when available)
- If the `NeedThanks` column in the CSV is `True`, append: `Thanks [@Author](https://github.com/Author)!`
- Do NOT include PR numbers in bullet lines
- Do NOT mention “security” or “privacy” issues, since these are not known and could be leveraged by attackers in earlier versions. Instead, describe the user-facing scenario, usage, or impact.
- If confidence < 70%, write: `Human Summary Needed: <PR full link>` (format example after this list)

**See [SampleOutput.md](./SampleOutput.md) for examples of well-written bullet summaries.**
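
Format-only illustration (the first line echoes a real bullet from this release; the PR link in the second is a hypothetical placeholder):

```markdown
- Fixed Alt+Left Arrow navigation not working when the search box contains text. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Human Summary Needed: https://github.com/microsoft/PowerToys/pull/12345
```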

**2. Three-column table** (same PR order):
- Column 1: Concise summary (same as bullet)
- Column 2: PR link `[#ID](URL)`
- Column 3: Confidence level (High/Medium/Low)

### Review Process (AFTER all files generated)

- Human reviews each `grouped_md/*.md` file and requests rewrites as needed
- The human may say "rewrite Product-X" or "combine these bullets"; apply the changes to that specific file
- Do NOT interrupt generation to ask for feedback

---

## 4.2 Produce Final Release Notes File

Once all `grouped_md/*.md` files are reviewed and approved, consolidate them into a single release notes file.

**Output:** `Generated Files/ReleaseNotes/v{{ReleaseVersion}}-release-notes.md`

### Structure

**1. Highlights section** (top):
- 8-12 bullets covering the most user-visible features and impactful fixes
- Pattern: `**Module**: brief description`
- Avoid internal refactors; focus on what users will notice

**2. Module sections** (alphabetical order):
- One section per product (Advanced Paste, Awake, Command Palette, etc.)
- Migrate the bullet summaries from the approved `grouped_md/Product-*.md` files
- Add a single 'Development' section for the remaining summaries from the approved `grouped_md/Area-*.md` files
- Re-review end to end, group release improvements by section, and move the most important items to the top of each section.
  Some items in the Development section may overlap with a product and should be moved to the module section where they fit better.

### Example Final Structure

```markdown
# PowerToys v{{ReleaseVersion}} Release Notes

## Highlights

- **Command Palette**: Added theme customization and drag-and-drop support
- **Advanced Paste**: Image input for AI, color detection in clipboard history
- **FancyZones**: New CLI tool for command-line layout management
...

---

## Advanced Paste

- Wrapped paste option lists in a single ScrollViewer
- Added image input handling for AI-powered transformations
...

## Awake

- Fixed timed mode expiration. Thanks [@daverayment](https://github.com/daverayment)!
...

---

## Development
...
```
90
.github/skills/release-note-generation/scripts/apply-labels.ps1
vendored
Normal file
@@ -0,0 +1,90 @@
<#
.SYNOPSIS
Apply labels to PRs from a CSV file.

.DESCRIPTION
Reads a CSV with Id and Label columns and applies the specified label to each PR via the GitHub CLI.
Supports dry-run mode to preview changes before applying.

.PARAMETER InputCsv
CSV file with Id and Label columns. Default: prs_to_label.csv

.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys

.PARAMETER WhatIf
Dry run - show what would be applied without making changes.

.EXAMPLE
pwsh ./apply-labels.ps1 -InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv'

.EXAMPLE
pwsh ./apply-labels.ps1 -InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv' -WhatIf

.NOTES
Requires: gh CLI authenticated with repo write access.

Input CSV format:
Id,Label
12345,Product-Advanced Paste
12346,Product-Settings
#>
[CmdletBinding()] param(
    [Parameter(Mandatory=$false)][string]$InputCsv = 'prs_to_label.csv',
    [Parameter(Mandatory=$false)][string]$Repo = 'microsoft/PowerToys',
    [switch]$WhatIf
)

$ErrorActionPreference = 'Stop'

function Write-Info($m){ Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m){ Write-Host "[warn] $m" -ForegroundColor Yellow }
function Write-Err($m){ Write-Host "[error] $m" -ForegroundColor Red }
function Write-OK($m){ Write-Host "[ok] $m" -ForegroundColor Green }

if (-not (Get-Command gh -ErrorAction SilentlyContinue)) { Write-Err "GitHub CLI 'gh' not found in PATH"; exit 1 }
if (-not (Test-Path -LiteralPath $InputCsv)) { Write-Err "Input CSV not found: $InputCsv"; exit 1 }

$rows = Import-Csv -LiteralPath $InputCsv
if (-not $rows) { Write-Info "No rows in CSV."; exit 0 }

$firstCols = $rows[0].PSObject.Properties.Name
if (-not ($firstCols -contains 'Id' -and $firstCols -contains 'Label')) {
    Write-Err "CSV must contain 'Id' and 'Label' columns"; exit 1
}

Write-Info "Processing $($rows.Count) label assignments..."
if ($WhatIf) { Write-Warn "DRY RUN - no changes will be made" }

$applied = 0
$skipped = 0
$failed = 0

foreach ($row in $rows) {
    $id = $row.Id
    $label = $row.Label

    if ([string]::IsNullOrWhiteSpace($id) -or [string]::IsNullOrWhiteSpace($label)) {
        Write-Warn "Skipping row with empty Id or Label"
        $skipped++
        continue
    }

    if ($WhatIf) {
        Write-Info "Would apply label '$label' to PR #$id"
        $applied++
        continue
    }

    try {
        gh pr edit $id --repo $Repo --add-label $label 2>&1 | Out-Null
        # gh is a native command, so a non-zero exit does not throw; surface failures explicitly.
        if ($LASTEXITCODE -ne 0) { throw "gh pr edit exited with code $LASTEXITCODE" }
        Write-OK "Applied '$label' to PR #$id"
        $applied++
    } catch {
        Write-Warn "Failed to apply label to PR #${id}: $_"
        $failed++
    }
}

Write-Info ""
Write-Info "Summary: Applied=$applied Skipped=$skipped Failed=$failed"
172
.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1
vendored
Normal file
@@ -0,0 +1,172 @@
<#
.SYNOPSIS
Collect existing PR milestones or (optionally) assign/apply a milestone to missing PRs in one script.

.DESCRIPTION
This unified script merges the behaviors of the previous add-milestone-column (collector) and
set-milestones-missing (remote updater) scripts.

Modes (controlled by switches):
  1. Collect (default) – For each PR Id in the input CSV, queries GitHub for the current milestone and
     outputs a two-column CSV (Id,Milestone), leaving blanks where none are set.
  2. LocalAssign – Same as Collect, but rows that end up blank are assigned the value of -DefaultMilestone
     in memory (does NOT touch GitHub). Useful for quickly preparing a fully populated CSV.
  3. ApplyMissing – After determining which PRs have no milestone, calls the GitHub API to set their milestone
     to -DefaultMilestone. Requires the milestone to already exist (open). Network + write.

You can combine LocalAssign and ApplyMissing: the remote update uses the existing live state; LocalAssign only
affects the output CSV/pipeline objects.

.PARAMETER InputCsv
Source CSV with at least an Id column. Default: sorted_prs.csv

.PARAMETER OutputCsv
Destination CSV for collected (and optionally locally assigned) milestones. Default: prs_with_milestone.csv

.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys

.PARAMETER DefaultMilestone
Milestone title used when -LocalAssign or -ApplyMissing is specified. Default: 'PowerToys 0.97'

.PARAMETER Offline
Skip ALL GitHub lookups / updates. Implies Collect-only with all Milestone cells blank (unless LocalAssign).

.PARAMETER LocalAssign
Populate empty Milestone cells in the output with -DefaultMilestone (does not modify GitHub).

.PARAMETER ApplyMissing
For PRs which currently have no milestone (live on GitHub), set them to -DefaultMilestone via the Issues API.

.PARAMETER WhatIf
Dry run for ApplyMissing: show intended remote changes without performing PATCH requests.

.EXAMPLE
# Collect only
pwsh ./collect-or-apply-milestones.ps1

.EXAMPLE
# Collect and fill blanks locally in the output only
pwsh ./collect-or-apply-milestones.ps1 -LocalAssign

.EXAMPLE
# Collect and remotely apply milestone to missing PRs
pwsh ./collect-or-apply-milestones.ps1 -ApplyMissing

.EXAMPLE
# Dry run remote application
pwsh ./collect-or-apply-milestones.ps1 -ApplyMissing -WhatIf

.EXAMPLE
# Offline local assignment
pwsh ./collect-or-apply-milestones.ps1 -Offline -LocalAssign -DefaultMilestone 'PowerToys 0.96'

.NOTES
Requires the gh CLI unless -Offline is specified (-ApplyMissing cannot be combined with -Offline).
The remote apply path queries milestones to resolve the numeric milestone ID.
#>
[CmdletBinding()] param(
    [Parameter(Mandatory=$false)][string]$InputCsv = 'sorted_prs.csv',
    [Parameter(Mandatory=$false)][string]$OutputCsv = 'prs_with_milestone.csv',
    [Parameter(Mandatory=$false)][string]$Repo = 'microsoft/PowerToys',
    [Parameter(Mandatory=$false)][string]$DefaultMilestone = 'PowerToys 0.97',
    [switch]$Offline,
    [switch]$LocalAssign,
    [switch]$ApplyMissing,
    [switch]$WhatIf
)

$ErrorActionPreference = 'Stop'
function Write-Info($m){ Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m){ Write-Host "[warn] $m" -ForegroundColor Yellow }
function Write-Err($m){ Write-Host "[error] $m" -ForegroundColor Red }

if (-not (Test-Path -LiteralPath $InputCsv)) { Write-Err "Input CSV not found: $InputCsv"; exit 1 }
$rows = Import-Csv -LiteralPath $InputCsv
if (-not $rows) { Write-Warn "Input CSV has no rows."; @() | Export-Csv -NoTypeInformation -LiteralPath $OutputCsv; exit 0 }
if (-not ($rows[0].PSObject.Properties.Name -contains 'Id')) { Write-Err "Input CSV missing 'Id' column."; exit 1 }

# gh is needed for any GitHub lookup or update; -Offline skips both.
$needGh = -not $Offline
if ($needGh -and -not (Get-Command gh -ErrorAction SilentlyContinue)) { Write-Err "GitHub CLI 'gh' not found. Use -Offline or install gh."; exit 1 }

# Step 1: Collect current milestone titles
$milestoneCache = @{}
$collected = New-Object System.Collections.Generic.List[object]
$idx = 0
foreach ($row in $rows) {
    $idx++
    $id = $row.Id
    if (-not $id) { Write-Warn "Row $idx missing Id; skipping"; continue }
    $ms = ''
    if (-not $Offline) {
        if ($milestoneCache.ContainsKey($id)) { $ms = $milestoneCache[$id] }
        else {
            try {
                $json = gh pr view $id --repo $Repo --json milestone 2>$null | ConvertFrom-Json
                if ($json -and $json.milestone -and $json.milestone.title) { $ms = $json.milestone.title }
            } catch {
                Write-Warn "Failed to fetch PR #$id milestone: $_"
            }
            $milestoneCache[$id] = $ms
        }
    }
    $collected.Add([PSCustomObject]@{ Id = $id; Milestone = $ms }) | Out-Null
}

# Step 2: Remote apply (if requested)
$applySummary = @()
if ($ApplyMissing) {
    if ($Offline) { Write-Err "Cannot use -ApplyMissing with -Offline."; exit 1 }
    Write-Info "Resolving milestone id for '$DefaultMilestone' ..."
    $milestonesRaw = gh api repos/$Repo/milestones --paginate --jq '.[] | {number,title,state}'
    $msObj = $milestonesRaw | ConvertFrom-Json | Where-Object { $_.title -eq $DefaultMilestone -and $_.state -eq 'open' } | Select-Object -First 1
    if (-not $msObj) { Write-Err "Milestone '$DefaultMilestone' not found/open."; exit 1 }
    $msNumber = $msObj.number
    $targets = $collected | Where-Object { [string]::IsNullOrWhiteSpace($_.Milestone) }
    Write-Info ("ApplyMissing: {0} PR(s) without milestone." -f $targets.Count)
    foreach ($t in $targets) {
        $id = $t.Id
        try {
            # Verify still missing live
            $current = gh pr view $id --repo $Repo --json milestone --jq '.milestone.title // ""'
            if ($current) {
                $applySummary += [PSCustomObject]@{ Id=$id; Action='Skip (already has)'; Milestone=$current; Status='OK' }
                continue
            }
            if ($WhatIf) {
                $applySummary += [PSCustomObject]@{ Id=$id; Action='Would set'; Milestone=$DefaultMilestone; Status='DRY RUN' }
                continue
            }
            # -F (typed field) sends the milestone number as a JSON number, as the Issues API expects.
            gh api -X PATCH -H 'Accept: application/vnd.github+json' repos/$Repo/issues/$id -F milestone=$msNumber | Out-Null
            if ($LASTEXITCODE -ne 0) { throw "gh api exited with code $LASTEXITCODE" }
            $applySummary += [PSCustomObject]@{ Id=$id; Action='Set'; Milestone=$DefaultMilestone; Status='OK' }
            # Reflect in the collected object for CSV output if LocalAssign is not already doing so
            $t.Milestone = $DefaultMilestone
        } catch {
            $errText = $_ | Out-String
            $applySummary += [PSCustomObject]@{ Id=$id; Action='Failed'; Milestone=$DefaultMilestone; Status=$errText.Trim() }
            Write-Warn ("Failed to set milestone for PR #{0}: {1}" -f $id, ($errText.Trim()))
        }
    }
}

# Step 3: Local assignment (purely for output), AFTER the remote step so the actual remote result is not overwritten accidentally
if ($LocalAssign) {
    foreach ($item in $collected) {
        if ([string]::IsNullOrWhiteSpace($item.Milestone)) { $item.Milestone = $DefaultMilestone }
    }
}

# Step 4: Export CSV
$collected | Export-Csv -LiteralPath $OutputCsv -NoTypeInformation -Encoding UTF8
Write-Info ("Wrote collected CSV -> {0}" -f (Resolve-Path -LiteralPath $OutputCsv))

# Step 5: Summaries
if ($ApplyMissing) {
    $updated = ($applySummary | Where-Object { $_.Action -eq 'Set' }).Count
    $skipped = ($applySummary | Where-Object { $_.Action -like 'Skip*' }).Count
    $failed = ($applySummary | Where-Object { $_.Action -eq 'Failed' }).Count
    Write-Info ("ApplyMissing summary: Updated={0} Skipped={1} Failed={2}" -f $updated, $skipped, $failed)
}

# Emit objects (final collected set)
return $collected
100
.github/skills/release-note-generation/scripts/diff_prs.ps1
vendored
Normal file
@@ -0,0 +1,100 @@
<#
.SYNOPSIS
Produce an incremental PR CSV containing rows present in a newer full export but absent from a baseline export.

.DESCRIPTION
Compares two previously generated sorted PR CSV files (same schema). Any row whose key column value
(defaults to 'Number') does not exist in the baseline file is emitted to a new incremental CSV, preserving
the original column order. If no new rows are found, an empty CSV (with headers when determinable) is written.

.PARAMETER BaseCsv
Path to the baseline (earlier) PR CSV.

.PARAMETER AllCsv
Path to the newer full PR CSV containing a superset (or equal set) of rows.

.PARAMETER OutCsv
Path to write the incremental CSV containing only new rows.

.PARAMETER Key
Column name used as the unique identifier (defaults to 'Number'). Must exist in both CSVs.

.EXAMPLE
pwsh ./diff_prs.ps1 -BaseCsv sorted_prs_prev.csv -AllCsv sorted_prs.csv -OutCsv sorted_prs_incremental.csv

.NOTES
Requires: PowerShell 7+, both CSVs with identical column schemas.
Exit code 0 on success (even if zero incremental rows). Throws on missing files.
#>

[CmdletBinding()] param(
    [Parameter(Mandatory=$false)][string]$BaseCsv = "./sorted_prs_93_round1.csv",
    [Parameter(Mandatory=$false)][string]$AllCsv = "./sorted_prs.csv",
    [Parameter(Mandatory=$false)][string]$OutCsv = "./sorted_prs_93_incremental.csv",
    [Parameter(Mandatory=$false)][string]$Key = "Number"
)

Set-StrictMode -Version Latest
$ErrorActionPreference = 'Stop'

function Write-Info($m) { Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m) { Write-Host "[warn] $m" -ForegroundColor Yellow }

if (-not (Test-Path -LiteralPath $BaseCsv)) { throw "Base CSV not found: $BaseCsv" }
if (-not (Test-Path -LiteralPath $AllCsv)) { throw "All CSV not found: $AllCsv" }

# Load CSVs
$baseRows = Import-Csv -LiteralPath $BaseCsv
$allRows = Import-Csv -LiteralPath $AllCsv

if (-not $baseRows) { Write-Warn "Base CSV has no rows." }
if (-not $allRows) { Write-Warn "All CSV has no rows." }

# Validate key presence
if ($baseRows -and -not ($baseRows[0].PSObject.Properties.Name -contains $Key)) { throw "Key column '$Key' not found in base CSV." }
if ($allRows -and -not ($allRows[0].PSObject.Properties.Name -contains $Key)) { throw "Key column '$Key' not found in all CSV." }

# Build a set of existing keys from base
$set = New-Object 'System.Collections.Generic.HashSet[string]'
foreach ($row in $baseRows) {
    $val = [string]($row.$Key)
    if ($null -ne $val) { [void]$set.Add($val) }
}

# Filter rows in AllCsv whose key is not in base (these are the new / incremental rows)
$incremental = @()
foreach ($row in $allRows) {
    $val = [string]($row.$Key)
    if (-not $set.Contains($val)) { $incremental += $row }
}

# Preserve column order from the All CSV
$columns = @()
if ($allRows.Count -gt 0) {
    $columns = $allRows[0].PSObject.Properties.Name
}

try {
    if ($incremental.Count -gt 0) {
        if ($columns.Count -gt 0) {
            $incremental | Select-Object -Property $columns | Export-Csv -LiteralPath $OutCsv -NoTypeInformation -Encoding UTF8
        } else {
            $incremental | Export-Csv -LiteralPath $OutCsv -NoTypeInformation -Encoding UTF8
        }
    } else {
        # Write only the header row if we know the columns (facilitates downstream tooling expecting a header).
        # Exporting a blank PSCustomObject would also emit one empty data row, so build the header line directly.
        if ($columns.Count -gt 0) {
            (($columns | ForEach-Object { '"' + ($_ -replace '"', '""') + '"' }) -join ',') | Out-File -LiteralPath $OutCsv -Encoding UTF8
        } else {
            '' | Out-File -LiteralPath $OutCsv -Encoding UTF8
        }
    }
    Write-Info ("Incremental rows: {0}" -f $incremental.Count)
    Write-Info ("Output: {0}" -f (Resolve-Path -LiteralPath $OutCsv))
}
catch {
    Write-Host "[error] Failed writing output CSV: $_" -ForegroundColor Red
    exit 1
}
344
.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1
vendored
Normal file
@@ -0,0 +1,344 @@
<#
.SYNOPSIS
Export merged PR metadata between two commits (exclusive start, inclusive end) to JSON and CSV.

.DESCRIPTION
Identifies merge/squash commits reachable from EndCommit but not StartCommit, extracts PR numbers,
queries GitHub for metadata plus (optionally) Copilot review/comment summaries, filters labels, then
emits a JSON artifact and a sorted CSV (first label alphabetical).

.PARAMETER StartCommit
Exclusive starting commit (SHA, tag, or ref). Commits AFTER this one are considered.

.PARAMETER EndCommit
Inclusive ending commit (SHA, tag, or ref). If not provided, uses origin/<Branch> when Branch is set; otherwise uses HEAD.

.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys.

.PARAMETER OutputCsv
Destination CSV path. Default: sorted_prs.csv.

.PARAMETER OutputJson
Destination JSON path containing raw PR objects. Default: milestone_prs.json.

.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -Branch stable

.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -EndCommit 89ef7654 -OutputCsv delta.csv

.NOTES
Requires: git, gh (authenticated). No Set-StrictMode to keep parity with existing release scripts.
#>
[CmdletBinding()]
param(
    [Parameter(Mandatory = $true)][string]$StartCommit, # exclusive start (commits AFTER this one)
    [string]$EndCommit,
    [string]$Branch,
    [string]$Repo = "microsoft/PowerToys",
    [string]$OutputDir,
    [string]$OutputCsv = "sorted_prs.csv",
    [string]$OutputJson = "milestone_prs.json"
)

<#
.SYNOPSIS
Dump merged PR information whose merge commits are reachable from EndCommit but not from StartCommit.
.DESCRIPTION
Uses git rev-list to compute commits in the (StartCommit, EndCommit] range, extracts PR numbers from merge commit messages,
queries GitHub (gh CLI) for details, then outputs a CSV.

PR merge commit messages in PowerToys generally contain patterns like:
    Merge pull request #12345 from ...

.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -Branch stable

.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -EndCommit 89ef7654 -OutputCsv changes.csv

.NOTES
Requires: gh CLI authenticated; git available in working directory (must be inside PowerToys repo clone).
CopilotSummary behavior:
 - Attempts to locate the longest GitHub Copilot-authored review (preferred).
 - If no review is found, lazily fetches PR comments to look for a Copilot-authored comment.
 - Normalizes whitespace and strips newlines. Empty when no Copilot activity detected.
 - Run with -Verbose to see whether the summary came from a 'review' or 'comment' source.
#>

function Write-Info($msg) { Write-Host $msg -ForegroundColor Cyan }
function Write-Warn($msg) { Write-Host $msg -ForegroundColor Yellow }
function Write-Err($msg) { Write-Host $msg -ForegroundColor Red }
# -Verbose on the script sets $VerbosePreference in scope, so check the preference variable here.
function Write-DebugMsg($msg) { if ($VerbosePreference -eq 'Continue') { Write-Host "[VERBOSE] $msg" -ForegroundColor DarkGray } }

# Load member list from Generated Files/ReleaseNotes/MemberList.md (internal team - no thanks needed)
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$repoRoot = Resolve-Path (Join-Path $scriptDir "..\..\..\..")
$defaultMemberListPath = Join-Path $repoRoot "Generated Files\ReleaseNotes\MemberList.md"
$memberListPath = $defaultMemberListPath
if ($OutputDir) {
    $memberListFromOutputDir = Join-Path $OutputDir "MemberList.md"
    if (Test-Path $memberListFromOutputDir) {
        $memberListPath = $memberListFromOutputDir
    }
}
$memberList = @()
if (Test-Path $memberListPath) {
    $memberListContent = Get-Content $memberListPath -Raw
    # Extract usernames - skip markdown code fence lines, get all non-empty lines
    $memberList = ($memberListContent -split "`n") | Where-Object { $_ -notmatch '^\s*```' -and $_.Trim() -ne '' } | ForEach-Object { $_.Trim() }
    if (-not $memberList -or $memberList.Count -eq 0) {
        Write-Err "MemberList.md is empty at $memberListPath"
        exit 1
    }
    Write-DebugMsg "Loaded $($memberList.Count) members from MemberList.md"
} else {
    Write-Err "MemberList.md not found at $memberListPath"
    exit 1
}

# Validate we are in a git repo
#if (-not (Test-Path .git)) {
#    Write-Err "Current directory does not appear to be the root of a git repository."
#    exit 1
#}

# Resolve output directory (if specified)
if ($OutputDir) {
    if (-not (Test-Path $OutputDir)) {
        New-Item -ItemType Directory -Path $OutputDir -Force | Out-Null
    }
    if (-not [System.IO.Path]::IsPathRooted($OutputCsv)) {
        $OutputCsv = Join-Path $OutputDir $OutputCsv
    }
    if (-not [System.IO.Path]::IsPathRooted($OutputJson)) {
        $OutputJson = Join-Path $OutputDir $OutputJson
    }
}

# Resolve commits
try {
    if ($Branch) {
        Write-Info "Fetching latest '$Branch' from origin (with tags)..."
        git fetch origin $Branch --tags | Out-Null
        if ($LASTEXITCODE -ne 0) { throw "git fetch origin $Branch --tags failed" }
    }

    $startSha = (git rev-parse --verify $StartCommit) 2>$null
    if (-not $startSha) { throw "StartCommit '$StartCommit' not found" }
    if ($Branch) {
        $branchRef = $Branch
        $branchSha = (git rev-parse --verify $branchRef) 2>$null
        if (-not $branchSha) {
            $branchRef = "origin/$Branch"
            $branchSha = (git rev-parse --verify $branchRef) 2>$null
        }
        if (-not $branchSha) { throw "Branch '$Branch' not found" }
        if (-not $PSBoundParameters.ContainsKey('EndCommit') -or [string]::IsNullOrWhiteSpace($EndCommit)) {
            $EndCommit = $branchRef
        }
    }
    if (-not $PSBoundParameters.ContainsKey('EndCommit') -or [string]::IsNullOrWhiteSpace($EndCommit)) {
        $EndCommit = "HEAD"
    }
    $endSha = (git rev-parse --verify $EndCommit) 2>$null
    if (-not $endSha) { throw "EndCommit '$EndCommit' not found" }
}
catch {
    Write-Err $_
    exit 1
}

Write-Info "Collecting commits between $startSha..$endSha (excluding start, including end)."
# Get list of commits reachable from end but not from start.
# IMPORTANT: Build the rev range as a single string. A bare $startSha..$endSha would be parsed by
# PowerShell's range operator instead of being handed to git as one "start..end" argument.
$rangeArg = "$startSha..$endSha"
$commitList = git rev-list $rangeArg

# Normalize list (filter out empty strings)
$normalizedCommits = $commitList | Where-Object { $_ -and $_.Trim() -ne '' }
$commitCount = ($normalizedCommits | Measure-Object).Count
Write-DebugMsg ("Raw commitList length (including blanks): {0}" -f (($commitList | Measure-Object).Count))
Write-DebugMsg ("Normalized commit count: {0}" -f $commitCount)
if ($commitCount -eq 0) {
    Write-Warn "No commits found in specified range ($startSha..$endSha)."; exit 0
}
Write-DebugMsg ("First 5 commits: {0}" -f (($normalizedCommits | Select-Object -First 5) -join ', '))

<#
Extract PR numbers from commits.
Patterns handled:
 1. Merge commits: 'Merge pull request #12345 from ...'
 2. Squash commits: 'Some feature change (#12345)' (GitHub default squash format)
We collect both. If a commit matches both (unlikely), it's deduped later.
#>
# Extract PR numbers from merge or squash commits
$mergeCommits = @()
foreach ($c in $normalizedCommits) {
    $subject = git show -s --format=%s $c
    $matched = $false
    # Pattern 1: Traditional merge commit
    if ($subject -match 'Merge pull request #([0-9]+) ') {
        $prNumber = [int]$matches[1]
        $mergeCommits += [PSCustomObject]@{ Sha = $c; Pr = $prNumber; Subject = $subject; Pattern = 'merge' }
        Write-DebugMsg "Matched merge PR #$prNumber in commit $c"
        $matched = $true
    }
    # Pattern 2: Squash merge subject line ending in '(#12345)'
    if ($subject -match '\(#([0-9]+)\)$') {
        $prNumber2 = [int]$matches[1]
        # Avoid duplicate object if pattern 1 already captured same number for same commit
        if (-not ($mergeCommits | Where-Object { $_.Sha -eq $c -and $_.Pr -eq $prNumber2 })) {
            $mergeCommits += [PSCustomObject]@{ Sha = $c; Pr = $prNumber2; Subject = $subject; Pattern = 'squash' }
            Write-DebugMsg "Matched squash PR #$prNumber2 in commit $c"
        }
        $matched = $true
    }
    if (-not $matched) {
        Write-DebugMsg "No PR pattern in commit $c : $subject"
    }
}

if (-not $mergeCommits -or $mergeCommits.Count -eq 0) {
    Write-Warn "No merge commits with PR numbers found in range."; exit 0
}

# Deduplicate PR numbers (in case of revert or merges across branches)
$prNumbers = $mergeCommits | Select-Object -ExpandProperty Pr -Unique | Sort-Object
Write-Info ("Found {0} unique PRs: {1}" -f $prNumbers.Count, ($prNumbers -join ', '))
Write-DebugMsg ("Total merge commits examined: {0}" -f $mergeCommits.Count)

# Query GitHub for each PR
$prDetails = @()
function Get-CopilotSummaryFromPrJson {
    param(
        [Parameter(Mandatory=$true)]$PrJson,
        [switch]$VerboseMode
    )
    # Returns a hashtable with Summary and Source keys.
    $result = @{ Summary = ""; Source = "" }
    if (-not $PrJson) { return $result }

    $candidateAuthors = @(
        'github-copilot[bot]', 'github-copilot', 'copilot'
    )

    # 1. Reviews (preferred) – pick the LONGEST valid Copilot body, not the most recent
    $reviews = $PrJson.reviews
    if ($reviews) {
        $copilotReviews = $reviews | Where-Object {
            ($candidateAuthors -contains $_.author.login -or $_.author.login -like '*copilot*') -and $_.body -and $_.body.Trim() -ne ''
        }
        if ($copilotReviews) {
            $longest = $copilotReviews | Sort-Object { $_.body.Length } -Descending | Select-Object -First 1
            if ($longest) {
                $body = $longest.body
                $norm = ($body -replace "`r", '') -replace "`n", ' '
                $norm = $norm -replace '\s+', ' '
                $result.Summary = $norm
                $result.Source = 'review'
                if ($VerboseMode) { Write-DebugMsg "Selected Copilot review length=$($body.Length) (longest)." }
                return $result
            }
        }
    }

    # 2. Comments fallback (some repos surface Copilot summaries as PR comments rather than review objects)
    if ($null -eq $PrJson.comments) {
        try {
            # Lazy fetch comments only if needed
            $commentsJson = gh pr view $PrJson.number --repo $Repo --json comments 2>$null | ConvertFrom-Json
            if ($commentsJson -and $commentsJson.comments) {
                $PrJson | Add-Member -NotePropertyName comments -NotePropertyValue $commentsJson.comments -Force
            }
        } catch {
            if ($VerboseMode) { Write-DebugMsg "Failed to fetch comments for PR #$($PrJson.number): $_" }
        }
    }
    if ($PrJson.comments) {
        $copilotComments = $PrJson.comments | Where-Object {
            ($candidateAuthors -contains $_.author.login -or $_.author.login -like '*copilot*') -and $_.body -and $_.body.Trim() -ne ''
        }
        if ($copilotComments) {
            $longestC = $copilotComments | Sort-Object { $_.body.Length } -Descending | Select-Object -First 1
            if ($longestC) {
                $body = $longestC.body
                $norm = ($body -replace "`r", '') -replace "`n", ' '
                $norm = $norm -replace '\s+', ' '
                $result.Summary = $norm
                $result.Source = 'comment'
                if ($VerboseMode) { Write-DebugMsg "Selected Copilot comment length=$($body.Length) (longest)." }
                return $result
            }
        }
    }

    return $result
}

foreach ($pr in $prNumbers) {
    Write-Info "Fetching PR #$pr ..."
    try {
        # Include comments only if Verbose asked; if not, we lazily pull when reviews are missing
        $fields = 'number,title,labels,author,url,body,reviews'
        if ($PSBoundParameters.ContainsKey('Verbose')) { $fields += ',comments' }
        $json = gh pr view $pr --repo $Repo --json $fields 2>$null | ConvertFrom-Json
        if ($null -eq $json) { throw "Empty response" }

        $copilot = Get-CopilotSummaryFromPrJson -PrJson $json -VerboseMode:($PSBoundParameters.ContainsKey('Verbose'))
        if ($copilot.Summary -and $copilot.Source -and $PSBoundParameters.ContainsKey('Verbose')) {
            Write-DebugMsg "Copilot summary source=$($copilot.Source) chars=$($copilot.Summary.Length)"
        } elseif (-not $copilot.Summary) {
            Write-DebugMsg "No Copilot summary found for PR #$pr"
        }

        # Filter labels
        $filteredLabels = $json.labels | Where-Object {
            ($_.name -like "Product-*") -or
            ($_.name -like "Area-*") -or
            ($_.name -like "GitHub*") -or
            ($_.name -like "*Plugin") -or
            ($_.name -like "Issue-*")
        }
        $labelNames = ($filteredLabels | ForEach-Object { $_.name }) -join ", "

        $bodyValue = if ($json.body) { ($json.body -replace "`r", '') -replace "`n", ' ' } else { '' }
        $bodyValue = $bodyValue -replace '\s+', ' '

        # Determine if author needs thanks (not in member list)
        $authorLogin = $json.author.login
        $needThanks = $true
        if ($memberList.Count -gt 0 -and $authorLogin) {
            $needThanks = -not ($memberList -contains $authorLogin)
        }

        $prDetails += [PSCustomObject]@{
            Id = $json.number
            Title = $json.title
            Labels = $labelNames
            Author = $authorLogin
            Url = $json.url
            Body = $bodyValue
            CopilotSummary = $copilot.Summary
            NeedThanks = $needThanks
        }
    }
    catch {
        $err = $_
        Write-Warn ("Failed to fetch PR #{0}: {1}" -f $pr, $err)
    }
}

if (-not $prDetails) { Write-Warn "No PR details fetched."; exit 0 }

# Sort by Labels like original script (first label alphabetical)
$sorted = $prDetails | Sort-Object { ($_.Labels -split ',')[0] }

# Output JSON raw (optional)
$sorted | ConvertTo-Json -Depth 6 | Out-File -Encoding UTF8 $OutputJson

Write-Info "Saving CSV to $OutputCsv ..."
$sorted | Export-Csv $OutputCsv -NoTypeInformation
Write-Host "✅ Done. Generated $($prDetails.Count) PR rows." -ForegroundColor Green
80
.github/skills/release-note-generation/scripts/find-commit-by-title.ps1
vendored
Normal file
@@ -0,0 +1,80 @@
<#
.SYNOPSIS
Find a commit on a branch that has the same subject line as a reference commit.

.DESCRIPTION
Given a commit SHA (often from a release tag) and a branch name, this script
resolves the reference commit's subject, then searches the branch history for
commits with the exact same subject line. Useful when the release tag commit
is not reachable from your current branch history.

.PARAMETER Commit
The reference commit SHA or ref (e.g., v0.96.1 or a full SHA).

.PARAMETER Branch
The branch to search (e.g., stable or main). Defaults to stable.

.PARAMETER RepoPath
Path to the local repo. Defaults to the current directory.

.EXAMPLE
pwsh ./find-commit-by-title.ps1 -Commit b62f6421845f7e5c92b8186868d98f46720db442 -Branch stable
#>
[CmdletBinding()]
param(
    [Parameter(Mandatory = $true)][string]$Commit,
    [string]$Branch = "stable",
    [string]$RepoPath = "."
)

function Write-Info($msg) { Write-Host $msg -ForegroundColor Cyan }
function Write-Err($msg) { Write-Host $msg -ForegroundColor Red }

Push-Location $RepoPath
try {
    Write-Info "Fetching latest '$Branch' from origin (with tags)..."
    git fetch origin $Branch --tags | Out-Null
    if ($LASTEXITCODE -ne 0) { throw "git fetch origin $Branch --tags failed" }

    $commitSha = (git rev-parse --verify $Commit) 2>$null
    if (-not $commitSha) { throw "Commit '$Commit' not found" }

    $subject = (git show -s --format=%s $commitSha) 2>$null
    if (-not $subject) { throw "Unable to read subject for '$commitSha'" }

    $branchRef = $Branch
    $branchSha = (git rev-parse --verify $branchRef) 2>$null
    if (-not $branchSha) {
        $branchRef = "origin/$Branch"
        $branchSha = (git rev-parse --verify $branchRef) 2>$null
    }
    if (-not $branchSha) { throw "Branch '$Branch' not found" }

    Write-Info "Reference commit: $commitSha"
    Write-Info "Reference title: $subject"
    Write-Info "Searching branch: $branchRef"

    # Use a dedicated variable; assigning to the automatic $matches variable is error-prone.
    $logLines = git log $branchRef --format="%H|%s" | Where-Object { $_ -match '\|' }
    $results = @()
    foreach ($line in $logLines) {
        $parts = $line -split '\|', 2
        if ($parts.Count -eq 2 -and $parts[1] -eq $subject) {
            $results += [PSCustomObject]@{ Sha = $parts[0]; Title = $parts[1] }
        }
    }

    if (-not $results -or $results.Count -eq 0) {
        Write-Info "No matching commit found on $branchRef for the given title."
        exit 0
    }

    Write-Info ("Found {0} matching commit(s):" -f $results.Count)
    $results | ForEach-Object { Write-Host ("{0} {1}" -f $_.Sha, $_.Title) }
}
catch {
    Write-Err $_
    exit 1
}
finally {
    Pop-Location
}
85
.github/skills/release-note-generation/scripts/group-prs-by-label.ps1
vendored
Normal file
@@ -0,0 +1,85 @@
<#
.SYNOPSIS
Group PR rows by their Labels column and emit per-label CSV files.

.DESCRIPTION
Reads a milestone PR CSV (usually produced by the dump-prs-information / dump-prs-since-commit scripts),
splits rows by label list, normalizes/sorts individual labels, and writes one CSV per unique label combination.
Each output preserves the original row ordering within that subset and the column order from the source.

.PARAMETER CsvPath
Input CSV containing PR rows with a 'Labels' column (comma-separated list).

.PARAMETER OutDir
Output directory to place grouped CSVs (created if missing). Default: 'grouped_csv'.

.NOTES
Label combinations are joined using ' | ' when multiple labels are present. Filenames are sanitized (invalid characters,
whitespace collapsed) and truncated to <= 120 characters.
#>
param(
    [string]$CsvPath = "sorted_prs.csv",
    [string]$OutDir = "grouped_csv"
)

$ErrorActionPreference = 'Stop'

function Write-Info($msg) { Write-Host "[info] $msg" -ForegroundColor Cyan }
function Write-Warn($msg) { Write-Host "[warn] $msg" -ForegroundColor Yellow }

if (-not (Test-Path -LiteralPath $CsvPath)) { throw "CSV not found: $CsvPath" }

Write-Info "Reading CSV: $CsvPath"
$rows = Import-Csv -LiteralPath $CsvPath
Write-Info ("Loaded {0} rows" -f $rows.Count)

function ConvertTo-SafeFileName {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)][string]$Name
    )
    if ([string]::IsNullOrWhiteSpace($Name)) { return 'Unnamed' }
    $s = $Name -replace '[<>:"/\\|?*]', '-'  # invalid path chars
    $s = $s -replace '\s+', '-'              # spaces to dashes
    $s = $s -replace '-{2,}', '-'            # collapse dashes
    $s = $s.Trim('-')
    if ($s.Length -gt 120) { $s = $s.Substring(0,120).Trim('-') }
    if ([string]::IsNullOrWhiteSpace($s)) { return 'Unnamed' }
    return $s
}

# Build groups keyed by normalized, sorted label combinations. Preserve original CSV row order.
$groups = @{}
foreach ($row in $rows) {
    $labelsRaw = $row.Labels
    if ([string]::IsNullOrWhiteSpace($labelsRaw)) {
        $labelParts = @('Unlabeled')
    } else {
        $parts = $labelsRaw -split ',' | ForEach-Object { $_.Trim() } | Where-Object { $_ }
        if (-not $parts -or $parts.Count -eq 0) { $labelParts = @('Unlabeled') }
        else { $labelParts = $parts | Sort-Object }
    }

    $key = ($labelParts -join ' | ')
    if (-not $groups.ContainsKey($key)) { $groups[$key] = New-Object System.Collections.ArrayList }
    [void]$groups[$key].Add($row)
}

if (-not (Test-Path -LiteralPath $OutDir)) {
    Write-Info "Creating output directory: $OutDir"
    New-Item -ItemType Directory -Path $OutDir | Out-Null
}

Write-Info ("Generating {0} grouped CSV file(s) into: {1}" -f $groups.Count, $OutDir)

foreach ($key in $groups.Keys) {
    $labelParts = if ($key -eq 'Unlabeled') { @('Unlabeled') } else { $key -split '\s\|\s' }
    $safeName = ($labelParts | ForEach-Object { ConvertTo-SafeFileName -Name $_ }) -join '-'
    $filePath = Join-Path $OutDir ("$safeName.csv")

    # Keep same columns and order
    $groups[$key] | Export-Csv -LiteralPath $filePath -NoTypeInformation -Encoding UTF8
}

Write-Info "Done. Sample output files:"
Get-ChildItem -LiteralPath $OutDir | Select-Object -First 10 Name | Format-Table -HideTableHeaders
13
.vscode/mcp.json
vendored
Normal file
@@ -0,0 +1,13 @@
{
  "servers": {
    "github-artifacts": {
      "command": "node",
      "args": [
        "tools/mcp/github-artifacts/launch.js"
      ],
      "env": {
        "GITHUB_TOKEN": "${env:GITHUB_TOKEN}"
      }
    }
  }
}
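
To sanity-check the server outside VS Code, a minimal sketch (assuming Node is installed and run from the repository root; the token value is a placeholder) is to launch it directly. It speaks MCP over stdio, so it will sit waiting for a client:

```powershell
# Smoke-test the MCP server; press Ctrl+C to stop.
$env:GITHUB_TOKEN = '<your token>'   # placeholder
node tools/mcp/github-artifacts/launch.js
```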
2
tools/mcp/github-artifacts/.gitignore
vendored
Normal file
@@ -0,0 +1,2 @@
node_modules/
package-lock.json
14
tools/mcp/github-artifacts/launch.js
Normal file
@@ -0,0 +1,14 @@
import { existsSync } from "fs";
import { execSync } from "child_process";
import { fileURLToPath } from "url";
import { dirname } from "path";

// Run relative to this script's directory regardless of the caller's cwd.
const __dirname = dirname(fileURLToPath(import.meta.url));
process.chdir(__dirname);

// First launch: install dependencies before loading the server.
if (!existsSync("node_modules")) {
  console.log("[MCP] Installing dependencies...");
  execSync("npm install", { stdio: "inherit" });
}

// Dynamic import so the server module resolves only after dependencies exist.
import("./server.js");
19
tools/mcp/github-artifacts/package.json
Normal file
@@ -0,0 +1,19 @@
{
  "name": "github-artifacts",
  "version": "1.0.0",
  "description": "",
  "main": "server.js",
  "scripts": {
    "test": "node test-github_issue_images.js && node test-github_issue_attachments.js",
    "start": "node server.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "type": "module",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.25.2",
    "jszip": "^3.10.1",
    "zod": "^3.25.76"
  }
}
385
tools/mcp/github-artifacts/server.js
Normal file
@@ -0,0 +1,385 @@
|
||||
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
|
||||
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
|
||||
import { z } from "zod";
|
||||
import fs from "fs/promises";
|
||||
import path from "path";
|
||||
import { execFile } from "child_process";
|
||||
|
||||
const server = new McpServer({
|
||||
name: "issue-images",
|
||||
version: "0.1.0"
|
||||
});
|
||||
|
||||
const GH_API_PER_PAGE = 100;
|
||||
const MAX_TEXT_FILE_BYTES = 100000;
|
||||
// Limit to common text files to avoid binary blobs, huge payloads, and non-UTF8 noise.
|
||||
const TEXT_EXTENSIONS = new Set([
|
||||
".txt", ".log", ".json", ".xml", ".yaml", ".yml", ".md", ".csv",
|
||||
".ini", ".config", ".conf", ".bat", ".ps1", ".sh", ".reg", ".etl"
|
||||
]);
|
||||

function extractImageUrls(markdownOrHtml) {
  const urls = new Set();

  // Markdown images: ![alt](url)
  for (const m of markdownOrHtml.matchAll(/!\[[^\]]*?\]\((https?:\/\/[^\s)]+)\)/g)) {
    urls.add(m[1]);
  }

  // HTML <img src="...">
  for (const m of markdownOrHtml.matchAll(/<img[^>]+src="(https?:\/\/[^">]+)"/g)) {
    urls.add(m[1]);
  }

  return [...urls];
}

function extractZipUrls(markdownOrHtml) {
  const urls = new Set();

  // Markdown links to .zip files: [text](url.zip)
  for (const m of markdownOrHtml.matchAll(/\[[^\]]*?\]\((https?:\/\/[^\s)]+\.zip)\)/gi)) {
    urls.add(m[1]);
  }

  // Plain URLs ending in .zip
  for (const m of markdownOrHtml.matchAll(/(https?:\/\/[^\s<>"]+\.zip)/gi)) {
    urls.add(m[1]);
  }

  return [...urls];
}
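
// Quick sanity check of the two extractors above (illustrative input only;
// these URLs are made up and do not point at real attachments):
//   const sample =
//     "Repro: ![screenshot](https://user-images.example.com/shot.png)\n" +
//     "Logs: [report](https://github.example.com/files/1/PowerToysReport.zip)";
//   extractImageUrls(sample); // ["https://user-images.example.com/shot.png"]
//   extractZipUrls(sample);   // ["https://github.example.com/files/1/PowerToysReport.zip"]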

async function fetchJson(url, token) {
  const res = await fetch(url, {
    headers: {
      "Accept": "application/vnd.github+json",
      ...(token ? { "Authorization": `Bearer ${token}` } : {}),
      "X-GitHub-Api-Version": "2022-11-28",
      "User-Agent": "issue-images-mcp"
    }
  });
  if (!res.ok) throw new Error(`GitHub API failed: ${res.status} ${res.statusText}`);
  return await res.json();
}

async function downloadBytes(url, token) {
  const res = await fetch(url, {
    headers: {
      ...(token ? { "Authorization": `Bearer ${token}` } : {}),
      "User-Agent": "issue-images-mcp"
    }
  });
  if (!res.ok) throw new Error(`Image download failed: ${res.status} ${res.statusText}`);
  const buf = new Uint8Array(await res.arrayBuffer());
  const ct = res.headers.get("content-type") || "image/png";
  return { buf, mimeType: ct };
}

async function downloadZipBytes(url, token) {
  const zipUrl = url.includes("?") ? url : `${url}?download=1`;

  const tryFetch = async (useAuth) => {
    const res = await fetch(zipUrl, {
      headers: {
        "Accept": "application/octet-stream",
        ...(useAuth && token ? { "Authorization": `Bearer ${token}` } : {}),
        "User-Agent": "issue-images-mcp"
      },
      redirect: "follow"
    });

    if (!res.ok) throw new Error(`ZIP download failed: ${res.status} ${res.statusText}`);

    const contentType = (res.headers.get("content-type") || "").toLowerCase();
    const buf = new Uint8Array(await res.arrayBuffer());

    return { buf, contentType };
  };

  let { buf, contentType } = await tryFetch(true);
  const isZip = buf.length >= 4 && buf[0] === 0x50 && buf[1] === 0x4b;

  if (!isZip) {
    ({ buf, contentType } = await tryFetch(false));
  }

  const isZipRetry = buf.length >= 4 && buf[0] === 0x50 && buf[1] === 0x4b;

  if (!isZipRetry || contentType.includes("text/html") || buf.length < 100) {
    throw new Error("ZIP download returned HTML or invalid data. Check permissions or rate limits.");
  }

  return buf;
}
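
// The 0x50 0x4b comparison above checks the "PK" signature that opens every ZIP
// archive, which is how downloadZipBytes tells a real archive apart from an HTML
// error page. The unauthenticated retry appears to guard against redirect targets
// (e.g. pre-signed storage URLs) that reject requests still carrying an
// Authorization header even though the anonymous request succeeds.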

function execFileAsync(file, args) {
  return new Promise((resolve, reject) => {
    execFile(file, args, (error, stdout, stderr) => {
      if (error) {
        reject(new Error(stderr || error.message));
      } else {
        resolve(stdout);
      }
    });
  });
}

async function extractZipToFolder(zipPath, extractPath) {
  if (process.platform === "win32") {
    await execFileAsync("powershell", [
      "-NoProfile",
      "-NonInteractive",
      "-Command",
      `$ProgressPreference='SilentlyContinue'; Expand-Archive -Path \"${zipPath}\" -DestinationPath \"${extractPath}\" -Force -ErrorAction Stop | Out-Null`
    ]);
    return;
  }

  await execFileAsync("unzip", ["-o", zipPath, "-d", extractPath]);
}

async function listFilesRecursively(dir) {
  const entries = await fs.readdir(dir, { withFileTypes: true });
  const files = [];
  for (const entry of entries) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      files.push(...await listFilesRecursively(fullPath));
    } else {
      files.push(fullPath);
    }
  }
  return files;
}

async function pathExists(p) {
  try {
    await fs.access(p);
    return true;
  } catch {
    return false;
  }
}

async function fetchAllComments(owner, repo, issueNumber, token) {
  let comments = [];
  let page = 1;

  while (true) {
    const pageComments = await fetchJson(
      `https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}/comments?per_page=${GH_API_PER_PAGE}&page=${page}`,
      token
    );
    comments = comments.concat(pageComments);
    if (pageComments.length < GH_API_PER_PAGE) break;
    page++;
  }

  return comments;
}

async function fetchIssueAndComments(owner, repo, issueNumber, token) {
  const issue = await fetchJson(`https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}`, token);
  const comments = issue.comments > 0 ? await fetchAllComments(owner, repo, issueNumber, token) : [];
  return { issue, comments };
}

function buildBlobs(issue, comments) {
  return [issue.body || "", ...comments.map(c => c.body || "")].join("\n\n---\n\n");
}

server.registerTool(
  "github_issue_images",
  {
    title: "GitHub Issue Images",
    description: `Download and return images from a GitHub issue or pull request.

USE THIS TOOL WHEN:
- User asks about a GitHub issue/PR that contains screenshots, images, or visual content
- User wants to understand a bug report with attached images
- User asks to analyze, describe, or review images in an issue/PR
- User references a GitHub issue/PR URL and the context suggests images are relevant
- User asks about UI bugs, visual glitches, design issues, or anything visual in nature

WHAT IT DOES:
- Fetches all images from the issue/PR body and all comments
- Returns actual image data (not just URLs) so the LLM can see and analyze the images
- Supports PNG, JPEG, GIF, and other common image formats

EXAMPLES OF WHEN TO USE:
- "What does the bug in issue #123 look like?"
- "Can you see the screenshot in this PR?"
- "Analyze the images in microsoft/PowerToys#25595"
- "What UI problem is shown in this issue?"`,
    inputSchema: {
      owner: z.string(),
      repo: z.string(),
      issueNumber: z.number(),
      maxImages: z.number().min(1).max(20).optional()
    },
    outputSchema: {
      images: z.number(),
      comments: z.number()
    }
  },
  async ({ owner, repo, issueNumber, maxImages = 20 }) => {
    try {
      const token = process.env.GITHUB_TOKEN;
      const { issue, comments } = await fetchIssueAndComments(owner, repo, issueNumber, token);
      const blobs = buildBlobs(issue, comments);
      const urls = extractImageUrls(blobs).slice(0, maxImages);

      const content = [
        { type: "text", text: `Found ${urls.length} image(s) in issue #${issueNumber} (from ${comments.length} comments). Returning as image parts.` }
      ];

      for (const url of urls) {
        const { buf, mimeType } = await downloadBytes(url, token);
        const b64 = Buffer.from(buf).toString("base64");
        content.push({ type: "image", data: b64, mimeType });
        content.push({ type: "text", text: `Image source: ${url}` });
      }

      const output = { images: urls.length, comments: comments.length };
      return { content, structuredContent: output };
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return { content: [{ type: "text", text: `Error: ${message}` }], isError: true };
    }
  }
);
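
// Shape of a successful github_issue_images result (illustrative values, not captured output):
// {
//   content: [
//     { type: "text",  text: "Found 2 image(s) in issue #123 (from 17 comments). Returning as image parts." },
//     { type: "image", data: "<base64...>", mimeType: "image/png" },
//     { type: "text",  text: "Image source: https://..." }
//   ],
//   structuredContent: { images: 2, comments: 17 }
// }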

server.registerTool(
  "github_issue_attachments",
  {
    title: "GitHub Issue Attachments",
    description: `Download and extract ZIP file attachments from a GitHub issue or pull request.

USE THIS TOOL WHEN:
- User asks about diagnostic logs, crash reports, or debug information in an issue
- Issue contains ZIP attachments like PowerToysReport_*.zip, logs.zip, debug.zip
- User wants to analyze log files, configuration files, or system info from an issue
- User asks about error logs, stack traces, or diagnostic data attached to an issue
- Issue mentions attached files that need to be examined

WHAT IT DOES:
- Finds all ZIP file attachments in the issue body and comments
- Downloads and extracts each ZIP to a local folder
- Returns file listing and contents of text files (logs, json, xml, txt, etc.)
- Each ZIP is extracted to: {extractFolder}/{zipFileName}/

EXAMPLES OF WHEN TO USE:
- "What's in the diagnostic report attached to issue #39476?"
- "Can you check the logs in the PowerToysReport zip?"
- "Analyze the crash dump attached to this issue"
- "What error is shown in the attached log files?"`,
    inputSchema: {
      owner: z.string(),
      repo: z.string(),
      issueNumber: z.number(),
      extractFolder: z.string(),
      maxFiles: z.number().min(1).optional()
    },
    outputSchema: {
      zips: z.number(),
      extracted: z.number(),
      extractedTo: z.array(z.string())
    }
  },
  async ({ owner, repo, issueNumber, extractFolder, maxFiles = 50 }) => {
    try {
      const token = process.env.GITHUB_TOKEN;
      const { issue, comments } = await fetchIssueAndComments(owner, repo, issueNumber, token);
      const blobs = buildBlobs(issue, comments);
      const zipUrls = extractZipUrls(blobs);

      if (zipUrls.length === 0) {
        return {
          content: [{ type: "text", text: `No ZIP attachments found in issue #${issueNumber}.` }],
          structuredContent: { zips: 0, extracted: 0, extractedTo: [] }
        };
      }

      await fs.mkdir(extractFolder, { recursive: true });

      const content = [
        { type: "text", text: `Found ${zipUrls.length} ZIP attachment(s) in issue #${issueNumber}. Extracting to: ${extractFolder}` }
      ];

      let totalFilesReturned = 0;
      const extractedPaths = [];
      let extractedCount = 0;

      for (const zipUrl of zipUrls) {
        try {
          const urlPath = new URL(zipUrl).pathname;
          const zipFileName = path.basename(urlPath, ".zip");
          const extractPath = path.join(extractFolder, zipFileName);
          const zipPath = path.join(extractFolder, `${zipFileName}.zip`);

          let extractedFiles = [];
          const extractPathExists = await pathExists(extractPath);

          if (extractPathExists) {
            extractedFiles = await listFilesRecursively(extractPath);
          }

          if (!extractPathExists || extractedFiles.length === 0) {
            const zipExists = await pathExists(zipPath);
            if (!zipExists) {
              const buf = await downloadZipBytes(zipUrl, token);
              await fs.writeFile(zipPath, buf);
            }

            await fs.mkdir(extractPath, { recursive: true });
            await extractZipToFolder(zipPath, extractPath);
            extractedFiles = await listFilesRecursively(extractPath);
          }

          extractedPaths.push(extractPath);
          extractedCount++;

          const fileList = [];
          const textContents = [];

          for (const fullPath of extractedFiles) {
            const relPath = path.relative(extractPath, fullPath).replace(/\\/g, "/");
            const ext = path.extname(relPath).toLowerCase();
            const stat = await fs.stat(fullPath);
            const sizeKB = Math.round(stat.size / 1024);
            fileList.push(` ${relPath} (${sizeKB} KB)`);

            if (TEXT_EXTENSIONS.has(ext) && totalFilesReturned < maxFiles && stat.size < MAX_TEXT_FILE_BYTES) {
              try {
                const textContent = await fs.readFile(fullPath, "utf-8");
                textContents.push({ path: relPath, content: textContent });
                totalFilesReturned++;
              } catch {
                // Not valid UTF-8, skip
              }
            }
          }

          content.push({ type: "text", text: `\n📦 ${zipFileName}.zip extracted to: ${extractPath}\nFiles:\n${fileList.join("\n")}` });

          for (const { path: fPath, content: fContent } of textContents) {
            content.push({ type: "text", text: `\n--- ${fPath} ---\n${fContent}` });
          }
        } catch (err) {
          const message = err instanceof Error ? err.message : String(err);
          content.push({ type: "text", text: `❌ Failed to extract ${zipUrl}: ${message}` });
        }
      }

      const output = { zips: zipUrls.length, extracted: extractedCount, extractedTo: extractedPaths };
      return { content, structuredContent: output };
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return { content: [{ type: "text", text: `Error: ${message}` }], isError: true };
    }
  }
);
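
// Note the caching above: a ZIP is downloaded only if {extractFolder}/{name}.zip is
// absent, and re-extracted only if the target folder is missing or empty, so repeated
// calls on the same issue are cheap; a per-ZIP failure is reported inline without
// aborting the remaining attachments.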

const transport = new StdioServerTransport();
await server.connect(transport);
141
tools/mcp/github-artifacts/test-github_issue_attachments.js
Normal file
@@ -0,0 +1,141 @@
// Test script for github_issue_attachments tool
// Run with: node test-github_issue_attachments.js
// Make sure GITHUB_TOKEN is set in environment

import { spawn } from "child_process";
import fs from "fs/promises";
import path from "path";
import { fileURLToPath } from "url";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const extractFolder = path.join(__dirname, "test-extracts");

const server = spawn("node", ["server.js"], {
  stdio: ["pipe", "pipe", "inherit"],
  env: { ...process.env }
});

// Send initialize request
const initRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "test-client", version: "1.0.0" }
  }
};

server.stdin.write(JSON.stringify(initRequest) + "\n");

// Send list tools request
setTimeout(() => {
  const listToolsRequest = {
    jsonrpc: "2.0",
    id: 2,
    method: "tools/list",
    params: {}
  };
  server.stdin.write(JSON.stringify(listToolsRequest) + "\n");
}, 500);

// Send call tool request - test with PowerToys issue that has ZIP attachments
setTimeout(() => {
  const callToolRequest = {
    jsonrpc: "2.0",
    id: 3,
    method: "tools/call",
    params: {
      name: "github_issue_attachments",
      arguments: {
        owner: "microsoft",
        repo: "PowerToys",
        issueNumber: 39476, // Has PowerToysReport_*.zip attachment
        extractFolder: extractFolder,
        maxFiles: 20
      }
    }
  };
  server.stdin.write(JSON.stringify(callToolRequest) + "\n");
}, 1000);

// Track summary
let summary = { zips: 0, files: 0, extractPath: "" };
let buffer = "";

// Read responses
server.stdout.on("data", (data) => {
  buffer += data.toString();

  // Try to parse complete JSON objects from buffer
  const lines = buffer.split("\n");
  buffer = lines.pop() || ""; // Keep incomplete line in buffer

  for (const line of lines) {
    if (!line.trim()) continue;
    try {
      const response = JSON.parse(line);
      console.log("\n=== Response ===");
      console.log("ID:", response.id);
      if (response.result?.tools) {
        console.log("Tools:", response.result.tools.map(t => t.name));
      } else if (response.result?.content) {
        for (const item of response.result.content) {
          if (item.type === "text") {
            // Truncate long file contents for display
            const text = item.text;
            if (text.startsWith("---") && text.length > 500) {
              console.log(text.substring(0, 500) + "\n... [truncated]");
            } else {
              console.log(text);
            }

            // Track stats
            if (text.includes("ZIP attachment")) {
              const match = text.match(/Found (\d+) ZIP/);
              if (match) summary.zips = parseInt(match[1]);
            }
            if (text.includes("extracted to:")) {
              summary.extractPath = text.match(/extracted to: (.+)/)?.[1] || "";
            }
            if (text.includes("Files:")) {
              summary.files = (text.match(/ /g) || []).length;
            }
          }
        }
      } else {
        console.log("Result:", JSON.stringify(response.result, null, 2));
      }
    } catch (e) {
      // Likely incomplete JSON, will be in next chunk
    }
  }
});

// Exit after 60 seconds (ZIP download may take time)
setTimeout(() => {
  console.log("\n" + "=".repeat(50));
  console.log("=== Test Summary ===");
  console.log("=".repeat(50));
  console.log(`ZIP files found: ${summary.zips}`);
  console.log(`Files extracted: ${summary.files}`);
  if (summary.extractPath) {
    console.log(`Extract location: ${summary.extractPath}`);
  }
  console.log("=".repeat(50));
  console.log("Cleaning up extracted files...");
  fs.rm(extractFolder, { recursive: true, force: true })
    .then(() => {
      console.log("Cleanup done.");
    })
    .catch((err) => {
      console.log(`Cleanup failed: ${err.message}`);
    })
    .finally(() => {
      console.log("Test complete!");
      server.kill();
      process.exit(0);
    });
}, 60000);
118
tools/mcp/github-artifacts/test-github_issue_images.js
Normal file
@@ -0,0 +1,118 @@
// Simple test script - run with: node test-github_issue_images.js
// Make sure GITHUB_TOKEN is set in environment

import { spawn } from "child_process";

const server = spawn("node", ["server.js"], {
  stdio: ["pipe", "pipe", "inherit"],
  env: { ...process.env }
});

// Send initialize request
const initRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "test-client", version: "1.0.0" }
  }
};

server.stdin.write(JSON.stringify(initRequest) + "\n");

// Send list tools request
setTimeout(() => {
  const listToolsRequest = {
    jsonrpc: "2.0",
    id: 2,
    method: "tools/list",
    params: {}
  };
  server.stdin.write(JSON.stringify(listToolsRequest) + "\n");
}, 500);

// Send call tool request (test with a real issue)
setTimeout(() => {
  const callToolRequest = {
    jsonrpc: "2.0",
    id: 3,
    method: "tools/call",
    params: {
      name: "github_issue_images",
      arguments: {
        owner: "microsoft",
        repo: "PowerToys",
        issueNumber: 25595, // 315 comments, many images - tests pagination!
        maxImages: 5
      }
    }
  };
  server.stdin.write(JSON.stringify(callToolRequest) + "\n");
}, 1000);

// Track summary
let summary = { images: 0, totalKB: 0, text: "" };
let gotToolResponse = false;
let buffer = "";

// Read responses
server.stdout.on("data", (data) => {
  buffer += data.toString();

  // Try to parse complete JSON objects from buffer
  const lines = buffer.split("\n");
  buffer = lines.pop() || ""; // Keep incomplete line in buffer

  for (const line of lines) {
    if (!line.trim()) continue;
    try {
      const response = JSON.parse(line);
      console.log("\n=== Response ===");
      console.log("ID:", response.id);
      if (response.result?.tools) {
        console.log("Tools:", response.result.tools.map(t => t.name));
      } else if (response.result?.content) {
        gotToolResponse = true;
        let imageCount = 0;
        for (const item of response.result.content) {
          if (item.type === "text") {
            console.log("Text:", item.text);
            summary.text = item.text;
          } else if (item.type === "image") {
            imageCount++;
            const sizeKB = Math.round(item.data.length * 0.75 / 1024); // base64 to actual size
            console.log(` [Image ${imageCount}] ${item.mimeType} - ${sizeKB} KB downloaded`);
            summary.images++;
            summary.totalKB += sizeKB;
          }
        }
      } else {
        console.log("Result:", JSON.stringify(response.result, null, 2));
      }
    } catch (e) {
      // Likely incomplete JSON, will be in next chunk
    }
  }
});

// Exit after 60 seconds (more time for downloads)
setTimeout(() => {
  console.log("\n" + "=".repeat(50));
  console.log("=== Test Summary ===");
  console.log("=".repeat(50));
  if (summary.text) {
    console.log(summary.text);
  }
  if (summary.images > 0) {
    console.log(`Total images downloaded: ${summary.images}`);
    console.log(`Total size: ${summary.totalKB} KB`);
  } else if (!gotToolResponse) {
    console.log("No tool response received yet. The request may still be running or was rate-limited.");
  }
  console.log("=".repeat(50));
  console.log("Test complete!");
  server.kill();
  process.exit(0);
}, 60000);