Compare commits

...

10 Commits

Author SHA1 Message Date
Shawn Yuan (from Dev Box)
b2ff69e4c5 update 2026-01-23 15:52:21 +08:00
Shawn Yuan (from Dev Box)
74511b3814 added telemetry 2026-01-23 15:29:00 +08:00
Shawn Yuan
086c63b6af [Settings] Fix right click menu display issue (#44982)
<!-- Enter a brief description/summary of your PR here. What does it
fix/what does it change/how was it tested (even manually, if necessary)?
-->
## Summary of the Pull Request
This pull request updates the tray icon context menu logic to better
reflect the state of the "Quick Access" feature. The menu now
dynamically updates its items and labels based on whether Quick Access
is enabled or disabled, improving clarity for users.

**Menu behavior improvements:**

* The tray icon menu now reloads itself when the Quick Access setting
changes, ensuring the menu always matches the current state.
* The "Settings" menu item label changes to "Settings\tLeft-click" when
Quick Access is disabled, providing clearer instructions to users.
[[1]](diffhunk://#diff-e5efbda4c356e159a6ca82a425db84438ab4014d1d90377b98a2eb6d9632d32dR176-R179)
[[2]](diffhunk://#diff-7139ecb2cf76e472c574a155268c19e919e2cce05d9d345c50c1f1bffc939e1aR198-R248)
* The Quick Access menu item is removed from the context menu when the
feature is disabled, preventing confusion.

**Internal state tracking:**

* Added a new variable `last_quick_access_state` to track the previous
Quick Access state and trigger menu reloads only when necessary (sketched below).
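A rough sketch of the state-tracking pattern described above. This is illustrative TypeScript with hypothetical names; the actual runner code is native and differs in detail:

```typescript
// Illustrative sketch only; names are hypothetical and do not match the runner code.
let lastQuickAccessState: boolean | undefined;

function onSettingsChanged(quickAccessEnabled: boolean): void {
  // Reload the tray menu only when the Quick Access state actually changed.
  if (lastQuickAccessState !== quickAccessEnabled) {
    lastQuickAccessState = quickAccessEnabled;
    reloadTrayMenu(quickAccessEnabled);
  }
}

function reloadTrayMenu(quickAccessEnabled: boolean): void {
  const items = [
    // When Quick Access is disabled, left-click opens Settings, so say so in the label.
    { id: 'settings', label: quickAccessEnabled ? 'Settings' : 'Settings\tLeft-click' },
  ];
  if (quickAccessEnabled) {
    items.push({ id: 'quick-access', label: 'Quick Access' });
  }
  rebuildContextMenu(items);
}

// Hypothetical host API that rebuilds the native context menu from the item list.
declare function rebuildContextMenu(items: { id: string; label: string }[]): void;
```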
<!-- Please review the items on the PR checklist before submitting-->
## PR Checklist

- [x] Closes: #44810
<!-- - [ ] Closes: #yyy (add separate lines for additional resolved
issues) -->
- [x] **Communication:** I've discussed this with core contributors
already. If the work hasn't been agreed, this work might be rejected
- [x] **Tests:** Added/updated and all pass
- [x] **Localization:** All end-user-facing strings can be localized
- [ ] **Dev docs:** Added/updated
- [ ] **New binaries:** Added on the required places
- [ ] [JSON for
signing](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ESRPSigning_core.json)
for new binaries
- [ ] [WXS for
installer](https://github.com/microsoft/PowerToys/blob/main/installer/PowerToysSetup/Product.wxs)
for new binaries and localization folder
- [ ] [YML for CI
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ci/templates/build-powertoys-steps.yml)
for new test projects
- [ ] [YML for signed
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/release.yml)
- [ ] **Documentation updated:** If checked, please file a pull request
on [our docs
repo](https://github.com/MicrosoftDocs/windows-uwp/tree/docs/hub/powertoys)
and link it here: #xxx

<!-- Provide a more detailed description of the PR, other things fixed,
or any additional comments/features here -->
## Detailed Description of the Pull Request / Additional comments
- When Quick Access is disabled

<img width="1537" height="312" alt="image"
src="https://github.com/user-attachments/assets/5d51f24e-ccb4-4973-afaa-8b64cc35db87"
/>

- When Quick Access is enabled
<img width="1601" height="201" alt="image"
src="https://github.com/user-attachments/assets/56366d10-bcec-4892-b2d2-f8213ad726aa"
/>

<!-- Describe how you validated the behavior. Add automated tests
wherever possible, but list manual validation steps taken as well -->
## Validation Steps Performed
2026-01-23 14:47:35 +08:00
moooyo
d192672c74 fix: Improve Unicode normalization and add regex metachar tests (#44944)
Enhanced SanitizeAndNormalize to handle Unicode normalization more
robustly, ensuring correct buffer sizing and error handling. Added unit
tests for regex metacharacters `$` and `^` to verify correct replacement
behavior at string boundaries. Improves Unicode support and test
coverage for regex edge cases.

<!-- Enter a brief description/summary of your PR here. What does it
fix/what does it change/how was it tested (even manually, if necessary)?
-->
## Summary of the Pull Request

<!-- Please review the items on the PR checklist before submitting-->
## PR Checklist

- [x] Closes: #44942 #44892
<!-- - [ ] Closes: #yyy (add separate lines for additional resolved
issues) -->
- [ ] **Communication:** I've discussed this with core contributors
already. If the work hasn't been agreed, this work might be rejected
- [ ] **Tests:** Added/updated and all pass
- [ ] **Localization:** All end-user-facing strings can be localized
- [ ] **Dev docs:** Added/updated
- [ ] **New binaries:** Added on the required places
- [ ] [JSON for
signing](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ESRPSigning_core.json)
for new binaries
- [ ] [WXS for
installer](https://github.com/microsoft/PowerToys/blob/main/installer/PowerToysSetup/Product.wxs)
for new binaries and localization folder
- [ ] [YML for CI
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ci/templates/build-powertoys-steps.yml)
for new test projects
- [ ] [YML for signed
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/release.yml)
- [ ] **Documentation updated:** If checked, please file a pull request
on [our docs
repo](https://github.com/MicrosoftDocs/windows-uwp/tree/docs/hub/powertoys)
and link it here: #xxx

<!-- Provide a more detailed description of the PR, other things fixed,
or any additional comments/features here -->
## Detailed Description of the Pull Request / Additional comments

<!-- Describe how you validated the behavior. Add automated tests
wherever possible, but list manual validation steps taken as well -->
## Validation Steps Performed

---------

Co-authored-by: Yu Leng <yuleng@microsoft.com>
2026-01-23 14:43:21 +08:00
Kai Tao
60b8419366 Runner TrayIcon: Monochrome icon should adapt to windows theme instead of the app theme (#44931)
<!-- Enter a brief description/summary of your PR here. What does it
fix/what does it change/how was it tested (even manually, if necessary)?
-->
## Summary of the Pull Request
As title
<!-- Please review the items on the PR checklist before submitting-->
## PR Checklist

- [X] Closes: #44891
<!-- - [ ] Closes: #yyy (add separate lines for additional resolved
issues) -->
- [ ] **Communication:** I've discussed this with core contributors
already. If the work hasn't been agreed, this work might be rejected
- [ ] **Tests:** Added/updated and all pass
- [ ] **Localization:** All end-user-facing strings can be localized
- [ ] **Dev docs:** Added/updated
- [ ] **New binaries:** Added on the required places
- [ ] [JSON for
signing](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ESRPSigning_core.json)
for new binaries
- [ ] [WXS for
installer](https://github.com/microsoft/PowerToys/blob/main/installer/PowerToysSetup/Product.wxs)
for new binaries and localization folder
- [ ] [YML for CI
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ci/templates/build-powertoys-steps.yml)
for new test projects
- [ ] [YML for signed
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/release.yml)
- [ ] **Documentation updated:** If checked, please file a pull request
on [our docs
repo](https://github.com/MicrosoftDocs/windows-uwp/tree/docs/hub/powertoys)
and link it here: #xxx

<!-- Provide a more detailed description of the PR, other things fixed,
or any additional comments/features here -->
## Detailed Description of the Pull Request / Additional comments

<!-- Describe how you validated the behavior. Add automated tests
wherever possible, but list manual validation steps taken as well -->
## Validation Steps Performed
============ System Light + App Light
<img width="903" height="239" alt="image"
src="https://github.com/user-attachments/assets/581606fb-99b5-4df9-a520-545a0c04676c"
/>
============ System Light + App Dark
<img width="991" height="239" alt="image"
src="https://github.com/user-attachments/assets/822009e9-57cf-452b-b3aa-f1cbc25883f8"
/>
============ System Dark + App Light
<img width="932" height="236" alt="image"
src="https://github.com/user-attachments/assets/98a56d48-31f0-4f75-95a4-8c7dc83c3866"
/>
============ System Dark + App Dark
<img width="903" height="236" alt="image"
src="https://github.com/user-attachments/assets/2500a0d5-6b27-403e-89b4-69b7d3b91e79"
/>
============
2026-01-23 10:47:19 +08:00
Kai Tao
d46a996fcd Cmdpal: use latest msix to install (#44886)
<!-- Enter a brief description/summary of your PR here. What does it
fix/what does it change/how was it tested (even manually, if necessary)?
-->
## Summary of the Pull Request
We should install the latest CmdPal MSIX.
<!-- Please review the items on the PR checklist before submitting-->
## PR Checklist

- [ ] Closes: #44860
<!-- - [ ] Closes: #yyy (add separate lines for additional resolved
issues) -->
- [ ] **Communication:** I've discussed this with core contributors
already. If the work hasn't been agreed, this work might be rejected
- [ ] **Tests:** Added/updated and all pass
- [ ] **Localization:** All end-user-facing strings can be localized
- [ ] **Dev docs:** Added/updated
- [ ] **New binaries:** Added on the required places
- [ ] [JSON for
signing](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ESRPSigning_core.json)
for new binaries
- [ ] [WXS for
installer](https://github.com/microsoft/PowerToys/blob/main/installer/PowerToysSetup/Product.wxs)
for new binaries and localization folder
- [ ] [YML for CI
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ci/templates/build-powertoys-steps.yml)
for new test projects
- [ ] [YML for signed
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/release.yml)
- [ ] **Documentation updated:** If checked, please file a pull request
on [our docs
repo](https://github.com/MicrosoftDocs/windows-uwp/tree/docs/hub/powertoys)
and link it here: #xxx

<!-- Provide a more detailed description of the PR, other things fixed,
or any additional comments/features here -->
## Detailed Description of the Pull Request / Additional comments

<!-- Describe how you validated the behavior. Add automated tests
wherever possible, but list manual validation steps taken as well -->
## Validation Steps Performed
2026-01-23 09:58:53 +08:00
Jiří Polášek
395850389f CmdPal: Improve loading of application icons - part 1 (#44938)
## Summary of the Pull Request

This PR improves loading of application icons:

- Fixes loading of icons from internet shortcuts

## Pictures? Pictures!

<img width="683" height="399" alt="image"
src="https://github.com/user-attachments/assets/5e566648-7b1a-4254-8afd-557a321b19d6"
/>


<!-- Please review the items on the PR checklist before submitting-->
## PR Checklist

- [x] Closes: #44819
<!-- - [ ] Closes: #yyy (add separate lines for additional resolved
issues) -->
- [ ] **Communication:** I've discussed this with core contributors
already. If the work hasn't been agreed, this work might be rejected
- [ ] **Tests:** Added/updated and all pass
- [ ] **Localization:** All end-user-facing strings can be localized
- [ ] **Dev docs:** Added/updated
- [ ] **New binaries:** Added on the required places
- [ ] [JSON for
signing](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ESRPSigning_core.json)
for new binaries
- [ ] [WXS for
installer](https://github.com/microsoft/PowerToys/blob/main/installer/PowerToysSetup/Product.wxs)
for new binaries and localization folder
- [ ] [YML for CI
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ci/templates/build-powertoys-steps.yml)
for new test projects
- [ ] [YML for signed
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/release.yml)
- [ ] **Documentation updated:** If checked, please file a pull request
on [our docs
repo](https://github.com/MicrosoftDocs/windows-uwp/tree/docs/hub/powertoys)
and link it here: #xxx

<!-- Provide a more detailed description of the PR, other things fixed,
or any additional comments/features here -->
## Detailed Description of the Pull Request / Additional comments

<!-- Describe how you validated the behavior. Add automated tests
wherever possible, but list manual validation steps taken as well -->
## Validation Steps Performed
2026-01-22 19:25:24 +01:00
Jessica Dene Earley-Cha
fbabdffcfa Update DATA_AND_PRIVACY to reflect current status (#44844)
<!-- Enter a brief description/summary of your PR here. What does it
fix/what does it change/how was it tested (even manually, if necessary)?
-->
## Summary of the Pull Request
Updates the telemetry events documentation in `DATA_AND_PRIVACY.md` to
ensure completeness and improve organization.
- Added missing & new events
- Removed obsolete events (deprecated or unused events)
- Alphabetized all modules for easier navigation and maintenance
- Made all tables uniform with `table-layout:fixed`



<!-- Please review the items on the PR checklist before submitting-->
## PR Checklist

- [ ] Closes: #xxx
<!-- - [ ] Closes: #yyy (add separate lines for additional resolved
issues) -->
- [ ] **Communication:** I've discussed this with core contributors
already. If the work hasn't been agreed, this work might be rejected
- [ ] **Tests:** Added/updated and all pass
- [ ] **Localization:** All end-user-facing strings can be localized
- [ ] **Dev docs:** Added/updated
- [ ] **New binaries:** Added on the required places
- [ ] [JSON for
signing](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ESRPSigning_core.json)
for new binaries
- [ ] [WXS for
installer](https://github.com/microsoft/PowerToys/blob/main/installer/PowerToysSetup/Product.wxs)
for new binaries and localization folder
- [ ] [YML for CI
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ci/templates/build-powertoys-steps.yml)
for new test projects
- [ ] [YML for signed
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/release.yml)
- [ ] **Documentation updated:** If checked, please file a pull request
on [our docs
repo](https://github.com/MicrosoftDocs/windows-uwp/tree/docs/hub/powertoys)
and link it here: #xxx

<!-- Provide a more detailed description of the PR, other things fixed,
or any additional comments/features here -->
## Detailed Description of the Pull Request / Additional comments

<!-- Describe how you validated the behavior. Add automated tests
wherever possible, but list manual validation steps taken as well -->
## Validation Steps Performed
2026-01-21 11:07:12 -08:00
Mike Hall
7cfa4d443c add script parameters to add user, machine, and devices names, default is to not include. (#44880)
## Summary of the Pull Request
Updates the screen capture/layout PowerShell script so that it does NOT include
the user name, machine name, or device names by default; these can be added to
the output JSON file using the new command-line parameters. A '-help' parameter was also added.

<!-- Please review the items on the PR checklist before submitting-->

- [ ] Closes: #xxx
<!-- - [ ] Closes: #yyy (add separate lines for additional resolved
issues) -->
- [ ] **Communication:** I've discussed this with core contributors
already. If the work hasn't been agreed, this work might be rejected
- [ ] **Tests:** Added/updated and all pass
- [ ] **Localization:** All end-user-facing strings can be localized
- [ ] **Dev docs:** Added/updated
- [ ] **New binaries:** Added on the required places
- [ ] [JSON for
signing](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ESRPSigning_core.json)
for new binaries
- [ ] [WXS for
installer](https://github.com/microsoft/PowerToys/blob/main/installer/PowerToysSetup/Product.wxs)
for new binaries and localization folder
- [ ] [YML for CI
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ci/templates/build-powertoys-steps.yml)
for new test projects
- [ ] [YML for signed
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/release.yml)
- [ ] **Documentation updated:** If checked, please file a pull request
on [our docs
repo](https://github.com/MicrosoftDocs/windows-uwp/tree/docs/hub/powertoys)
and link it here: #xxx

<!-- Provide a more detailed description of the PR, other things fixed,
or any additional comments/features here -->
## Detailed Description of the Pull Request / Additional comments

<!-- Describe how you validated the behavior. Add automated tests
wherever possible, but list manual validation steps taken as well -->
## Validation Steps Performed
Ran the script locally with each of the new command-line parameters and
validated the JSON output.
2026-01-21 12:58:00 +00:00
Gordon Lam
5422bc31bb docs(prompts): Enhance release notes generation and implement GitHub issue tools (#44762)
<!-- Enter a brief description/summary of your PR here. What does it
fix/what does it change/how was it tested (even manually, if necessary)?
-->
## Summary of the Pull Request
This update improves the release notes generation process by
restructuring documentation for clarity and enhancing workflows. It
introduces new tools for handling GitHub issue images and attachments,
along with automated tests to ensure functionality.

<!-- Please review the items on the PR checklist before submitting-->
## PR Checklist

- [ ] Closes: N/A
- [ ] **Communication:** I've discussed this with core contributors
already. If the work hasn't been agreed, this work might be rejected
- [x] **Tests:** Added/updated and all pass
- [ ] **Localization:** All end-user-facing strings can be localized
- [x] **Dev docs:** Added/updated
- [ ] **New binaries:** Added on the required places
- [ ] [JSON for
signing](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ESRPSigning_core.json)
for new binaries
- [ ] [WXS for
installer](https://github.com/microsoft/PowerToys/blob/main/installer/PowerToysSetup/Product.wxs)
for new binaries and localization folder
- [ ] [YML for CI
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/ci/templates/build-powertoys-steps.yml)
for new test projects
- [ ] [YML for signed
pipeline](https://github.com/microsoft/PowerToys/blob/main/.pipelines/release.yml)
- [x] **Documentation updated:** If checked, please file a pull request
on [our docs
repo](https://github.com/MicrosoftDocs/windows-uwp/tree/docs/hub/powertoys)
and link it here: N/A

<!-- Provide a more detailed description of the PR, other things fixed,
or any additional comments/features here -->
## Detailed Description of the Pull Request / Additional comments
The changes include updates to the release notes summarization process
and workflow documentation for better clarity. New scripts for managing
PR diffs, grouping, and milestone assignments have been added.
Additionally, tools for downloading images and attachments from GitHub
issues have been implemented, ensuring a more streamlined process.

<!-- Describe how you validated the behavior. Add automated tests
wherever possible, but list manual validation steps taken as well -->
## Validation Steps Performed
Automated tests were added for the new GitHub issue tools, and all tests
passed successfully. Manual validation included testing the new
workflows and ensuring the documentation accurately reflects the changes
made.
2026-01-21 16:18:57 +08:00
40 changed files with 3658 additions and 144 deletions


@@ -32,6 +32,7 @@ advfirewall
AFeature
affordances
AFX
agentskills
AGGREGATABLE
AHK
AHybrid
@@ -102,7 +103,6 @@ ASYNCWINDOWPLACEMENT
ASYNCWINDOWPOS
atl
ATRIOX
ATX
aumid
authenticode
AUTOBUDDY
@@ -219,6 +219,7 @@ CIELCh
cim
CImage
cla
claude
CLASSDC
classmethod
CLASSNOTAVAILABLE
@@ -296,7 +297,6 @@ cpcontrols
cph
cplusplus
CPower
cppcoreguidelines
cpptools
cppvsdbg
cppwinrt
@@ -325,7 +325,7 @@ CURRENTDIR
CURSORINFO
cursorpos
CURSORSHOWING
CURSORWRAP
cursorwrap
customaction
CUSTOMACTIONTEST
CUSTOMFORMATPLACEHOLDER
@@ -886,6 +886,7 @@ Ldr
LEFTALIGN
LEFTSCROLLBAR
LEFTTEXT
leftclick
LError
LEVELID
LExit
@@ -1011,6 +1012,7 @@ MENUITEMINFO
MENUITEMINFOW
MERGECOPY
MERGEPAINT
Metacharacter
metadatamatters
Metadatas
metafile
@@ -1040,6 +1042,7 @@ mmi
mmsys
mobileredirect
mockapi
modelcontextprotocol
MODALFRAME
MODESPRUNED
MONITORENUMPROC
@@ -1737,6 +1740,7 @@ STICKYKEYS
sticpl
storelogo
stprintf
streamable
streamjsonrpc
STRINGIZE
stringtable
@@ -1962,6 +1966,7 @@ visualeffects
vkey
vmovl
VMs
vnd
vorrq
VOS
vpaddlq

.github/agents/FixIssue.agent.md (new file)

@@ -0,0 +1,92 @@
---
description: 'Implements fixes for GitHub issues based on implementation plans'
name: 'FixIssue'
tools: ['read', 'edit', 'search', 'execute', 'agent', 'usages', 'problems', 'changes', 'testFailure', 'github/*', 'github.vscode-pull-request-github/*']
argument-hint: 'GitHub issue number (e.g., #12345)'
infer: true
---
# FixIssue Agent
You are an **IMPLEMENTATION AGENT** specialized in executing implementation plans to fix GitHub issues.
## Identity & Expertise
- Expert at translating plans into working code
- Deep knowledge of PowerToys codebase patterns and conventions
- Skilled at writing tests, handling edge cases, and validating builds
- You follow plans precisely while handling ambiguity gracefully
## Goal
For the given **issue_number**, execute the implementation plan and produce:
1. Working code changes applied directly to the repository
2. `Generated Files/issueFix/{{issue_number}}/pr-description.md` — PR-ready description
3. `Generated Files/issueFix/{{issue_number}}/manual-steps.md` — Only if human action needed
## Core Directive
**Follow the implementation plan in `Generated Files/issueReview/{{issue_number}}/implementation-plan.md` as the single source of truth.**
If the plan doesn't exist, invoke PlanIssue agent first via `runSubagent`.
## Working Principles
- **Plan First**: Read and understand the entire implementation plan before coding
- **Validate Always**: For each change: Edit → Build → Verify → Commit. Never proceed if build fails.
- **Atomic Commits**: Each commit must be self-contained, buildable, and meaningful
- **Ask, Don't Guess**: When uncertain, insert `// TODO(Human input needed): <question>` and document in manual-steps.md
## Strategy
**Core Loop** — For every unit of work:
1. **Edit**: Make focused changes to implement one logical piece
2. **Build**: Run `tools\build\build.cmd` and check for exit code 0
3. **Verify**: Use `problems` tool for lint/compile errors; run relevant tests
4. **Commit**: Only after build passes — use `.github/prompts/create-commit-title.prompt.md`
Never skip steps. Never commit broken code. Never proceed if build fails.
**Feature-by-Feature E2E**: For big scenarios with multiple features, complete each feature end-to-end before moving to the next:
- Settings UI → Functionality → Logging → Tests (for Feature 1)
- Then repeat for Feature 2
- Benefits: Each feature is self-contained, testable, easier to review, can ship incrementally
**Large Changes** (3+ files or cross-module):
- Use `tools\build\New-WorktreeFromBranch.ps1` for isolated worktrees
- Create separate branches per feature (e.g., `issue/{{issue_number}}-export`, `issue/{{issue_number}}-import`)
- Merge feature branches back after each is validated
**Recovery**: If implementation goes wrong:
- Create a checkpoint branch before risky changes
- On failure: branch from last known-good state, cherry-pick working changes, abandon broken branch
- For complex changes, consider multiple smaller PRs
## Guidelines
**DO**:
- Follow the plan exactly
- Validate build before every commit — **NEVER commit broken code**
- Use `.github/prompts/create-commit-title.prompt.md` for commit messages
- Add comprehensive tests for changed behavior
- Use worktrees for large changes (3+ files or cross-module)
- Document deviations from plan
**DON'T**:
- Implement everything in a single massive commit
- Continue after a failed build without fixing
- Make drive-by refactors outside issue scope
- Skip tests for behavioral changes
- Add noisy logs in hot paths
- Break IPC/JSON contracts without updating both sides
- Introduce dependencies without documenting in NOTICE.md
## References
- [Build Guidelines](../../tools/build/BUILD-GUIDELINES.md) — Build commands and validation
- [Coding Style](../../doc/devdocs/development/style.md) — Formatting and conventions
- [AGENTS.md](../../AGENTS.md) — Full contributor guide
## Parameter
- **issue_number**: Extract from `#123`, `issue 123`, or plain number. If missing, ask user.

.github/agents/PlanIssue.agent.md (new file)

@@ -0,0 +1,65 @@
---
description: 'Analyzes GitHub issues to produce overview and implementation plans'
name: 'PlanIssue'
tools: ['execute', 'read', 'edit', 'search', 'web', 'github/*', 'agent', 'github-artifacts/*', 'todo']
argument-hint: 'GitHub issue number (e.g., #12345)'
handoffs:
- label: Start Implementation
agent: FixIssue
prompt: 'Fix issue #{{issue_number}} using the implementation plan'
- label: Open Plan in Editor
agent: agent
prompt: 'Open Generated Files/issueReview/{{issue_number}}/overview.md and implementation-plan.md'
showContinueOn: false
send: true
infer: true
---
# PlanIssue Agent
You are a **PLANNING AGENT** specialized in analyzing GitHub issues and producing comprehensive planning documentation.
## Identity & Expertise
- Expert at issue triage, priority scoring, and technical analysis
- Deep knowledge of PowerToys architecture and codebase patterns
- Skilled at breaking down problems into actionable implementation steps
- You research thoroughly before planning, gathering 80% confidence before drafting
## Goal
For the given **issue_number**, produce the following deliverables:
1. `Generated Files/issueReview/{{issue_number}}/overview.md` — Issue analysis with scoring
2. `Generated Files/issueReview/{{issue_number}}/implementation-plan.md` — Technical implementation plan
3. `Generated Files/issueReview/{{issue_number}}/logs/**` — Logs of your root-cause diagnosis, research steps, and reasoning
The first two files are the core interaction with the end user; if you cannot produce them, you fail the task. Each time, check whether these files already exist or have been modified by the end user, without assuming you know their contents.
## Core Directive
**Follow the template in `.github/prompts/review-issue.prompt.md` exactly.** Read it first, then apply every section as specified.
- Fetch issue details: reactions, comments, linked PRs, images, logs
- Search related code and similar past fixes
- Ask clarifying questions when ambiguous
- Identify subject matter experts via git history
<stopping_rules>
You are a PLANNING agent, NOT an implementation agent.
STOP if you catch yourself:
- Writing code or editing source files outside `Generated Files/issueReview/`
- Making assumptions without researching
- Skipping the scoring/assessment phase
Plans describe what the USER or FixIssue agent will execute later.
</stopping_rules>
## References
- [Review Issue Prompt](../prompts/review-issue.prompt.md) — Template for plan structure
- [Architecture Overview](../../doc/devdocs/core/architecture.md) — System design context
- [AGENTS.md](../../AGENTS.md) — Full contributor guide
## Parameter
- **issue_number**: Extract from `#123`, `issue 123`, or plain number. If missing, ask user.


@@ -0,0 +1,261 @@
---
description: 'Guidelines for creating high-quality Agent Skills for GitHub Copilot'
applyTo: '**/.github/skills/**/SKILL.md, **/.claude/skills/**/SKILL.md'
---
# Agent Skills File Guidelines
Instructions for creating effective and portable Agent Skills that enhance GitHub Copilot with specialized capabilities, workflows, and bundled resources.
## What Are Agent Skills?
Agent Skills are self-contained folders with instructions and bundled resources that teach AI agents specialized capabilities. Unlike custom instructions (which define coding standards), skills enable task-specific workflows that can include scripts, examples, templates, and reference data.
Key characteristics:
- **Portable**: Works across VS Code, Copilot CLI, and Copilot coding agent
- **Progressive loading**: Only loaded when relevant to the user's request
- **Resource-bundled**: Can include scripts, templates, examples alongside instructions
- **On-demand**: Activated automatically based on prompt relevance
## Directory Structure
Skills are stored in specific locations:
| Location | Scope | Recommendation |
|----------|-------|----------------|
| `.github/skills/<skill-name>/` | Project/repository | Recommended for project skills |
| `.claude/skills/<skill-name>/` | Project/repository | Legacy, for backward compatibility |
| `~/.github/skills/<skill-name>/` | Personal (user-wide) | Recommended for personal skills |
| `~/.claude/skills/<skill-name>/` | Personal (user-wide) | Legacy, for backward compatibility |
Each skill **must** have its own subdirectory containing at minimum a `SKILL.md` file.
## Required SKILL.md Format
### Frontmatter (Required)
```yaml
---
name: webapp-testing
description: Toolkit for testing local web applications using Playwright. Use when asked to verify frontend functionality, debug UI behavior, capture browser screenshots, check for visual regressions, or view browser console logs. Supports Chrome, Firefox, and WebKit browsers.
license: Complete terms in LICENSE.txt
---
```
| Field | Required | Constraints |
|-------|----------|-------------|
| `name` | Yes | Lowercase, hyphens for spaces, max 64 characters (e.g., `webapp-testing`) |
| `description` | Yes | Clear description of capabilities AND use cases, max 1024 characters |
| `license` | No | Reference to LICENSE.txt (e.g., `Complete terms in LICENSE.txt`) or SPDX identifier |
### Description Best Practices
**CRITICAL**: The `description` field is the PRIMARY mechanism for automatic skill discovery. Copilot reads ONLY the `name` and `description` to decide whether to load a skill. If your description is vague, the skill will never be activated.
**What to include in description:**
1. **WHAT** the skill does (capabilities)
2. **WHEN** to use it (specific triggers, scenarios, file types, or user requests)
3. **Keywords** that users might mention in their prompts
**Good description:**
```yaml
description: Toolkit for testing local web applications using Playwright. Use when asked to verify frontend functionality, debug UI behavior, capture browser screenshots, check for visual regressions, or view browser console logs. Supports Chrome, Firefox, and WebKit browsers.
```
**Poor description:**
```yaml
description: Web testing helpers
```
The poor description fails because:
- No specific triggers (when should Copilot load this?)
- No keywords (what user prompts would match?)
- No capabilities (what can it actually do?)
### Body Content
The body contains detailed instructions that Copilot loads AFTER the skill is activated. Recommended sections:
| Section | Purpose |
|---------|---------|
| `# Title` | Brief overview of what this skill enables |
| `## When to Use This Skill` | List of scenarios (reinforces description triggers) |
| `## Prerequisites` | Required tools, dependencies, environment setup |
| `## Step-by-Step Workflows` | Numbered steps for common tasks |
| `## Troubleshooting` | Common issues and solutions table |
| `## References` | Links to bundled docs or external resources |
## Bundling Resources
Skills can include additional files that Copilot accesses on-demand:
### Supported Resource Types
| Folder | Purpose | Loaded into Context? | Example Files |
|--------|---------|---------------------|---------------|
| `scripts/` | Executable automation that performs specific operations | When executed | `helper.py`, `validate.sh`, `build.ts` |
| `references/` | Documentation the AI agent reads to inform decisions | Yes, when referenced | `api_reference.md`, `schema.md`, `workflow_guide.md` |
| `assets/` | **Static files used AS-IS** in output (not modified by the AI agent) | No | `logo.png`, `brand-template.pptx`, `custom-font.ttf` |
| `templates/` | **Starter code/scaffolds that the AI agent MODIFIES** and builds upon | Yes, when referenced | `viewer.html` (insert algorithm), `hello-world/` (extend) |
### Directory Structure Example
```
.github/skills/my-skill/
├── SKILL.md # Required: Main instructions
├── LICENSE.txt # Recommended: License terms (Apache 2.0 typical)
├── scripts/ # Optional: Executable automation
│ ├── helper.py # Python script
│ └── helper.ps1 # PowerShell script
├── references/ # Optional: Documentation loaded into context
│ ├── api_reference.md
│ ├── step1-setup.md # Detailed workflow (>3 steps)
│ └── step2-deployment.md
├── assets/ # Optional: Static files used AS-IS in output
│ ├── baseline.png # Reference image for comparison
│ └── report-template.html
└── templates/ # Optional: Starter code the AI agent modifies
├── scaffold.py # Code scaffold the AI agent customizes
└── config.template # Config template the AI agent fills in
```
> **LICENSE.txt**: When creating a skill, download the Apache 2.0 license text from https://www.apache.org/licenses/LICENSE-2.0.txt and save as `LICENSE.txt`. Update the copyright year and owner in the appendix section.
### Assets vs Templates: Key Distinction
**Assets** are static resources **consumed unchanged** in the output:
- A `logo.png` that gets embedded into a generated document
- A `report-template.html` copied as output format
- A `custom-font.ttf` applied to text rendering
**Templates** are starter code/scaffolds that **the AI agent actively modifies**:
- A `scaffold.py` where the AI agent inserts logic
- A `config.template` where the AI agent fills in values based on user requirements
- A `hello-world/` project directory that the AI agent extends with new features
**Rule of thumb**: If the AI agent reads and builds upon the file content → `templates/`. If the file is used as-is in output → `assets/`.
### Referencing Resources in SKILL.md
Use relative paths to reference files within the skill directory:
```markdown
## Available Scripts
Run the [helper script](./scripts/helper.py) to automate common tasks.
See [API reference](./references/api_reference.md) for detailed documentation.
Use the [scaffold](./templates/scaffold.py) as a starting point.
```
## Progressive Loading Architecture
Skills use three-level loading for efficiency:
| Level | What Loads | When |
|-------|------------|------|
| 1. Discovery | `name` and `description` only | Always (lightweight metadata) |
| 2. Instructions | Full `SKILL.md` body | When request matches description |
| 3. Resources | Scripts, examples, docs | Only when Copilot references them |
This means:
- Install many skills without consuming context
- Only relevant content loads per task
- Resources don't load until explicitly needed
## Content Guidelines
### Writing Style
- Use imperative mood: "Run", "Create", "Configure" (not "You should run")
- Be specific and actionable
- Include exact commands with parameters
- Show expected outputs where helpful
- Keep sections focused and scannable
### Script Requirements
When including scripts, prefer cross-platform languages:
| Language | Use Case |
|----------|----------|
| Python | Complex automation, data processing |
| pwsh | PowerShell Core scripting |
| Node.js | JavaScript-based tooling |
| Bash/Shell | Simple automation tasks |
Best practices (illustrated in the sketch after this list):
- Include help/usage documentation (`--help` flag)
- Handle errors gracefully with clear messages
- Avoid storing credentials or secrets
- Use relative paths where possible
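As a rough illustration of these practices, a bundled Node.js helper (a hypothetical `scripts/validate-links.ts`, not part of this PR) might handle `--help` and fail with a clear message:

```typescript
#!/usr/bin/env node
// Hypothetical scripts/validate-links.ts: checks that files referenced in SKILL.md exist.
import { existsSync, readFileSync } from 'node:fs';
import { dirname, resolve } from 'node:path';

function printHelp(): void {
  console.log('Usage: validate-links <path-to-SKILL.md>');
  console.log('Verifies that relative links in SKILL.md point to existing files.');
}

const args = process.argv.slice(2);
if (args.length === 0 || args.includes('--help')) {
  printHelp();
  process.exit(args.includes('--help') ? 0 : 1);
}

const skillFile = resolve(args[0]);
try {
  const body = readFileSync(skillFile, 'utf8');
  // Match markdown links with relative targets, e.g. [helper](./scripts/helper.py).
  const broken: string[] = [];
  for (const match of body.matchAll(/\]\((\.\/[^)]+)\)/g)) {
    const target = resolve(dirname(skillFile), match[1]);
    if (!existsSync(target)) {
      broken.push(match[1]);
    }
  }
  if (broken.length > 0) {
    console.error(`Broken links: ${broken.join(', ')}`);
    process.exit(1);
  }
  console.log('All relative links resolve.');
} catch (err) {
  console.error(`Could not read ${skillFile}: ${(err as Error).message}`);
  process.exit(1);
}
```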
### When to Bundle Scripts
Include scripts in your skill when:
- The same code would be rewritten repeatedly by the agent
- Deterministic reliability is critical (e.g., file manipulation, API calls)
- Complex logic benefits from being pre-tested rather than generated each time
- The operation has a self-contained purpose that can evolve independently
- Testability matters — scripts can be unit tested and validated
- Predictable behavior is preferred over dynamic generation
Scripts enable evolution: even simple operations benefit from being implemented as scripts when they may grow in complexity, need consistent behavior across invocations, or require future extensibility.
### Security Considerations
- Scripts rely on existing credential helpers (no credential storage)
- Include `--force` flags only for destructive operations
- Warn users before irreversible actions
- Document any network operations or external calls
## Common Patterns
### Parameter Table Pattern
Document parameters clearly:
```markdown
| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| `--input` | Yes | - | Input file or URL to process |
| `--action` | Yes | - | Action to perform |
| `--verbose` | No | `false` | Enable verbose output |
```
## Validation Checklist
Before publishing a skill:
- [ ] `SKILL.md` has valid frontmatter with `name` and `description`
- [ ] `name` is lowercase with hyphens, ≤64 characters
- [ ] `description` clearly states **WHAT** it does, **WHEN** to use it, and relevant **KEYWORDS**
- [ ] Body includes when to use, prerequisites, and step-by-step workflows
- [ ] SKILL.md body kept under 500 lines (split large content into `references/` folder)
- [ ] Large workflows (>5 steps) split into `references/` folder with clear links from SKILL.md
- [ ] Scripts include help documentation and error handling
- [ ] Relative paths used for all resource references
- [ ] No hardcoded credentials or secrets
## Workflow Execution Pattern
When executing multi-step workflows, create a TODO list where each step references the relevant documentation:
```markdown
## TODO
- [ ] Step 1: Configure environment - see [workflow-setup.md](./references/workflow-setup.md#environment)
- [ ] Step 2: Build project - see [workflow-setup.md](./references/workflow-setup.md#build)
- [ ] Step 3: Deploy to staging - see [workflow-deployment.md](./references/workflow-deployment.md#staging)
- [ ] Step 4: Run validation - see [workflow-deployment.md](./references/workflow-deployment.md#validation)
- [ ] Step 5: Deploy to production - see [workflow-deployment.md](./references/workflow-deployment.md#production)
```
This ensures traceability and allows resuming workflows if interrupted.
## Related Resources
- [Agent Skills Specification](https://agentskills.io/)
- [VS Code Agent Skills Documentation](https://code.visualstudio.com/docs/copilot/customization/agent-skills)
- [Reference Skills Repository](https://github.com/anthropics/skills)
- [Awesome Copilot Skills](https://github.com/github/awesome-copilot/blob/main/docs/README.skills.md)


@@ -0,0 +1,228 @@
---
description: 'Instructions for building Model Context Protocol (MCP) servers using the TypeScript SDK'
applyTo: '**/*.ts, **/*.js, **/package.json'
---
# TypeScript MCP Server Development
## Instructions
- Use the **@modelcontextprotocol/sdk** npm package: `npm install @modelcontextprotocol/sdk`
- Import from specific paths: `@modelcontextprotocol/sdk/server/mcp.js`, `@modelcontextprotocol/sdk/server/stdio.js`, etc.
- Use `McpServer` class for high-level server implementation with automatic protocol handling
- Use `Server` class for low-level control with manual request handlers
- Use **zod** for input/output schema validation: `npm install zod@3`
- Always provide `title` field for tools, resources, and prompts for better UI display
- Use `registerTool()`, `registerResource()`, and `registerPrompt()` methods (recommended over older APIs)
- Define schemas using zod: `{ inputSchema: { param: z.string() }, outputSchema: { result: z.string() } }`
- Return both `content` (for display) and `structuredContent` (for structured data) from tools
- For HTTP servers, use `StreamableHTTPServerTransport` with Express or similar frameworks
- For local integrations, use `StdioServerTransport` for stdio-based communication
- Create new transport instances per request to prevent request ID collisions (stateless mode)
- Use session management with `sessionIdGenerator` for stateful servers
- Enable DNS rebinding protection for local servers: `enableDnsRebindingProtection: true`
- Configure CORS headers and expose `Mcp-Session-Id` for browser-based clients
- Use `ResourceTemplate` for dynamic resources with URI parameters: `new ResourceTemplate('resource://{param}', { list: undefined })`
- Support completions for better UX using `completable()` wrapper from `@modelcontextprotocol/sdk/server/completable.js`
- Implement sampling with `server.server.createMessage()` to request LLM completions from clients
- Use `server.server.elicitInput()` to request additional user input during tool execution
- Enable notification debouncing for bulk updates: `debouncedNotificationMethods: ['notifications/tools/list_changed']`
- Dynamic updates: call `.enable()`, `.disable()`, `.update()`, or `.remove()` on registered items to emit `listChanged` notifications (see the Dynamic Tool Updates pattern below)
- Use `getDisplayName()` from `@modelcontextprotocol/sdk/shared/metadataUtils.js` for UI display names
- Test servers with MCP Inspector: `npx @modelcontextprotocol/inspector`
## Best Practices
- Keep tool implementations focused on single responsibilities
- Provide clear, descriptive titles and descriptions for LLM understanding
- Use proper TypeScript types for all parameters and return values
- Implement comprehensive error handling with try-catch blocks
- Return `isError: true` in tool results for error conditions
- Use async/await for all asynchronous operations
- Close database connections and clean up resources properly
- Validate input parameters before processing
- Use structured logging for debugging without polluting stdout/stderr
- Consider security implications when exposing file system or network access
- Implement proper resource cleanup on transport close events
- Use environment variables for configuration (ports, API keys, etc.)
- Document tool capabilities and limitations clearly
- Test with multiple clients to ensure compatibility
## Common Patterns
### Basic Server Setup (HTTP)
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import express from 'express';
const server = new McpServer({
name: 'my-server',
version: '1.0.0'
});
const app = express();
app.use(express.json());
app.post('/mcp', async (req, res) => {
const transport = new StreamableHTTPServerTransport({
sessionIdGenerator: undefined,
enableJsonResponse: true
});
res.on('close', () => transport.close());
await server.connect(transport);
await transport.handleRequest(req, res, req.body);
});
app.listen(3000);
```
### Basic Server Setup (stdio)
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
const server = new McpServer({
name: 'my-server',
version: '1.0.0'
});
// ... register tools, resources, prompts ...
const transport = new StdioServerTransport();
await server.connect(transport);
```
### Simple Tool
```typescript
import { z } from 'zod';
server.registerTool(
'calculate',
{
title: 'Calculator',
description: 'Perform basic calculations',
inputSchema: { a: z.number(), b: z.number(), op: z.enum(['+', '-', '*', '/']) },
outputSchema: { result: z.number() }
},
async ({ a, b, op }) => {
const result = op === '+' ? a + b : op === '-' ? a - b :
op === '*' ? a * b : a / b;
const output = { result };
return {
content: [{ type: 'text', text: JSON.stringify(output) }],
structuredContent: output
};
}
);
```
### Dynamic Resource
```typescript
import { ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js';
server.registerResource(
'user',
new ResourceTemplate('users://{userId}', { list: undefined }),
{
title: 'User Profile',
description: 'Fetch user profile data'
},
async (uri, { userId }) => ({
contents: [{
uri: uri.href,
text: `User ${userId} data here`
}]
})
);
```
### Tool with Sampling
```typescript
server.registerTool(
'summarize',
{
title: 'Text Summarizer',
description: 'Summarize text using LLM',
inputSchema: { text: z.string() },
outputSchema: { summary: z.string() }
},
async ({ text }) => {
const response = await server.server.createMessage({
messages: [{
role: 'user',
content: { type: 'text', text: `Summarize: ${text}` }
}],
maxTokens: 500
});
const summary = response.content.type === 'text' ?
response.content.text : 'Unable to summarize';
const output = { summary };
return {
content: [{ type: 'text', text: JSON.stringify(output) }],
structuredContent: output
};
}
);
```
### Prompt with Completion
```typescript
import { completable } from '@modelcontextprotocol/sdk/server/completable.js';
server.registerPrompt(
'review',
{
title: 'Code Review',
description: 'Review code with specific focus',
argsSchema: {
language: completable(z.string(), value =>
['typescript', 'python', 'javascript', 'java']
.filter(l => l.startsWith(value))
),
code: z.string()
}
},
({ language, code }) => ({
messages: [{
role: 'user',
content: {
type: 'text',
text: `Review this ${language} code:\n\n${code}`
}
}]
})
);
```
### Error Handling
```typescript
server.registerTool(
'risky-operation',
{
title: 'Risky Operation',
description: 'An operation that might fail',
inputSchema: { input: z.string() },
outputSchema: { result: z.string() }
},
async ({ input }) => {
try {
const result = await performRiskyOperation(input);
const output = { result };
return {
content: [{ type: 'text', text: JSON.stringify(output) }],
structuredContent: output
};
} catch (err: unknown) {
const error = err as Error;
return {
content: [{ type: 'text', text: `Error: ${error.message}` }],
isError: true
};
}
}
);
```
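### Dynamic Tool Updates
A minimal sketch of the dynamic-update pattern mentioned in the instructions above, assuming the handle returned by `registerTool()` exposes `enable()` and `disable()` as described. The tool name and logic here are hypothetical:
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';

const server = new McpServer({ name: 'my-server', version: '1.0.0' });

// Keep the handle returned by registerTool() so the tool can be toggled later.
const adminTool = server.registerTool(
  'purge-cache',
  {
    title: 'Purge Cache',
    description: 'Clear the application cache (admin only)',
    inputSchema: { scope: z.string() },
    outputSchema: { purged: z.boolean() }
  },
  async ({ scope }) => {
    const output = { purged: true };
    return {
      content: [{ type: 'text', text: `Purged cache for ${scope}` }],
      structuredContent: output
    };
  }
);

// Start disabled; clients receive notifications/tools/list_changed on each toggle.
adminTool.disable();

// Later, e.g. after authentication succeeds, re-enable the tool.
function onAdminLogin() {
  adminTool.enable();
}
```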


@@ -15,8 +15,14 @@ For **#{{issue_number}}** produce:
Figure out required inputs {{issue_number}} from the invocation context; if anything is missing, ask for the value or note it as a gap.
# CONTEXT (brief)
Ground evidence using `gh issue view {{issue_number}} --json number,title,body,author,createdAt,updatedAt,state,labels,milestone,reactions,comments,linkedPullRequests`, and download images to better understand the issue context.
Locate source code in the current workspace; feel free to use `rg`/`git grep`. Link related issues and PRs.
Ground evidence using `gh issue view {{issue_number}} --json number,title,body,author,createdAt,updatedAt,state,labels,milestone,reactions,comments,linkedPullRequests`, download images via MCP `github_issue_images` to better understand the issue context. Finally, use MCP `github_issue_attachments` to download logs with parameter `extractFolder` as `Generated Files/issueReview/{{issue_number}}/logs`, and analyze the downloaded logs if available to identify relevant issues. Locate the source code in the current workspace (use `rg`/`git grep` as needed). Link related issues and PRs.
## When to call MCP tools
If the following MCP "github-artifacts" tools are available in the environment, use them:
- `github_issue_images`: use when the issue/PR likely contains screenshots or other visual evidence (UI bugs, glitches, design problems).
- `github_issue_attachments`: use when the issue/PR mentions attached ZIPs (PowerToysReport_*.zip, logs.zip, debug.zip) or asks to analyze logs/diagnostics. Always provide `extractFolder` as `Generated Files/issueReview/{{issue_number}}/logs`
If these tools are not available (not listed by the runtime), start the MCP server "github-artifacts" first.
# OVERVIEW.MD
## Summary


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to the Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2026 Microsoft Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,132 @@
---
name: release-note-generation
description: Toolkit for generating PowerToys release notes from GitHub milestone PRs or commit ranges. Use when asked to create release notes, summarize milestone PRs, generate changelog, prepare release documentation, request Copilot reviews for PRs, update README for a new release, manage PR milestones, or collect PRs between commits/tags. Supports PR collection by milestone or commit range, milestone assignment, grouping by label, summarization with external contributor attribution, and README version bumping.
license: Complete terms in LICENSE.txt
---
# Release Note Generation Skill
Generate professional release notes for PowerToys milestones by collecting merged PRs, requesting Copilot code reviews, grouping by label, and producing user-facing summaries.
## Output Directory
All generated artifacts are placed under `Generated Files/ReleaseNotes/` at the repository root (gitignored).
```
Generated Files/ReleaseNotes/
├── milestone_prs.json # Raw PR data from GitHub
├── sorted_prs.csv # Sorted PR list with Copilot summaries
├── prs_with_milestone.csv # Milestone assignment tracking
├── grouped_csv/ # PRs grouped by label (one CSV per label)
├── grouped_md/ # Generated markdown summaries per label
└── v{VERSION}-release-notes.md # Final consolidated release notes
```
## When to Use This Skill
- Generate release notes for a milestone
- Summarize PRs merged in a release
- Request Copilot reviews for milestone PRs
- Assign milestones to PRs missing them
- Collect PRs between two commits/tags
- Update README.md for a new version
## Prerequisites
- GitHub CLI (`gh`) installed and authenticated
- MCP Server: github-mcp-server installed
- GitHub Copilot code review enabled for the org/repo
## Required Variables
⚠️ **Before starting**, confirm `{{ReleaseVersion}}` with the user. If not provided, **ASK**: "What release version are we generating notes for? (e.g., 0.98)"
| Variable | Description | Example |
|----------|-------------|---------|
| `{{ReleaseVersion}}` | Target release version | `0.98` |
## Workflow Overview
```
┌────────────────────────────────┐
│ 1.1 Collect PRs (stable range) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 1.2 Assign Milestones │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 2.1-2.4 Label PRs (auto+human) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 3.1 Request Reviews (Copilot) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 3.2 Refresh PR data │
│ (CopilotSummary) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 3.3 Group by label │
│ (grouped_csv) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 4.1 Summarize (grouped_md) │
└────────────────────────────────┘
┌────────────────────────────────┐
│ 4.2 Final notes (v{VERSION}.md) │
└────────────────────────────────┘
```
| Step | Action | Details |
|------|--------|---------|
| 1.1 | Collect PRs | From previous release tag on `stable` branch → `sorted_prs.csv` |
| 1.2 | Assign Milestones | Ensure all PRs have correct milestone |
| 2.1-2.4 | Label PRs | Auto-suggest labels, then have a human label low-confidence PRs |
| 3.1-3.3 | Reviews & Grouping | Request Copilot reviews → refresh → group by label |
| 4.1-4.2 | Summaries & Final | Generate grouped summaries, then consolidate |
## Detailed workflow docs
Do not read all steps at once—only read the step you are executing.
- [Step 1: Collection & Milestones](./references/step1-collection.md)
- [Step 2: Labeling PRs](./references/step2-labeling.md)
- [Step 3: Reviews & Grouping](./references/step3-review-grouping.md)
- [Step 4: Summarization](./references/step4-summarization.md)
## Available Scripts
| Script | Purpose |
|--------|---------|
| [dump-prs-since-commit.ps1](./scripts/dump-prs-since-commit.ps1) | Fetch PRs between commits/tags |
| [group-prs-by-label.ps1](./scripts/group-prs-by-label.ps1) | Group PRs into CSVs |
| [collect-or-apply-milestones.ps1](./scripts/collect-or-apply-milestones.ps1) | Assign milestones |
| [diff_prs.ps1](./scripts/diff_prs.ps1) | Incremental PR diff |
## References
- [Sample Output](./references/SampleOutput.md) - Example summary formatting
- [Detailed Instructions](./references/Instruction.md) - Legacy full documentation
## Conventions
- **Terminal usage**: Disabled by default; only run scripts when user explicitly requests
- **Batch generation**: Generate ALL grouped_md files in one pass, then human reviews
- **PR order**: Preserve order from `sorted_prs.csv` in all outputs
- **Label filtering**: Keeps `Product-*`, `Area-*`, `GitHub*`, `*Plugin`, `Issue-*`
## Troubleshooting
| Issue | Solution |
|-------|----------|
| `gh` command not found | Install GitHub CLI and add to PATH |
| No PRs returned | Verify milestone title matches exactly |
| Empty CopilotSummary | Request Copilot reviews first, then re-run dump |
| Many unlabeled PRs | Return to labeling step before grouping |

View File

@@ -0,0 +1,9 @@
- Added mouse button actions so you can choose what left, right, or middle click does. Thanks [@PesBandi](https://github.com/PesBandi)!
- Aligned window styling with current Windows theme for a cleaner look. Thanks [@sadirano](https://github.com/sadirano)!
- Ensured screen readers are notified when the selected item in the list changes for better accessibility.
- Implemented a configurable UI test pipeline that can use pre-built official releases instead of building everything from scratch, cutting test runs that previously took 2+ hours.
- Fixed Alt+Left Arrow navigation not working when search box contains text. Thanks [@jiripolasek](https://github.com/jiripolasek)!

View File

@@ -0,0 +1,143 @@
# Step 1: Collection and Milestones
## 1.0 To-do
- 1.0.1 Generate MemberList.md (REQUIRED)
- 1.1 Collect PRs
- 1.2 Assign Milestones (REQUIRED)
## Required Variables
⚠️ **Before starting**, confirm these values with the user:
| Variable | Description | Example |
|----------|-------------|---------|
| `{{ReleaseVersion}}` | Target release version | `0.97` |
| `{{PreviousReleaseTag}}` | Previous release tag from releases page | `v0.96.1` |
**If user hasn't specified `{{ReleaseVersion}}`, ASK:** "What release version are we generating notes for? (e.g., 0.97)"
**`{{PreviousReleaseTag}}` is derived from the releases page, not user input.** Use the latest published release tag (top of the page). You will use its tag name and tag commit SHA in Step 1.
---
## 1.0.1 Generate MemberList.md (REQUIRED)
Create `Generated Files/ReleaseNotes/MemberList.md` from the **PowerToys core team** section in [COMMUNITY.md](../../../COMMUNITY.md).
Rules:
- One GitHub username per line, **no** `@` prefix.
- Use the usernames exactly as listed in the core team section.
- Do not include former team members or other sections.
Example (format only):
```
example-user
another-user
```
---
## 1.1 Collect PRs
### 1.1.1 Get the previous release commit
1. Open the [PowerToys releases page](https://github.com/microsoft/PowerToys/releases/)
2. Find the latest release (e.g., v0.96.1, which should be at the top)
3. Set `{{PreviousReleaseTag}}` to that tag name (e.g., `v0.96.1`)
4. Copy the full tag commit SHA as `{{SHALastRelease}}`
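If you prefer the command line, a minimal sketch for resolving the tag's commit SHA locally (assuming the tag `v0.96.1` and that you are inside a PowerToys clone):
```powershell
# Fetch tags, then print the commit SHA the previous release tag points to ({{SHALastRelease}})
git fetch origin --tags
git rev-list -n 1 v0.96.1
```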
**If the release SHA is not in your branch history:** Use the helper script to find an equivalent commit on the target branch by matching the commit title:
```powershell
pwsh ./.github/skills/release-note-generation/scripts/find-commit-by-title.ps1 `
-Commit '{{SHALastRelease}}' `
-Branch 'stable'
```
### 1.1.2 Run collection script against stable branch
```powershell
# Collect PRs from previous release to current HEAD of stable branch
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{SHALastRelease}}' `
-Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
**Parameters:**
- `-StartCommit` - Previous release tag or commit SHA (exclusive)
- `-Branch` - Always use `stable` branch, not `main` (script uses `origin/stable` as the end ref)
- `-EndCommit` - Optional override if you need a custom end ref
- `-OutputDir` - Output directory for generated files
**Reliability check:** If the script reports “No commits found”, the stable branch has not moved since the last release. In that case, either:
- Confirm this is expected and stop (no new release notes), or
- Re-run against `main` to gather pending changes for the next release cycle.
The script detects both merge commits (`Merge pull request #12345`) and squash commits (`Feature (#12345)`).
**Output** (in `Generated Files/ReleaseNotes/`):
- `milestone_prs.json` - raw PR data
- `sorted_prs.csv` - sorted PR list with columns: Id, Title, Labels, Author, Url, Body, CopilotSummary, NeedThanks
---
## 1.2 Assign Milestones (REQUIRED)
**Before generating release notes**, ensure all collected PRs have the correct milestone assigned.
⚠️ **CRITICAL:** Do NOT proceed to labeling until all PRs have milestones assigned.
### 1.2.1 Check current milestone status (dry run)
```powershell
# Dry run first to see what would be changed:
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
-DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
-ApplyMissing -WhatIf
```
This queries GitHub for each PR's current milestone and shows which PRs would be updated.
### 1.2.2 Apply milestones to PRs missing them
```powershell
# Apply for real:
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
-DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
-ApplyMissing
```
**Script Behavior:**
- Queries each PR's current milestone from GitHub
- PRs that already have a milestone are **skipped** (not overwritten)
- PRs missing a milestone get the default milestone applied
- Outputs `prs_with_milestone.csv` with (Id, Milestone) columns
- Produces summary: `Updated=X Skipped=Y Failed=Z`
**Validation:** After assignment, all PRs in `prs_with_milestone.csv` should have the target milestone.
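A quick validation sketch (assuming the CSV columns described above):
```powershell
# List any PRs whose milestone still differs from the target; empty output means all good
Import-Csv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' |
    Where-Object { $_.Milestone -ne 'PowerToys {{ReleaseVersion}}' }
```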
---
## Additional Commands
### Collect milestones only (no changes to GitHub)
```powershell
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv'
```
### Local assignment only (fill blanks in CSV, no GitHub changes)
```powershell
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
-DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
-LocalAssign
```

View File

@@ -0,0 +1,131 @@
# Step 2: Label Unlabeled PRs
## 2.0 To-do
- 2.1 Identify unlabeled PRs (Agent Mode)
- 2.2 Suggest labels (Agent Mode)
- 2.3 Human label low-confidence PRs
- 2.4 Recheck labels, delete Unlabeled.csv, and re-collect
**Before grouping**, ensure all PRs have appropriate labels for categorization.
⚠️ **CRITICAL:** Do NOT proceed to grouping until all PRs have labels assigned. PRs without labels will end up in `Unlabeled.csv` and won't appear in the correct release note sections.
## 2.1 Identify unlabeled PRs (Agent Mode)
Read `sorted_prs.csv` and identify PRs with empty or missing `Labels` column.
For each unlabeled PR, analyze:
- **Title** - Often contains module name or feature
- **Body** - PR description with context
- **CopilotSummary** - AI-generated summary of changes
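A minimal PowerShell sketch for this check (assuming the `sorted_prs.csv` schema produced in Step 1):
```powershell
# List PRs with an empty or missing Labels column
Import-Csv 'Generated Files/ReleaseNotes/sorted_prs.csv' |
    Where-Object { [string]::IsNullOrWhiteSpace($_.Labels) } |
    Select-Object Id, Title, Url
```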
## 2.2 Suggest labels (Agent Mode)
For each unlabeled PR, suggest an appropriate label based on the content analysis.
**Output:** Create `Generated Files/ReleaseNotes/prs_label_review.md` with the following format:
```markdown
# PR Label Review
Generated: YYYY-MM-DD HH:mm:ss
## Summary
- Total unlabeled PRs: X
- High confidence: X
- Medium confidence: X
- Low confidence: X
---
## PRs Needing Review (sorted by confidence, low first)
| PR | Title | Suggested Label | Confidence | Reason |
|----|-------|-----------------|------------|--------|
| [#12347](url) | Some generic fix | ??? | Low | Unclear from content |
| [#12346](url) | Update dependencies | `Area-Build` | Medium | Body mentions NuGet packages |
```
Sort by confidence (low first) so human reviews uncertain ones first.
After writing `prs_label_review.md`, **generate `prs_to_label.csv`, apply labels, and re-run collection** so the CSV/labels stay in sync:
```powershell
# Generate CSV from suggestions (agent)
# Apply labels
pwsh ./.github/skills/release-note-generation/scripts/apply-labels.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv'
# Refresh collection
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
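For reference, `prs_to_label.csv` uses the two-column format expected by `apply-labels.ps1` (see that script's notes); the PR numbers below are illustrative only:
```
Id,Label
12345,Product-Advanced Paste
12346,Product-Settings
```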
## 2.3 Human label low-confidence PRs
Ask the human to label **low-confidence** PRs directly (in GitHub). Skip any they decide not to label.
## 2.4 Recheck labels, delete Unlabeled.csv, and re-collect
Recheck that all PRs now have labels. Delete `Unlabeled.csv` (if present), then re-run the collection script to update `sorted_prs.csv`:
```powershell
# Remove stale unlabeled output if it exists
Remove-Item 'Generated Files/ReleaseNotes/Unlabeled.csv' -ErrorAction SilentlyContinue
```
```powershell
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
---
## Common Label Mappings
| Keywords/Patterns | Suggested Label |
| ----------------- | --------------- |
| Advanced Paste, AP, clipboard, paste | `Product-Advanced Paste` |
| CmdPal, Command Palette, cmdpal | `Product-Command Palette` |
| FancyZones, zones, layout | `Product-FancyZones` |
| ZoomIt, zoom, screen annotation | `Product-ZoomIt` |
| Settings, settings-ui, Quick Access, flyout | `Product-Settings` |
| Installer, setup, MSI, MSIX, WiX | `Area-Setup/Install` |
| Build, pipeline, CI/CD, msbuild | `Area-Build` |
| Test, unit test, UI test, fuzz | `Area-Tests` |
| Localization, loc, translation, resw | `Area-Localization` |
| Foundry, AI, LLM | `Product-Advanced Paste` (AI features) |
| Mouse Without Borders, MWB | `Product-Mouse Without Borders` |
| PowerRename, rename, regex | `Product-PowerRename` |
| Peek, preview, file preview | `Product-Peek` |
| Image Resizer, resize | `Product-Image Resizer` |
| LightSwitch, theme, dark mode | `Product-LightSwitch` |
| Quick Accent, accent, diacritics | `Product-Quick Accent` |
| Awake, keep awake, caffeine | `Product-Awake` |
| ColorPicker, color picker, eyedropper | `Product-ColorPicker` |
| Hosts, hosts file | `Product-Hosts` |
| Keyboard Manager, remap | `Product-Keyboard Manager` |
| Mouse Highlighter | `Product-Mouse Highlighter` |
| Mouse Jump | `Product-Mouse Jump` |
| Find My Mouse | `Product-Find My Mouse` |
| Mouse Pointer Crosshairs | `Product-Mouse Pointer Crosshairs` |
| Shortcut Guide | `Product-Shortcut Guide` |
| Text Extractor, OCR, PowerOCR | `Product-Text Extractor` |
| Workspaces | `Product-Workspaces` |
| File Locksmith | `Product-File Locksmith` |
| Crop And Lock | `Product-CropAndLock` |
| Environment Variables | `Product-Environment Variables` |
| New+ | `Product-New+` |
## Label Filtering Rules
The grouping script keeps labels matching these patterns:
- `Product-*`
- `Area-*`
- `GitHub*`
- `*Plugin`
- `Issue-*`
Other labels are ignored for grouping purposes. See the filtering sketch below.
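A sketch of that filter, mirroring the wildcard logic in `dump-prs-since-commit.ps1` but applied here to a plain array of label names (`$labels` is assumed):
```powershell
# Keep only labels relevant for release-note grouping
$kept = $labels | Where-Object {
    $_ -like 'Product-*' -or $_ -like 'Area-*' -or $_ -like 'GitHub*' -or
    $_ -like '*Plugin'   -or $_ -like 'Issue-*'
}
```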

View File

@@ -0,0 +1,37 @@
# Step 3: Copilot Reviews and Grouping
## 3.0 To-do
- 3.1 Request Copilot Reviews (Agent Mode)
- 3.2 Refresh PR Data
- 3.3 Group PRs by Label
## 3.1 Request Copilot Reviews (Agent Mode)
Use MCP tools to request Copilot reviews for all PRs in `Generated Files/ReleaseNotes/sorted_prs.csv`:
- Use `mcp_github_request_copilot_review` for each PR ID
- Do NOT generate or run scripts for this step
---
## 3.2 Refresh PR Data
Re-run the collection script to capture Copilot review summaries into the `CopilotSummary` column:
```powershell
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
---
## 3.3 Group PRs by Label
```powershell
pwsh ./.github/skills/release-note-generation/scripts/group-prs-by-label.ps1 -CsvPath 'Generated Files/ReleaseNotes/sorted_prs.csv' -OutDir 'Generated Files/ReleaseNotes/grouped_csv'
```
Creates `Generated Files/ReleaseNotes/grouped_csv/` with one CSV per label combination.
**Validation:** The `Unlabeled.csv` file should be minimal (ideally empty). If many PRs remain unlabeled, return to Step 2 (see [step2-labeling.md](./step2-labeling.md)).
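A quick check sketch (path assumed from the command above):
```powershell
# Count PRs that ended up unlabeled; ideally this prints nothing or 0
$unlabeled = 'Generated Files/ReleaseNotes/grouped_csv/Unlabeled.csv'
if (Test-Path $unlabeled) { (Import-Csv $unlabeled | Measure-Object).Count }
```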

View File

@@ -0,0 +1,88 @@
# Step 4: Summaries and Final Release Notes
## 4.0 To-do
- 4.1 Generate Summary Markdown (Agent Mode)
- 4.2 Produce Final Release Notes File
## 4.1 Generate Summary Markdown (Agent Mode)
For each CSV in `Generated Files/ReleaseNotes/grouped_csv/`, create a markdown file in `Generated Files/ReleaseNotes/grouped_md/`.
⚠️ **IMPORTANT:** Generate **ALL** markdown files first. Do NOT pause between files or ask for feedback during generation. Complete the entire batch, then human reviews afterwards.
### Structure per file
**1. Bullet list** - one concise, user-facing line per PR:
- Use the “Verb-ed + Scenario + Impact” sentence structure so readers think, “That's exactly what I need” or “Yes, that's an awesome fix.” The "impact" can be end-user focused (written to convey user excitement) or technical (performance/stability) when user-facing impact is minimal.
- If nothing special on impact or unclear impact, mark as needing human summary
- Source from Title, Body, and CopilotSummary (prefer CopilotSummary when available)
- If the column `NeedThanks` in CSV is `True`, append: `Thanks [@Author](https://github.com/Author)!`
- Do NOT include PR numbers in bullet lines
- Do NOT mention “security” or “privacy” issues; those details are not public and could be leveraged by attackers targeting earlier versions. Instead, describe the user-facing scenario, usage, or impact.
- If confidence < 70%, write: `Human Summary Needed: <PR full link>`
**See [SampleOutput.md](./SampleOutput.md) for examples of well-written bullet summaries.**
**2. Three-column table** (same PR order):
- Column 1: Concise summary (same as bullet)
- Column 2: PR link `[#ID](URL)`
- Column 3: Confidence level (High/Medium/Low)
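Putting the two parts together, a single entry might look like this (hypothetical PR, values illustrative only):
```markdown
- Fixed the color preview not refreshing when the hex value is edited manually. Thanks [@contributor](https://github.com/contributor)!

| Summary | PR | Confidence |
|---------|----|------------|
| Fixed the color preview not refreshing when the hex value is edited manually | [#12345](https://github.com/microsoft/PowerToys/pull/12345) | High |
```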
### Review Process (AFTER all files generated)
- Human reviews each `grouped_md/*.md` file and requests rewrites as needed
- Human may say "rewrite Product-X" or "combine these bullets"—apply changes to that specific file
- Do NOT interrupt generation to ask for feedback
---
## 4.2 Produce Final Release Notes File
Once all `grouped_md/*.md` files are reviewed and approved, consolidate into a single release notes file.
**Output:** `Generated Files/ReleaseNotes/v{{ReleaseVersion}}-release-notes.md`
### Structure
**1. Highlights section** (top):
- 8-12 bullets covering the most user-visible features and impactful fixes
- Pattern: `**Module**: brief description`
- Avoid internal refactors; focus on what users will notice
**2. Module sections** (alphabetical order):
- One section per product (Advanced Paste, Awake, Command Palette, etc.)
- Migrate bullet summaries from the approved `grouped_md/Product-*.md` files
- One 'Development' section for the remaining summaries from the approved `grouped_md/Area-*.md` files
- Re-review end to end, group related improvements within each section, and move the most important items to the top of each section.
Some items in the Development section may overlap with a specific module and should be moved to that module's section when more applicable.
### Example Final Structure
```markdown
# PowerToys v{{ReleaseVersion}} Release Notes
## Highlights
- **Command Palette**: Added theme customization and drag-and-drop support
- **Advanced Paste**: Image input for AI, color detection in clipboard history
- **FancyZones**: New CLI tool for command-line layout management
...
---
## Advanced Paste
- Wrapped paste option lists in a single ScrollViewer
- Added image input handling for AI-powered transformations
...
## Awake
- Fixed timed mode expiration. Thanks [@daverayment](https://github.com/daverayment)!
...
---
## Development
...
```

View File

@@ -0,0 +1,90 @@
<#
.SYNOPSIS
Apply labels to PRs from a CSV file.
.DESCRIPTION
Reads a CSV with Id and Label columns and applies the specified label to each PR via GitHub CLI.
Supports dry-run mode to preview changes before applying.
.PARAMETER InputCsv
CSV file with Id and Label columns. Default: prs_to_label.csv
.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys
.PARAMETER WhatIf
Dry run - show what would be applied without making changes.
.EXAMPLE
pwsh ./apply-labels.ps1 -InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv'
.EXAMPLE
pwsh ./apply-labels.ps1 -InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv' -WhatIf
.NOTES
Requires: gh CLI authenticated with repo write access.
Input CSV format:
Id,Label
12345,Product-Advanced Paste
12346,Product-Settings
#>
[CmdletBinding()] param(
[Parameter(Mandatory=$false)][string]$InputCsv = 'prs_to_label.csv',
[Parameter(Mandatory=$false)][string]$Repo = 'microsoft/PowerToys',
[switch]$WhatIf
)
$ErrorActionPreference = 'Stop'
function Write-Info($m){ Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m){ Write-Host "[warn] $m" -ForegroundColor Yellow }
function Write-Err($m){ Write-Host "[error] $m" -ForegroundColor Red }
function Write-OK($m){ Write-Host "[ok] $m" -ForegroundColor Green }
if (-not (Get-Command gh -ErrorAction SilentlyContinue)) { Write-Err "GitHub CLI 'gh' not found in PATH"; exit 1 }
if (-not (Test-Path -LiteralPath $InputCsv)) { Write-Err "Input CSV not found: $InputCsv"; exit 1 }
$rows = Import-Csv -LiteralPath $InputCsv
if (-not $rows) { Write-Info "No rows in CSV."; exit 0 }
$firstCols = $rows[0].PSObject.Properties.Name
if (-not ($firstCols -contains 'Id' -and $firstCols -contains 'Label')) {
Write-Err "CSV must contain 'Id' and 'Label' columns"; exit 1
}
Write-Info "Processing $($rows.Count) label assignments..."
if ($WhatIf) { Write-Warn "DRY RUN - no changes will be made" }
$applied = 0
$skipped = 0
$failed = 0
foreach ($row in $rows) {
$id = $row.Id
$label = $row.Label
if ([string]::IsNullOrWhiteSpace($id) -or [string]::IsNullOrWhiteSpace($label)) {
Write-Warn "Skipping row with empty Id or Label"
$skipped++
continue
}
if ($WhatIf) {
Write-Info "Would apply label '$label' to PR #$id"
$applied++
continue
}
try {
gh pr edit $id --repo $Repo --add-label $label 2>&1 | Out-Null
# gh failures do not throw on their own; surface them so the catch block records them
if ($LASTEXITCODE -ne 0) { throw "gh pr edit exited with code $LASTEXITCODE for PR #$id" }
Write-OK "Applied '$label' to PR #$id"
$applied++
} catch {
Write-Warn "Failed to apply label to PR #${id}: $_"
$failed++
}
}
Write-Info ""
Write-Info "Summary: Applied=$applied Skipped=$skipped Failed=$failed"

View File

@@ -0,0 +1,172 @@
<#
.SYNOPSIS
Collect existing PR milestones or (optionally) assign/apply a milestone to missing PRs in one script.
.DESCRIPTION
This unified script merges the behaviors of the previous add-milestone-column (collector) and
set-milestones-missing (remote updater) scripts.
Modes (controlled by switches):
1. Collect (default) - For each PR Id in the input CSV, queries GitHub for the current milestone and
outputs a two-column CSV (Id,Milestone), leaving blanks where none are set.
2. LocalAssign - Same as Collect, but rows that end up blank are assigned the value of -DefaultMilestone
in memory (does NOT touch GitHub). Useful for quickly preparing a fully populated CSV.
3. ApplyMissing - After determining which PRs have no milestone, calls the GitHub API to set their milestone
to -DefaultMilestone. Requires the milestone to already exist (open). Network + write.
You can combine LocalAssign and ApplyMissing: the remote update uses the existing live state; LocalAssign only
affects the output CSV/pipeline objects.
.PARAMETER InputCsv
Source CSV with at least an Id column. Default: sorted_prs.csv
.PARAMETER OutputCsv
Destination CSV for collected (and optionally locally assigned) milestones. Default: prs_with_milestone.csv
.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys
.PARAMETER DefaultMilestone
Milestone title used when -LocalAssign or -ApplyMissing is specified. Default: 'PowerToys 0.97'
.PARAMETER Offline
Skip ALL GitHub lookups / updates. Implies Collect-only with all Milestone cells blank (unless LocalAssign).
.PARAMETER LocalAssign
Populate empty Milestone cells in the output with -DefaultMilestone (does not modify GitHub).
.PARAMETER ApplyMissing
For PRs which currently have no milestone (live on GitHub), set them to -DefaultMilestone via Issues API.
.PARAMETER WhatIf
Dry run for ApplyMissing: show intended remote changes without performing PATCH requests.
.EXAMPLE
# Collect only
pwsh ./collect-or-apply-milestones.ps1
.EXAMPLE
# Collect and fill blanks locally in the output only
pwsh ./collect-or-apply-milestones.ps1 -LocalAssign
.EXAMPLE
# Collect and remotely apply milestone to missing PRs
pwsh ./collect-or-apply-milestones.ps1 -ApplyMissing
.EXAMPLE
# Dry run remote application
pwsh ./collect-or-apply-milestones.ps1 -ApplyMissing -WhatIf
.EXAMPLE
# Offline local assignment
pwsh ./collect-or-apply-milestones.ps1 -Offline -LocalAssign -DefaultMilestone 'PowerToys 0.96'
.NOTES
Requires the gh CLI unless -Offline is specified; -ApplyMissing cannot be combined with -Offline.
Remote apply path queries milestones to resolve numeric ID.
#>
[CmdletBinding()] param(
[Parameter(Mandatory=$false)][string]$InputCsv = 'sorted_prs.csv',
[Parameter(Mandatory=$false)][string]$OutputCsv = 'prs_with_milestone.csv',
[Parameter(Mandatory=$false)][string]$Repo = 'microsoft/PowerToys',
[Parameter(Mandatory=$false)][string]$DefaultMilestone = 'PowerToys 0.97',
[switch]$Offline,
[switch]$LocalAssign,
[switch]$ApplyMissing,
[switch]$WhatIf
)
$ErrorActionPreference = 'Stop'
function Write-Info($m){ Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m){ Write-Host "[warn] $m" -ForegroundColor Yellow }
function Write-Err($m){ Write-Host "[error] $m" -ForegroundColor Red }
if (-not (Test-Path -LiteralPath $InputCsv)) { Write-Err "Input CSV not found: $InputCsv"; exit 1 }
$rows = Import-Csv -LiteralPath $InputCsv
if (-not $rows) { Write-Warn "Input CSV has no rows."; @() | Export-Csv -NoTypeInformation -LiteralPath $OutputCsv; exit 0 }
if (-not ($rows[0].PSObject.Properties.Name -contains 'Id')) { Write-Err "Input CSV missing 'Id' column."; exit 1 }
$needGh = -not $Offline  # gh is required for any online lookup; -ApplyMissing with -Offline is rejected below
if ($needGh -and -not (Get-Command gh -ErrorAction SilentlyContinue)) { Write-Err "GitHub CLI 'gh' not found. Use -Offline or install gh."; exit 1 }
# Step 1: Collect current milestone titles
$milestoneCache = @{}
$collected = New-Object System.Collections.Generic.List[object]
$idx = 0
foreach ($row in $rows) {
$idx++
$id = $row.Id
if (-not $id) { Write-Warn "Row $idx missing Id; skipping"; continue }
$ms = ''
if (-not $Offline) {
if ($milestoneCache.ContainsKey($id)) { $ms = $milestoneCache[$id] }
else {
try {
$json = gh pr view $id --repo $Repo --json milestone 2>$null | ConvertFrom-Json
if ($json -and $json.milestone -and $json.milestone.title) { $ms = $json.milestone.title }
} catch {
Write-Warn "Failed to fetch PR #$id milestone: $_"
}
$milestoneCache[$id] = $ms
}
}
$collected.Add([PSCustomObject]@{ Id = $id; Milestone = $ms }) | Out-Null
}
# Step 2: Remote apply (if requested)
$applySummary = @()
if ($ApplyMissing) {
if ($Offline) { Write-Err "Cannot use -ApplyMissing with -Offline."; exit 1 }
Write-Info "Resolving milestone id for '$DefaultMilestone' ..."
$milestonesRaw = gh api repos/$Repo/milestones --paginate --jq '.[] | {number,title,state}'
$msObj = $milestonesRaw | ConvertFrom-Json | Where-Object { $_.title -eq $DefaultMilestone -and $_.state -eq 'open' } | Select-Object -First 1
if (-not $msObj) { Write-Err "Milestone '$DefaultMilestone' not found/open."; exit 1 }
$msNumber = $msObj.number
$targets = $collected | Where-Object { [string]::IsNullOrWhiteSpace($_.Milestone) }
Write-Info ("ApplyMissing: {0} PR(s) without milestone." -f $targets.Count)
foreach ($t in $targets) {
$id = $t.Id
try {
# Verify still missing live
$current = gh pr view $id --repo $Repo --json milestone --jq '.milestone.title // ""'
if ($current) {
$applySummary += [PSCustomObject]@{ Id=$id; Action='Skip (already has)'; Milestone=$current; Status='OK' }
continue
}
if ($WhatIf) {
$applySummary += [PSCustomObject]@{ Id=$id; Action='Would set'; Milestone=$DefaultMilestone; Status='DRY RUN' }
continue
}
gh api -X PATCH -H 'Accept: application/vnd.github+json' repos/$Repo/issues/$id -f milestone=$msNumber | Out-Null
$applySummary += [PSCustomObject]@{ Id=$id; Action='Set'; Milestone=$DefaultMilestone; Status='OK' }
# Reflect the change in the collected object so the exported CSV matches the live state
$t.Milestone = $DefaultMilestone
} catch {
$errText = $_ | Out-String
$applySummary += [PSCustomObject]@{ Id=$id; Action='Failed'; Milestone=$DefaultMilestone; Status=$errText.Trim() }
Write-Warn ("Failed to set milestone for PR #{0}: {1}" -f $id, ($errText.Trim()))
}
}
}
# Step 3: Local assignment (purely for output) AFTER remote so remote actual result not overwritten accidentally
if ($LocalAssign) {
foreach ($item in $collected) {
if ([string]::IsNullOrWhiteSpace($item.Milestone)) { $item.Milestone = $DefaultMilestone }
}
}
# Step 4: Export CSV
$collected | Export-Csv -LiteralPath $OutputCsv -NoTypeInformation -Encoding UTF8
Write-Info ("Wrote collected CSV -> {0}" -f (Resolve-Path -LiteralPath $OutputCsv))
# Step 5: Summaries
if ($ApplyMissing) {
$updated = ($applySummary | Where-Object { $_.Action -eq 'Set' }).Count
$skipped = ($applySummary | Where-Object { $_.Action -like 'Skip*' }).Count
$failed = ($applySummary | Where-Object { $_.Action -eq 'Failed' }).Count
Write-Info ("ApplyMissing summary: Updated={0} Skipped={1} Failed={2}" -f $updated, $skipped, $failed)
}
# Emit objects (final collected set)
return $collected

View File

@@ -0,0 +1,100 @@
<#
.SYNOPSIS
Produce an incremental PR CSV containing rows present in a newer full export but absent from a baseline export.
.DESCRIPTION
Compares two previously generated sorted PR CSV files (same schema). Any row whose key column value
(defaults to 'Number') does not exist in the baseline file is emitted to a new incremental CSV, preserving
the original column order. If no new rows are found, an empty CSV (with headers when determinable) is written.
.PARAMETER BaseCsv
Path to the baseline (earlier) PR CSV.
.PARAMETER AllCsv
Path to the newer full PR CSV containing superset (or equal set) of rows.
.PARAMETER OutCsv
Path to write the incremental CSV containing only new rows.
.PARAMETER Key
Column name used as unique identifier (defaults to 'Number'). Must exist in both CSVs.
.EXAMPLE
pwsh ./diff_prs.ps1 -BaseCsv sorted_prs_prev.csv -AllCsv sorted_prs.csv -OutCsv sorted_prs_incremental.csv
.NOTES
Requires: PowerShell 7+, both CSVs with identical column schemas.
Exit code 0 on success (even if zero incremental rows). Throws on missing files.
#>
[CmdletBinding()] param(
[Parameter(Mandatory=$false)][string]$BaseCsv = "./sorted_prs_93_round1.csv",
[Parameter(Mandatory=$false)][string]$AllCsv = "./sorted_prs.csv",
[Parameter(Mandatory=$false)][string]$OutCsv = "./sorted_prs_93_incremental.csv",
[Parameter(Mandatory=$false)][string]$Key = "Number"
)
Set-StrictMode -Version Latest
$ErrorActionPreference = 'Stop'
function Write-Info($m) { Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m) { Write-Host "[warn] $m" -ForegroundColor Yellow }
if (-not (Test-Path -LiteralPath $BaseCsv)) { throw "Base CSV not found: $BaseCsv" }
if (-not (Test-Path -LiteralPath $AllCsv)) { throw "All CSV not found: $AllCsv" }
# Load CSVs
$baseRows = Import-Csv -LiteralPath $BaseCsv
$allRows = Import-Csv -LiteralPath $AllCsv
if (-not $baseRows) { Write-Warn "Base CSV has no rows." }
if (-not $allRows) { Write-Warn "All CSV has no rows." }
# Validate key presence
if ($baseRows -and -not ($baseRows[0].PSObject.Properties.Name -contains $Key)) { throw "Key column '$Key' not found in base CSV." }
if ($allRows -and -not ($allRows[0].PSObject.Properties.Name -contains $Key)) { throw "Key column '$Key' not found in all CSV." }
# Build a set of existing keys from base
$set = New-Object 'System.Collections.Generic.HashSet[string]'
foreach ($row in $baseRows) {
$val = [string]($row.$Key)
if ($null -ne $val) { [void]$set.Add($val) }
}
# Filter rows in AllCsv whose key is not in base (these are the new / incremental rows)
$incremental = @()
foreach ($row in $allRows) {
$val = [string]($row.$Key)
if (-not $set.Contains($val)) { $incremental += $row }
}
# Preserve column order from the All CSV
$columns = @()
if ($allRows.Count -gt 0) {
$columns = $allRows[0].PSObject.Properties.Name
}
try {
if ($incremental.Count -gt 0) {
if ($columns.Count -gt 0) {
$incremental | Select-Object -Property $columns | Export-Csv -LiteralPath $OutCsv -NoTypeInformation -Encoding UTF8
} else {
$incremental | Export-Csv -LiteralPath $OutCsv -NoTypeInformation -Encoding UTF8
}
} else {
# Write an empty CSV with headers if we know them (facilitates downstream tooling expecting header row)
if ($columns.Count -gt 0) {
$obj = [PSCustomObject]@{}
foreach ($c in $columns) { $obj | Add-Member -NotePropertyName $c -NotePropertyValue $null }
$obj | Select-Object -Property $columns | Export-Csv -LiteralPath $OutCsv -NoTypeInformation -Encoding UTF8
} else {
'' | Out-File -LiteralPath $OutCsv -Encoding UTF8
}
}
Write-Info ("Incremental rows: {0}" -f $incremental.Count)
Write-Info ("Output: {0}" -f (Resolve-Path -LiteralPath $OutCsv))
}
catch {
Write-Host "[error] Failed writing output CSV: $_" -ForegroundColor Red
exit 1
}

View File

@@ -0,0 +1,344 @@
<#
.SYNOPSIS
Export merged PR metadata between two commits (exclusive start, inclusive end) to JSON and CSV.
.DESCRIPTION
Identifies merge/squash commits reachable from EndCommit but not StartCommit, extracts PR numbers,
queries GitHub for metadata plus (optionally) Copilot review/comment summaries, filters labels, then
emits a JSON artifact and a sorted CSV (first label alphabetical).
.PARAMETER StartCommit
Exclusive starting commit (SHA, tag, or ref). Commits AFTER this one are considered.
.PARAMETER EndCommit
Inclusive ending commit (SHA, tag, or ref). If not provided, uses origin/<Branch> when Branch is set; otherwise uses HEAD.
.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys.
.PARAMETER OutputCsv
Destination CSV path. Default: sorted_prs.csv.
.PARAMETER OutputJson
Destination JSON path containing raw PR objects. Default: milestone_prs.json.
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -Branch stable
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -EndCommit 89ef7654 -OutputCsv delta.csv
.NOTES
Requires: git, gh (authenticated). No Set-StrictMode to keep parity with existing release scripts.
#>
[CmdletBinding()]
param(
[Parameter(Mandatory = $true)][string]$StartCommit, # exclusive start (commits AFTER this one)
[string]$EndCommit,
[string]$Branch,
[string]$Repo = "microsoft/PowerToys",
[string]$OutputDir,
[string]$OutputCsv = "sorted_prs.csv",
[string]$OutputJson = "milestone_prs.json"
)
<#
.SYNOPSIS
Dump merged PR information whose merge commits are reachable from EndCommit but not from StartCommit.
.DESCRIPTION
Uses git rev-list to compute commits in the (StartCommit, EndCommit] range, extracts PR numbers from merge commit messages,
queries GitHub (gh CLI) for details, then outputs a CSV.
PR merge commit messages in PowerToys generally contain patterns like:
Merge pull request #12345 from ...
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -Branch stable
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -EndCommit 89ef7654 -OutputCsv changes.csv
.NOTES
Requires: gh CLI authenticated; git available in working directory (must be inside PowerToys repo clone).
CopilotSummary behavior:
- Attempts to locate the latest GitHub Copilot authored review (preferred).
- If no review is found, lazily fetches PR comments to look for a Copilot-authored comment.
- Normalizes whitespace and strips newlines. Empty when no Copilot activity detected.
- Run with -Verbose to see whether the summary came from a 'review' or 'comment' source.
#>
function Write-Info($msg) { Write-Host $msg -ForegroundColor Cyan }
function Write-Warn($msg) { Write-Host $msg -ForegroundColor Yellow }
function Write-Err($msg) { Write-Host $msg -ForegroundColor Red }
function Write-DebugMsg($msg) { if ($PSBoundParameters.ContainsKey('Verbose') -or $VerbosePreference -eq 'Continue') { Write-Host "[VERBOSE] $msg" -ForegroundColor DarkGray } }
# Load member list from Generated Files/ReleaseNotes/MemberList.md (internal team - no thanks needed)
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$repoRoot = Resolve-Path (Join-Path $scriptDir "..\..\..\..")
$defaultMemberListPath = Join-Path $repoRoot "Generated Files\ReleaseNotes\MemberList.md"
$memberListPath = $defaultMemberListPath
if ($OutputDir) {
$memberListFromOutputDir = Join-Path $OutputDir "MemberList.md"
if (Test-Path $memberListFromOutputDir) {
$memberListPath = $memberListFromOutputDir
}
}
$memberList = @()
if (Test-Path $memberListPath) {
$memberListContent = Get-Content $memberListPath -Raw
# Extract usernames - skip markdown code fence lines, get all non-empty lines
$memberList = ($memberListContent -split "`n") | Where-Object { $_ -notmatch '^\s*```' -and $_.Trim() -ne '' } | ForEach-Object { $_.Trim() }
if (-not $memberList -or $memberList.Count -eq 0) {
Write-Err "MemberList.md is empty at $memberListPath"
exit 1
}
Write-DebugMsg "Loaded $($memberList.Count) members from MemberList.md"
} else {
Write-Err "MemberList.md not found at $memberListPath"
exit 1
}
# Validate we are in a git repo
#if (-not (Test-Path .git)) {
# Write-Err "Current directory does not appear to be the root of a git repository."
# exit 1
#}
# Resolve output directory (if specified)
if ($OutputDir) {
if (-not (Test-Path $OutputDir)) {
New-Item -ItemType Directory -Path $OutputDir -Force | Out-Null
}
if (-not [System.IO.Path]::IsPathRooted($OutputCsv)) {
$OutputCsv = Join-Path $OutputDir $OutputCsv
}
if (-not [System.IO.Path]::IsPathRooted($OutputJson)) {
$OutputJson = Join-Path $OutputDir $OutputJson
}
}
# Resolve commits
try {
if ($Branch) {
Write-Info "Fetching latest '$Branch' from origin (with tags)..."
git fetch origin $Branch --tags | Out-Null
if ($LASTEXITCODE -ne 0) { throw "git fetch origin $Branch --tags failed" }
}
$startSha = (git rev-parse --verify $StartCommit) 2>$null
if (-not $startSha) { throw "StartCommit '$StartCommit' not found" }
if ($Branch) {
$branchRef = $Branch
$branchSha = (git rev-parse --verify $branchRef) 2>$null
if (-not $branchSha) {
$branchRef = "origin/$Branch"
$branchSha = (git rev-parse --verify $branchRef) 2>$null
}
if (-not $branchSha) { throw "Branch '$Branch' not found" }
if (-not $PSBoundParameters.ContainsKey('EndCommit') -or [string]::IsNullOrWhiteSpace($EndCommit)) {
$EndCommit = $branchRef
}
}
if (-not $PSBoundParameters.ContainsKey('EndCommit') -or [string]::IsNullOrWhiteSpace($EndCommit)) {
$EndCommit = "HEAD"
}
$endSha = (git rev-parse --verify $EndCommit) 2>$null
if (-not $endSha) { throw "EndCommit '$EndCommit' not found" }
}
catch {
Write-Err $_
exit 1
}
Write-Info "Collecting commits between $startSha..$endSha (excluding start, including end)."
# Get list of commits reachable from end but not from start.
# IMPORTANT: In PowerShell, the .. operator creates a numeric/char range. If $startSha and $endSha look like hex strings,
# `$startSha..$endSha` must be passed as a single string argument.
$rangeArg = "$startSha..$endSha"
$commitList = git rev-list $rangeArg
# Normalize list (filter out empty strings)
$normalizedCommits = $commitList | Where-Object { $_ -and $_.Trim() -ne '' }
$commitCount = ($normalizedCommits | Measure-Object).Count
Write-DebugMsg ("Raw commitList length (including blanks): {0}" -f (($commitList | Measure-Object).Count))
Write-DebugMsg ("Normalized commit count: {0}" -f $commitCount)
if ($commitCount -eq 0) {
Write-Warn "No commits found in specified range ($startSha..$endSha)."; exit 0
}
Write-DebugMsg ("First 5 commits: {0}" -f (($normalizedCommits | Select-Object -First 5) -join ', '))
<#
Extract PR numbers from commits.
Patterns handled:
1. Merge commits: 'Merge pull request #12345 from ...'
2. Squash commits: 'Some feature change (#12345)' (GitHub default squash format)
We collect both. If a commit matches both (unlikely), it's deduped later.
#>
# Extract PR numbers from merge or squash commits
$mergeCommits = @()
foreach ($c in $normalizedCommits) {
$subject = git show -s --format=%s $c
$matched = $false
# Pattern 1: Traditional merge commit
if ($subject -match 'Merge pull request #([0-9]+) ') {
$prNumber = [int]$matches[1]
$mergeCommits += [PSCustomObject]@{ Sha = $c; Pr = $prNumber; Subject = $subject; Pattern = 'merge' }
Write-DebugMsg "Matched merge PR #$prNumber in commit $c"
$matched = $true
}
# Pattern 2: Squash merge subject line with ' (#12345)' at end (allow possible whitespace before paren)
if ($subject -match '\(#([0-9]+)\)$') {
$prNumber2 = [int]$matches[1]
# Avoid duplicate object if pattern 1 already captured same number for same commit
if (-not ($mergeCommits | Where-Object { $_.Sha -eq $c -and $_.Pr -eq $prNumber2 })) {
$mergeCommits += [PSCustomObject]@{ Sha = $c; Pr = $prNumber2; Subject = $subject; Pattern = 'squash' }
Write-DebugMsg "Matched squash PR #$prNumber2 in commit $c"
}
$matched = $true
}
if (-not $matched) {
Write-DebugMsg "No PR pattern in commit $c : $subject"
}
}
if (-not $mergeCommits -or $mergeCommits.Count -eq 0) {
Write-Warn "No merge commits with PR numbers found in range."; exit 0
}
# Deduplicate PR numbers (in case of revert or merges across branches)
$prNumbers = $mergeCommits | Select-Object -ExpandProperty Pr -Unique | Sort-Object
Write-Info ("Found {0} unique PRs: {1}" -f $prNumbers.Count, ($prNumbers -join ', '))
Write-DebugMsg ("Total merge commits examined: {0}" -f $mergeCommits.Count)
# Query GitHub for each PR
$prDetails = @()
function Get-CopilotSummaryFromPrJson {
param(
[Parameter(Mandatory=$true)]$PrJson,
[switch]$VerboseMode
)
# Returns a hashtable with Summary and Source keys.
$result = @{ Summary = ""; Source = "" }
if (-not $PrJson) { return $result }
$candidateAuthors = @(
'github-copilot[bot]', 'github-copilot', 'copilot'
)
# 1. Reviews (preferred): pick the LONGEST valid Copilot body, not the most recent
$reviews = $PrJson.reviews
if ($reviews) {
$copilotReviews = $reviews | Where-Object {
($candidateAuthors -contains $_.author.login -or $_.author.login -like '*copilot*') -and $_.body -and $_.body.Trim() -ne ''
}
if ($copilotReviews) {
$longest = $copilotReviews | Sort-Object { $_.body.Length } -Descending | Select-Object -First 1
if ($longest) {
$body = $longest.body
$norm = ($body -replace "`r", '') -replace "`n", ' '
$norm = $norm -replace '\s+', ' '
$result.Summary = $norm
$result.Source = 'review'
if ($VerboseMode) { Write-DebugMsg "Selected Copilot review length=$($body.Length) (longest)." }
return $result
}
}
}
# 2. Comments fallback (some repos surface Copilot summaries as PR comments rather than review objects)
if ($null -eq $PrJson.comments) {
try {
# Lazy fetch comments only if needed
$commentsJson = gh pr view $PrJson.number --repo $Repo --json comments 2>$null | ConvertFrom-Json
if ($commentsJson -and $commentsJson.comments) {
$PrJson | Add-Member -NotePropertyName comments -NotePropertyValue $commentsJson.comments -Force
}
} catch {
if ($VerboseMode) { Write-DebugMsg "Failed to fetch comments for PR #$($PrJson.number): $_" }
}
}
if ($PrJson.comments) {
$copilotComments = $PrJson.comments | Where-Object {
($candidateAuthors -contains $_.author.login -or $_.author.login -like '*copilot*') -and $_.body -and $_.body.Trim() -ne ''
}
if ($copilotComments) {
$longestC = $copilotComments | Sort-Object { $_.body.Length } -Descending | Select-Object -First 1
if ($longestC) {
$body = $longestC.body
$norm = ($body -replace "`r", '') -replace "`n", ' '
$norm = $norm -replace '\s+', ' '
$result.Summary = $norm
$result.Source = 'comment'
if ($VerboseMode) { Write-DebugMsg "Selected Copilot comment length=$($body.Length) (longest)." }
return $result
}
}
}
return $result
}
foreach ($pr in $prNumbers) {
Write-Info "Fetching PR #$pr ..."
try {
# Include comments only if Verbose asked; if not, we lazily pull when reviews are missing
$fields = 'number,title,labels,author,url,body,reviews'
if ($PSBoundParameters.ContainsKey('Verbose')) { $fields += ',comments' }
$json = gh pr view $pr --repo $Repo --json $fields 2>$null | ConvertFrom-Json
if ($null -eq $json) { throw "Empty response" }
$copilot = Get-CopilotSummaryFromPrJson -PrJson $json -VerboseMode:($PSBoundParameters.ContainsKey('Verbose'))
if ($copilot.Summary -and $copilot.Source -and $PSBoundParameters.ContainsKey('Verbose')) {
Write-DebugMsg "Copilot summary source=$($copilot.Source) chars=$($copilot.Summary.Length)"
} elseif (-not $copilot.Summary) {
Write-DebugMsg "No Copilot summary found for PR #$pr"
}
# Filter labels
$filteredLabels = $json.labels | Where-Object {
($_.name -like "Product-*") -or
($_.name -like "Area-*") -or
($_.name -like "GitHub*") -or
($_.name -like "*Plugin") -or
($_.name -like "Issue-*")
}
$labelNames = ($filteredLabels | ForEach-Object { $_.name }) -join ", "
$bodyValue = if ($json.body) { ($json.body -replace "`r", '') -replace "`n", ' ' } else { '' }
$bodyValue = $bodyValue -replace '\s+', ' '
# Determine if author needs thanks (not in member list)
$authorLogin = $json.author.login
$needThanks = $true
if ($memberList.Count -gt 0 -and $authorLogin) {
$needThanks = -not ($memberList -contains $authorLogin)
}
$prDetails += [PSCustomObject]@{
Id = $json.number
Title = $json.title
Labels = $labelNames
Author = $authorLogin
Url = $json.url
Body = $bodyValue
CopilotSummary = $copilot.Summary
NeedThanks = $needThanks
}
}
catch {
$err = $_
Write-Warn ("Failed to fetch PR #{0}: {1}" -f $pr, $err)
}
}
if (-not $prDetails) { Write-Warn "No PR details fetched."; exit 0 }
# Sort by Labels like original script (first label alphabetical)
$sorted = $prDetails | Sort-Object { ($_.Labels -split ',')[0] }
# Output JSON raw (optional)
$sorted | ConvertTo-Json -Depth 6 | Out-File -Encoding UTF8 $OutputJson
Write-Info "Saving CSV to $OutputCsv ..."
$sorted | Export-Csv $OutputCsv -NoTypeInformation
Write-Host "✅ Done. Generated $($prDetails.Count) PR rows." -ForegroundColor Green

View File

@@ -0,0 +1,80 @@
<#
.SYNOPSIS
Find a commit on a branch that has the same subject line as a reference commit.
.DESCRIPTION
Given a commit SHA (often from a release tag) and a branch name, this script
resolves the reference commit's subject, then searches the branch history for
commits with the exact same subject line. Useful when the release tag commit
is not reachable from your current branch history.
.PARAMETER Commit
The reference commit SHA or ref (e.g., v0.96.1 or a full SHA).
.PARAMETER Branch
The branch to search (e.g., stable or main). Defaults to stable.
.PARAMETER RepoPath
Path to the local repo. Defaults to current directory.
.EXAMPLE
pwsh ./find-commit-by-title.ps1 -Commit b62f6421845f7e5c92b8186868d98f46720db442 -Branch stable
#>
[CmdletBinding()]
param(
[Parameter(Mandatory = $true)][string]$Commit,
[string]$Branch = "stable",
[string]$RepoPath = "."
)
function Write-Info($msg) { Write-Host $msg -ForegroundColor Cyan }
function Write-Err($msg) { Write-Host $msg -ForegroundColor Red }
Push-Location $RepoPath
try {
Write-Info "Fetching latest '$Branch' from origin (with tags)..."
git fetch origin $Branch --tags | Out-Null
if ($LASTEXITCODE -ne 0) { throw "git fetch origin $Branch --tags failed" }
$commitSha = (git rev-parse --verify $Commit) 2>$null
if (-not $commitSha) { throw "Commit '$Commit' not found" }
$subject = (git show -s --format=%s $commitSha) 2>$null
if (-not $subject) { throw "Unable to read subject for '$commitSha'" }
$branchRef = $Branch
$branchSha = (git rev-parse --verify $branchRef) 2>$null
if (-not $branchSha) {
$branchRef = "origin/$Branch"
$branchSha = (git rev-parse --verify $branchRef) 2>$null
}
if (-not $branchSha) { throw "Branch '$Branch' not found" }
Write-Info "Reference commit: $commitSha"
Write-Info "Reference title: $subject"
Write-Info "Searching branch: $branchRef"
$logLines = git log $branchRef --format="%H|%s" | Where-Object { $_ -match '\|' }
$results = @()
foreach ($line in $logLines) {
$parts = $line -split '\|', 2
if ($parts.Count -eq 2 -and $parts[1] -eq $subject) {
$results += [PSCustomObject]@{ Sha = $parts[0]; Title = $parts[1] }
}
}
if (-not $results -or $results.Count -eq 0) {
Write-Info "No matching commit found on $branchRef for the given title."
exit 0
}
Write-Info ("Found {0} matching commit(s):" -f $results.Count)
$results | ForEach-Object { Write-Host ("{0} {1}" -f $_.Sha, $_.Title) }
}
catch {
Write-Err $_
exit 1
}
finally {
Pop-Location
}

View File

@@ -0,0 +1,85 @@
<#
.SYNOPSIS
Group PR rows by their Labels column and emit per-label CSV files.
.DESCRIPTION
Reads a milestone PR CSV (usually produced by dump-prs-information / dump-prs-since-commit scripts),
splits rows by label list, normalizes/sorts individual labels, and writes one CSV per unique label combination.
Each output preserves the original row ordering within that subset and column order from the source.
.PARAMETER CsvPath
Input CSV containing PR rows with a 'Labels' column (comma-separated list).
.PARAMETER OutDir
Output directory to place grouped CSVs (created if missing). Default: 'grouped_csv'.
.NOTES
Label combinations are joined using ' | ' when multiple labels present. Filenames are sanitized (invalid characters,
whitespace collapsed) and truncated to <= 120 characters.
#>
param(
[string]$CsvPath = "sorted_prs.csv",
[string]$OutDir = "grouped_csv"
)
$ErrorActionPreference = 'Stop'
function Write-Info($msg) { Write-Host "[info] $msg" -ForegroundColor Cyan }
function Write-Warn($msg) { Write-Host "[warn] $msg" -ForegroundColor Yellow }
if (-not (Test-Path -LiteralPath $CsvPath)) { throw "CSV not found: $CsvPath" }
Write-Info "Reading CSV: $CsvPath"
$rows = Import-Csv -LiteralPath $CsvPath
Write-Info ("Loaded {0} rows" -f $rows.Count)
function ConvertTo-SafeFileName {
[CmdletBinding()]
param(
[Parameter(Mandatory=$true)][string]$Name
)
if ([string]::IsNullOrWhiteSpace($Name)) { return 'Unnamed' }
$s = $Name -replace '[<>:"/\\|?*]', '-' # invalid path chars
$s = $s -replace '\s+', '-' # spaces to dashes
$s = $s -replace '-{2,}', '-' # collapse dashes
$s = $s.Trim('-')
if ($s.Length -gt 120) { $s = $s.Substring(0,120).Trim('-') }
if ([string]::IsNullOrWhiteSpace($s)) { return 'Unnamed' }
return $s
}
# Build groups keyed by normalized, sorted label combinations. Preserve original CSV row order.
$groups = @{}
foreach ($row in $rows) {
$labelsRaw = $row.Labels
if ([string]::IsNullOrWhiteSpace($labelsRaw)) {
$labelParts = @('Unlabeled')
} else {
$parts = $labelsRaw -split ',' | ForEach-Object { $_.Trim() } | Where-Object { $_ }
if (-not $parts -or $parts.Count -eq 0) { $labelParts = @('Unlabeled') }
else { $labelParts = $parts | Sort-Object }
}
$key = ($labelParts -join ' | ')
if (-not $groups.ContainsKey($key)) { $groups[$key] = New-Object System.Collections.ArrayList }
[void]$groups[$key].Add($row)
}
if (-not (Test-Path -LiteralPath $OutDir)) {
Write-Info "Creating output directory: $OutDir"
New-Item -ItemType Directory -Path $OutDir | Out-Null
}
Write-Info ("Generating {0} grouped CSV file(s) into: {1}" -f $groups.Count, $OutDir)
foreach ($key in $groups.Keys) {
$labelParts = if ($key -eq 'Unlabeled') { @('Unlabeled') } else { $key -split '\s\|\s' }
$safeName = ($labelParts | ForEach-Object { ConvertTo-SafeFileName -Name $_ }) -join '-'
$filePath = Join-Path $OutDir ("$safeName.csv")
# Keep same columns and order
$groups[$key] | Export-Csv -LiteralPath $filePath -NoTypeInformation -Encoding UTF8
}
Write-Info "Done. Sample output files:"
Get-ChildItem -LiteralPath $OutDir | Select-Object -First 10 Name | Format-Table -HideTableHeaders

.vscode/mcp.json (new file)
View File

@@ -0,0 +1,13 @@
{
"servers": {
"github-artifacts": {
"command": "node",
"args": [
"tools/mcp/github-artifacts/launch.js"
],
"env": {
"GITHUB_TOKEN": "${env:GITHUB_TOKEN}"
}
}
}
}

View File

@@ -30,7 +30,11 @@ _If you want to find diagnostic data events in the source code, these two links
- [C++ events](https://github.com/search?q=repo%3Amicrosoft%2FPowerToys+ProjectTelemetryPrivacyDataTag&type=code)
### General
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -43,6 +47,18 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.GeneralSettingsChanged</td>
<td>Logs changes made to general settings within PowerToys.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.Install_Fail</td>
<td>Triggered when the PowerToys installation process encounters an error and fails to complete.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.Repair_Cancel</td>
<td>Triggered when a PowerToys repair operation is cancelled by the user before completion.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.Repair_Fail</td>
<td>Triggered when the PowerToys repair operation fails to complete successfully due to an error.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.Runner_Launch</td>
<td>Indicates when the PowerToys Runner is launched.</td>
@@ -59,6 +75,18 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.ScoobeStartedEvent</td>
<td>Triggered when SCOOBE (Secondary Out-of-box experience) starts.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.ShortcutConflictControlClickedEvent</td>
<td>Triggered when a user clicks on the Shortcut Conflict Control button in the PowerToys Settings UI Dashboard.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.ShortcutConflictDetectedEvent</td>
<td>Triggered when keyboard shortcut conflicts are detected in the PowerToys Settings UI Dashboard.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.ShortcutConflictResolvedEvent</td>
<td>Triggered when a keyboard shortcut conflict is resolved in the PowerToys Settings UI.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.TrayFlyoutActivatedEvent</td>
<td>Indicates when the tray flyout menu is activated.</td>
@@ -67,6 +95,14 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.TrayFlyoutModuleRunEvent</td>
<td>Logs when a utility from the tray flyout menu is run.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.UnInstall_Cancel</td>
<td>Triggered when the PowerToys uninstallation process is cancelled by the user before completion.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.UnInstall_Fail</td>
<td>Triggered when the PowerToys uninstallation process fails to complete successfully due to an error. </td>
</tr>
<tr>
<td>Microsoft.PowerToys.Uninstall_Success</td>
<td>Logs when PowerToys is successfully uninstalled (who would do such a thing!).</td>
@@ -82,11 +118,19 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### OOBE (Out-of-box experience)
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
</tr>
<tr>
<td>Microsoft.PowerToys.OobeModuleRunEvent</td>
<td>Triggered when a user clicks to run or launch a PowerToys module directly from the OOBE (out-of-box experience) interface.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.OobeSectionEvent</td>
<td>Occurs when OOBE is shown to the user.</td>
@@ -99,10 +143,18 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.OobeStartedEvent</td>
<td>Indicates when the out-of-box experience has been initiated.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.OobeVariantAssignmentEvent</td>
<td>This event logs A/B testing assignments for experimental features, helping track which users are in control or alternate groups for feature experiments.</td>
</tr>
</table>
### Advanced Paste
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -170,7 +222,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Always on Top
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -190,7 +246,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Awake
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -218,7 +278,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Color Picker
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -235,18 +299,14 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.ColorPicker_Settings</td>
<td>Triggered when the settings for the Color Picker are accessed or modified.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.ColorPickerCancelledEvent</td>
<td>Occurs when a color picking action is cancelled by the user.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.ColorPickerShowEvent</td>
<td>Triggered when the Color Picker UI is displayed on the screen.</td>
</tr>
</table>
### Command Not Found
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -259,10 +319,6 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.CmdNotFoundInstallEvent</td>
<td>Triggered when Command Not Found is installed.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.CmdNotFoundInstanceCreatedEvent</td>
<td>Occurs when an instance of a Command Not Found is created.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.CmdNotFoundUninstallEvent</td>
<td>Triggered when Command Not Found is uninstalled after being previously installed.</td>
@@ -271,7 +327,11 @@ _If you want to find diagnostic data events in the source code, these two links
### Command Palette
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -335,7 +395,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Crop And Lock
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -344,10 +408,26 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.CropAndLock_ActivateReparent</td>
<td>Triggered when the cropping interface is activated for reparenting the cropped content.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.CropAndLock_ActivateScreenshot</td>
<td>Triggered when the screenshot mode is activated in Crop and Lock.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.CropAndLock_ActivateThumbnail</td>
<td>Occurs when the thumbnail view for cropped content is activated.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.CropAndLock_CreateReparentWindow</td>
<td>Triggered when a reparent window is created in Crop and Lock mode.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.CropAndLock_CreateScreenshotWindow</td>
<td>Triggered when a screenshot window is created in Crop and Lock mode.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.CropAndLock_CreateThumbnailWindow</td>
<td>Triggered when a thumbnail window is created in Crop and Lock mode.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.CropAndLock_EnableCropAndLock</td>
<td>Triggered when Crop and Lock is enabled.</td>
@@ -358,8 +438,28 @@ _If you want to find diagnostic data events in the source code, these two links
</tr>
</table>
### Cursor Wrap
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
</tr>
<tr>
<td>Microsoft.PowerToys.CursorWrap_EnableCursorWrap</td>
<td>Triggered when Cursor Wrap is enabled or disabled.</td>
</tr>
</table>
### Environment Variables
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -387,7 +487,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### FancyZones
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -404,6 +508,10 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.FancyZones_EnableFancyZones</td>
<td>Occurs when FancyZones is enabled.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FancyZones_Error</td>
<td>Triggered when an error occurs within the FancyZones module. This event logs critical errors to help diagnose and troubleshoot issues with FancyZones functionality, such as failures to set up Windows hooks or other system-level operations required for window management.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FancyZones_KeyboardSnapWindowToZone</td>
<td>Triggered when a window is snapped to a zone using the keyboard.</td>
@@ -416,10 +524,6 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.FancyZones_MoveOrResizeStarted</td>
<td>Triggered when a window move or resize action is initiated.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FancyZones_MoveSizeEnd</td>
<td>Occurs when the moving or resizing of a window has ended.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FancyZones_OnKeyDown</td>
<td>Triggered when a key is pressed down while interacting with zones.</td>
@@ -432,10 +536,6 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.FancyZones_Settings</td>
<td>Triggered when FancyZones settings are accessed or modified.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FancyZones_SettingsChanged</td>
<td>Occurs when there is a change in the FancyZones settings.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FancyZones_SnapNewWindowIntoZone</td>
<td>Triggered when a new window is snapped into a zone.</td>
@@ -456,10 +556,50 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.FancyZones_CLICommand</td>
<td>Triggered when a FancyZones CLI command is executed, logging the command name and success status.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FancyZonesEditorStartEvent</td>
<td>Triggered when the FancyZones Editor application starts. This logs the initialization of the editor UI, which is used to create and configure custom zone layouts.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FancyZonesEditorStartFinishEvent</td>
<td>Triggered when the FancyZones Editor has completed loading and is ready for user interaction.</td>
</tr>
</table>
### File Locksmith
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
</tr>
<tr>
<td>Microsoft.PowerToys.FileLocksmith_EnableFileLocksmith</td>
<td>Triggered when File Locksmith is enabled.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FileLocksmith_Invoked</td>
<td>Occurs when File Locksmith is invoked.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FileLocksmith_InvokedRet</td>
<td>Triggered when File Locksmith invocation returns a result.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FileLocksmith_QueryContextMenuError</td>
<td>Occurs when there is an error querying the context menu for File Locksmith.</td>
</tr>
</table>
### FileExplorerAddOns
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -496,6 +636,10 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.MarkdownFilePreviewed</td>
<td>Triggered when a Markdown file is previewed in File Explorer.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.MarkdownFilePreviewError</td>
<td>Triggered when there is an error previewing a Markdown file in File Explorer.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.PdfFileHandlerLoaded</td>
<td>Occurs when a PDF file handler is loaded.</td>
@@ -504,6 +648,10 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.PdfFilePreviewed</td>
<td>Triggered when a PDF file is previewed in File Explorer.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.PdfFilePreviewError</td>
<td>Triggered when there is an error previewing a PDF file in File Explorer.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.PowerPreview_Enabled</td>
<td>Occurs when preview is enabled.</td>
@@ -520,6 +668,10 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.PowerPreview_TweakUISettings_InitSet__ErrorLoadingFile</td>
<td>Triggered when there is an error loading a file during Tweak UI settings initialization.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.PowerPreview_TweakUISettings_SetConfig__InvalidJSONGiven</td>
<td>Triggered when invalid JSON is provided to the Power Preview settings configuration.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.PowerPreview_TweakUISettings_SuccessfullyUpdatedSettings</td>
<td>Occurs when the Tweak UI settings for Power Preview are successfully updated.</td>
@@ -528,6 +680,10 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.QoiFilePreviewed</td>
<td>Triggered when a QOI file is previewed in File Explorer.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.QoiFilePreviewError</td>
<td>Triggered when there is an error previewing a QOI (Quite OK Image) file in File Explorer.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.SvgFileHandlerLoaded</td>
<td>Occurs when an SVG file handler is loaded.</td>
@@ -542,32 +698,12 @@ _If you want to find diagnostic data events in the source code, these two links
</tr>
</table>
### File Locksmith
<table style="width:100%">
<tr>
<th>Event Name</th>
<th>Description</th>
</tr>
<tr>
<td>Microsoft.PowerToys.FileLocksmith_EnableFileLocksmith</td>
<td>Triggered when File Locksmith is enabled.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FileLocksmith_Invoked</td>
<td>Occurs when File Locksmith is invoked.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FileLocksmith_InvokedRet</td>
<td>Triggered when File Locksmith invocation returns a result.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FileLocksmith_QueryContextMenuError</td>
<td>Occurs when there is an error querying the context menu for File Locksmith.</td>
</tr>
</table>
### Find My Mouse
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -583,7 +719,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Hosts File Editor
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -600,10 +740,22 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.HostsFileEditorOpenedEvent</td>
<td>Fires when Hosts File Editor is opened.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.HostEditorStartEvent</td>
<td>Triggered when the Hosts File Editor application starts. This logs the initialization of the Hosts File Editor UI with a timestamp.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.HostEditorStartFinishEvent</td>
<td>Triggered when the Hosts File Editor has completed loading and is ready for user interaction.</td>
</tr>
</table>
### Image Resizer
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -620,10 +772,18 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.ImageResizer_InvokedRet</td>
<td>Fires when the Image Resizer operation is completed and returns a result.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.ImageResizer_QueryContextMenuError</td>
<td>Triggered when there is an error querying the context menu for Image Resizer.</td>
</tr>
</table>
### Keyboard Manager
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -636,10 +796,22 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.KeyboardManager_AppSpecificShortcutRemapCount</td>
<td>Logs the number of application-specific shortcut remaps configured by the user.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.KeyboardManager_AppSpecificShortcutToKeyRemapInvoked</td>
<td>Logs each instance when an application-specific shortcut-to-key remap is used.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.KeyboardManager_AppSpecificShortcutToShortcutRemapInvoked</td>
<td>Logs each instance when an application-specific shortcut-to-shortcut remap is used.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.KeyboardManager_Error</td>
<td>Triggered when an error occurs in Keyboard Manager. This logs the error code, error message, and the method name where the error occurred.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.KeyboardManager_ErrorSendingKeyAndShortcutRemapLoadedConfiguration</td>
<td>Triggered when there is an error sending remapping configuration telemetry. This occurs when Keyboard Manager fails to report the loaded key and shortcut remap configurations.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.KeyboardManager_DailyAppSpecificShortcutToKeyRemapInvoked</td>
<td>Logs the daily count of application-specific shortcut-to-key remaps executed by the user.</td>
@@ -703,7 +875,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Light Switch
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -727,7 +903,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Mouse Highlighter
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -740,10 +920,18 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.MouseHighlighter_StartHighlightingSession</td>
<td>Occurs when a new highlighting session is started.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.MouseHighlighter_StartSpotlightSession</td>
<td>Triggered when a spotlight session is started in Mouse Highlighter. This occurs when the user activates the spotlight mode.</td>
</tr>
</table>
### Mouse Jump
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -767,7 +955,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Mouse Pointer Crosshairs
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -783,7 +975,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Mouse Without Borders
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -835,7 +1031,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### New+
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -867,7 +1067,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Peek
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -900,10 +1104,18 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.Peek_Settings</td>
<td>Triggered when the settings for Peek are modified.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.Peek_SpaceModeEnabled</td>
<td>Triggered when the Space key activation mode is enabled or disabled in Peek.</td>
</tr>
</table>
### PowerRename
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -935,7 +1147,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### PowerToys Run
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -976,14 +1192,14 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.RunPluginsSettingsEvent</td>
<td>Triggered when the settings for PowerToys Run plugins are accessed or modified.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.WindowWalker_EnableWindowWalker</td>
<td>Triggered when the Window Walker plugin is enabled.</td>
</tr>
</table>
### Quick Accent
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -999,7 +1215,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Registry Preview
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -1012,10 +1232,22 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.RegistryPreview_EnableRegistryPreview</td>
<td>Occurs when Registry Preview is enabled.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.RegistryPreviewEditorStartEvent</td>
<td>Triggered when the Registry Preview application starts. This logs the initialization of the Registry Preview UI with a timestamp.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.RegistryPreviewEditorStartFinishEvent</td>
<td>Triggered when the Registry Preview application has completed loading and is ready for user interaction.</td>
</tr>
</table>
### Screen Ruler
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -1035,7 +1267,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Shortcut Guide
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -1051,7 +1287,11 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Text Extractor
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
@@ -1075,15 +1315,15 @@ _If you want to find diagnostic data events in the source code, these two links
</table>
### Workspaces
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>
</tr>
<tr>
<td>Microsoft.PowerToys.Projects_CLIUsage</td>
<td>Logs usage of command-line arguments for launching apps.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.Workspaces_CreateEvent</td>
<td>Triggered when a new workspace is created.</td>
@@ -1105,13 +1345,21 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Triggered when a workspace is launched.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.Workspaces_Settings</td>
<td>Logs changes to workspaces settings.</td>
<td>Microsoft.PowerToys.WorkspacesEditorStartEvent</td>
<td>Triggered when the Workspaces Editor application starts. This logs the initialization of the Workspaces Editor UI with a timestamp.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.WorkspacesEditorStartFinishEvent</td>
<td>Triggered when the Workspaces Editor has completed loading and is ready for user interaction.</td>
</tr>
</table>
### ZoomIt
<table style="width:100%">
<table style="width:100%; table-layout:fixed">
<colgroup>
<col style="width:40%">
<col style="width:60%">
</colgroup>
<tr>
<th>Event Name</th>
<th>Description</th>

View File

@@ -18,6 +18,7 @@
<RegistryKey Root="$(var.RegistryScope)" Key="Software\Classes\powertoys\components">
<RegistryValue Type="string" Name="Module_CmdPal" Value="" KeyPath="yes" />
</RegistryKey>
<RemoveFile Id="RemoveOldCmdPalMsix" Name="Microsoft.CmdPal.UI_*.msix" On="install" />
<?if $(sys.BUILDARCH) = x64 ?>
<File Id="Microsoft.CmdPal.UI___var.CmdPalVersion_._x64.msix" Source="$(var.CmdPalBuildDir)AppPackages\Microsoft.CmdPal.UI_$(var.CmdPalVersion)_Test\Microsoft.CmdPal.UI_$(var.CmdPalVersion)_x64.msix" />
<?else?>

View File

@@ -16,13 +16,50 @@ DWORD WINAPI _checkTheme(LPVOID lpParam)
void ThemeListener::AddChangedHandler(THEME_HANDLE handle)
{
std::lock_guard<std::mutex> lock(handlesMutex);
handles.push_back(handle);
}
void ThemeListener::DelChangedHandler(THEME_HANDLE handle)
{
std::lock_guard<std::mutex> lock(handlesMutex);
auto it = std::find(handles.begin(), handles.end(), handle);
handles.erase(it);
if (it != handles.end())
{
handles.erase(it);
}
}
void ThemeListener::AddAppThemeChangedHandler(THEME_HANDLE handle)
{
std::lock_guard<std::mutex> lock(handlesMutex);
appThemeHandles.push_back(handle);
}
void ThemeListener::DelAppThemeChangedHandler(THEME_HANDLE handle)
{
std::lock_guard<std::mutex> lock(handlesMutex);
auto it = std::find(appThemeHandles.begin(), appThemeHandles.end(), handle);
if (it != appThemeHandles.end())
{
appThemeHandles.erase(it);
}
}
void ThemeListener::AddSystemThemeChangedHandler(THEME_HANDLE handle)
{
std::lock_guard<std::mutex> lock(handlesMutex);
systemThemeHandles.push_back(handle);
}
void ThemeListener::DelSystemThemeChangedHandler(THEME_HANDLE handle)
{
std::lock_guard<std::mutex> lock(handlesMutex);
auto it = std::find(systemThemeHandles.begin(), systemThemeHandles.end(), handle);
if (it != systemThemeHandles.end())
{
systemThemeHandles.erase(it);
}
}
void ThemeListener::CheckTheme()
@@ -48,13 +85,51 @@ void ThemeListener::CheckTheme()
WaitForSingleObject(hEvent, INFINITE);
auto _theme = ThemeHelpers::GetAppTheme();
if (AppTheme != _theme)
auto _appTheme = ThemeHelpers::GetAppTheme();
auto _systemTheme = ThemeHelpers::GetSystemTheme();
bool appThemeChanged = (AppTheme != _appTheme);
bool systemThemeChanged = (SystemTheme != _systemTheme);
if (appThemeChanged || systemThemeChanged)
{
AppTheme = _theme;
for (int i = 0; i < handles.size(); i++)
AppTheme = _appTheme;
SystemTheme = _systemTheme;
// Copy handlers under lock, then invoke outside lock to avoid deadlock
std::vector<THEME_HANDLE> handlesCopy;
std::vector<THEME_HANDLE> appThemeHandlesCopy;
std::vector<THEME_HANDLE> systemThemeHandlesCopy;
{
handles[i]();
std::lock_guard<std::mutex> lock(handlesMutex);
handlesCopy = handles;
if (appThemeChanged)
{
appThemeHandlesCopy = appThemeHandles;
}
if (systemThemeChanged)
{
systemThemeHandlesCopy = systemThemeHandles;
}
}
// Call generic handlers (backward compatible)
for (const auto& handler : handlesCopy)
{
handler();
}
// Call app theme specific handlers
for (const auto& handler : appThemeHandlesCopy)
{
handler();
}
// Call system theme specific handlers
for (const auto& handler : systemThemeHandlesCopy)
{
handler();
}
}
}

View File

@@ -3,6 +3,7 @@
#include <windows.h>
#include <iostream>
#include <vector>
#include <mutex>
typedef void (*THEME_HANDLE)();
DWORD WINAPI _checkTheme(LPVOID lpParam);
@@ -14,6 +15,7 @@ public:
ThemeListener()
{
AppTheme = ThemeHelpers::GetAppTheme();
SystemTheme = ThemeHelpers::GetSystemTheme();
dwThreadHandle = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)_checkTheme, this, 0, &dwThreadId);
}
~ThemeListener()
@@ -23,12 +25,20 @@ public:
}
Theme AppTheme;
Theme SystemTheme;
void ThemeListener::AddChangedHandler(THEME_HANDLE handle);
void ThemeListener::DelChangedHandler(THEME_HANDLE handle);
void ThemeListener::AddAppThemeChangedHandler(THEME_HANDLE handle);
void ThemeListener::DelAppThemeChangedHandler(THEME_HANDLE handle);
void ThemeListener::AddSystemThemeChangedHandler(THEME_HANDLE handle);
void ThemeListener::DelSystemThemeChangedHandler(THEME_HANDLE handle);
void CheckTheme();
private:
HANDLE dwThreadHandle;
DWORD dwThreadId;
std::vector<THEME_HANDLE> handles;
std::vector<THEME_HANDLE> appThemeHandles;
std::vector<THEME_HANDLE> systemThemeHandles;
mutable std::mutex handlesMutex;
};
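Putting the split listener API together: a minimal usage sketch, assuming consumer code similar to the tray icon changes later in this diff (`register_theme_handlers` and `handle_system_theme_change` are hypothetical names; handlers are the plain `THEME_HANDLE` function pointers declared above):

```cpp
#include <common/Themes/theme_listener.h>
#include <common/Themes/theme_helpers.h>

static ThemeListener theme_listener;

// Fires only when the Windows system theme changes; app-theme-only consumers
// would use AddAppThemeChangedHandler instead, while AddChangedHandler keeps
// firing on either change for backward compatibility.
static void handle_system_theme_change()
{
    const Theme systemTheme = ThemeHelpers::GetSystemTheme();
    (void)systemTheme; // ...update icons/brushes for the new system theme here...
}

void register_theme_handlers()
{
    theme_listener.AddSystemThemeChangedHandler(&handle_system_theme_change);
}
```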

View File

@@ -2,6 +2,7 @@
#include <Windows.h>
#include <algorithm>
#include <appxpackaging.h>
#include <exception>
#include <filesystem>
@@ -337,6 +338,30 @@ namespace package
}
}
}
// Sort by package version in descending order (newest first)
std::sort(matchedFiles.begin(), matchedFiles.end(), [](const std::wstring& a, const std::wstring& b) {
std::wstring nameA, nameB;
PACKAGE_VERSION versionA{}, versionB{};
bool gotA = GetPackageNameAndVersionFromAppx(a, nameA, versionA);
bool gotB = GetPackageNameAndVersionFromAppx(b, nameB, versionB);
// Files that failed to parse go to the end
if (!gotA)
return false;
if (!gotB)
return true;
// Compare versions: Major, Minor, Build, Revision (descending)
if (versionA.Major != versionB.Major)
return versionA.Major > versionB.Major;
if (versionA.Minor != versionB.Minor)
return versionA.Minor > versionB.Minor;
if (versionA.Build != versionB.Build)
return versionA.Build > versionB.Build;
return versionA.Revision > versionB.Revision;
});
}
catch (const std::exception& ex)
{

View File

@@ -7,22 +7,62 @@
Queries Windows for all connected monitors and saves their configuration
(position, size, DPI, primary status) to a JSON file that can be used
for testing the CursorWrap module.
By default, potentially identifying information (computer name, user name,
device names) is anonymized to protect privacy when sharing layout files.
.PARAMETER OutputPath
Path where the JSON file will be saved. Default: monitor_layout.json
Path where the JSON file will be saved. Default: cursorwrap_monitor_layout.json
.PARAMETER AddUserMachineNames
Include computer name and user name in the output. By default these are
blank to protect privacy when sharing layout files.
.PARAMETER AddDeviceNames
Include device names (e.g., \\.\DISPLAY1) in the output. By default these
are anonymized to "DISPLAY1", "DISPLAY2", etc. to reduce fingerprinting.
.PARAMETER Help
Show this help message and exit.
.EXAMPLE
.\Capture-MonitorLayout.ps1
Captures layout with privacy-safe defaults (no user/machine names).
.EXAMPLE
.\Capture-MonitorLayout.ps1 -OutputPath "my_setup.json"
Saves to a custom filename.
.EXAMPLE
.\Capture-MonitorLayout.ps1 -AddUserMachineNames
Includes computer name and user name in the output.
.EXAMPLE
.\Capture-MonitorLayout.ps1 -AddUserMachineNames -AddDeviceNames
Includes all identifying information (useful for personal debugging).
#>
param(
[Parameter(Mandatory=$false)]
[string]$OutputPath = "$($env:USERNAME)_monitor_layout.json"
[string]$OutputPath = "cursorwrap_monitor_layout.json",
[Parameter(Mandatory=$false)]
[switch]$AddUserMachineNames,
[Parameter(Mandatory=$false)]
[switch]$AddDeviceNames,
[Parameter(Mandatory=$false)]
[Alias("h", "?")]
[switch]$Help
)
# Show help if requested
if ($Help) {
Get-Help $MyInvocation.MyCommand.Path -Detailed
exit 0
}
# Add Windows Forms for screen enumeration
Add-Type -AssemblyName System.Windows.Forms
@@ -137,12 +177,20 @@ function Capture-MonitorLayout {
$screens = [System.Windows.Forms.Screen]::AllScreens
$monitors = @()
$monitorIndex = 1
foreach ($screen in $screens) {
$isPrimary = $screen.Primary
$bounds = $screen.Bounds
$dpi = Get-MonitorDPI -Screen $screen
# Anonymize device name by default to reduce fingerprinting
$deviceName = if ($AddDeviceNames) {
$screen.DeviceName
} else {
"DISPLAY$monitorIndex"
}
$monitor = [ordered]@{
left = $bounds.Left
top = $bounds.Top
@@ -153,10 +201,11 @@ function Capture-MonitorLayout {
dpi = $dpi
scaling_percent = [math]::Round(($dpi / 96.0) * 100, 0)
primary = $isPrimary
device_name = $screen.DeviceName
device_name = $deviceName
}
$monitors += $monitor
$monitorIndex++
# Display info
$primaryTag = if ($isPrimary) { " [PRIMARY]" } else { "" }
@@ -170,11 +219,11 @@ function Capture-MonitorLayout {
Write-Host " Bounds: [$($bounds.Left), $($bounds.Top), $($bounds.Right), $($bounds.Bottom)]"
}
# Create output object
# Create output object with privacy-safe defaults
$output = [ordered]@{
captured_at = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
computer_name = $env:COMPUTERNAME
user_name = $env:USERNAME
computer_name = if ($AddUserMachineNames) { $env:COMPUTERNAME } else { "" }
user_name = if ($AddUserMachineNames) { $env:USERNAME } else { "" }
monitor_count = $monitors.Count
monitors = $monitors
}
@@ -200,10 +249,22 @@ try {
Write-Host "`n" + ("=" * 80)
Write-Host "SUMMARY" -ForegroundColor Cyan
Write-Host ("=" * 80)
Write-Host "Configuration Name: $($layout.computer_name)"
if ($layout.computer_name) {
Write-Host "Configuration Name: $($layout.computer_name)"
}
Write-Host "Captured: $($layout.captured_at)"
Write-Host "Monitors: $($layout.monitor_count)"
# Privacy notice
if (-not $AddUserMachineNames -or -not $AddDeviceNames) {
Write-Host "`nPrivacy: " -NoNewline -ForegroundColor Yellow
$privacyNotes = @()
if (-not $AddUserMachineNames) { $privacyNotes += "user/machine names excluded" }
if (-not $AddDeviceNames) { $privacyNotes += "device names anonymized" }
Write-Host ($privacyNotes -join ", ") -ForegroundColor Yellow
Write-Host " Use -AddUserMachineNames and/or -AddDeviceNames to include." -ForegroundColor DarkGray
}
# Calculate desktop dimensions
$widths = @($layout.monitors | ForEach-Object { $_.width })
$heights = @($layout.monitors | ForEach-Object { $_.height })

View File

@@ -98,18 +98,16 @@ public sealed partial class AppListItem : ListItem
}
else
{
try
// do nothing if we fail to load an icon.
// Logging it would be too NOISY, there's really no need.
if (!string.IsNullOrEmpty(_app.IcoPath))
{
var stream = await ThumbnailHelper.GetThumbnail(_app.ExePath, true);
if (stream is not null)
{
heroImage = IconInfo.FromStream(stream);
}
heroImage = await TryLoadThumbnail(_app.IcoPath, jumbo: true, logOnFailure: false);
}
catch (Exception)
if (heroImage == null && !string.IsNullOrEmpty(_app.ExePath))
{
// do nothing if we fail to load an icon.
// Logging it would be too NOISY, there's really no need.
heroImage = await TryLoadThumbnail(_app.ExePath, jumbo: true, logOnFailure: false);
}
}
@@ -132,26 +130,19 @@ public sealed partial class AppListItem : ListItem
if (useThumbnails)
{
try
if (!string.IsNullOrEmpty(_app.IcoPath))
{
var stream = await ThumbnailHelper.GetThumbnail(_app.ExePath);
if (stream is not null)
{
icon = IconInfo.FromStream(stream);
}
}
catch (Exception ex)
{
Logger.LogDebug($"Failed to load icon for {AppIdentifier}:\n{ex}");
icon = await TryLoadThumbnail(_app.IcoPath, jumbo: false, logOnFailure: true);
}
icon = icon ?? new IconInfo(_app.IcoPath);
}
else
{
icon = new IconInfo(_app.IcoPath);
if (icon == null && !string.IsNullOrEmpty(_app.ExePath))
{
icon = await TryLoadThumbnail(_app.ExePath, jumbo: false, logOnFailure: true);
}
}
icon ??= new IconInfo(_app.IcoPath);
return icon;
}
@@ -183,4 +174,25 @@ public sealed partial class AppListItem : ListItem
return newCommands.ToArray();
}
private async Task<IconInfo?> TryLoadThumbnail(string path, bool jumbo, bool logOnFailure)
{
try
{
var stream = await ThumbnailHelper.GetThumbnail(path, jumbo);
if (stream is not null)
{
return IconInfo.FromStream(stream);
}
}
catch (Exception ex)
{
if (logOnFailure)
{
Logger.LogDebug($"Failed to load icon {path} for {AppIdentifier}:\n{ex}");
}
}
return null;
}
}

View File

@@ -1052,9 +1052,6 @@ public class Win32Program : IProgram
app.FullPath) :
app.IcoPath;
icoPath = icoPath.EndsWith(".lnk", System.StringComparison.InvariantCultureIgnoreCase) ?
app.FullPath :
icoPath;
return new AppItem()
{
Name = app.Name,

View File

@@ -33,23 +33,27 @@ static std::wstring SanitizeAndNormalize(const std::wstring& input)
// Normalize to NFC (Precomposed).
// Get the size needed for the normalized string, including null terminator.
int size = NormalizeString(NormalizationC, sanitized.c_str(), -1, nullptr, 0);
if (size <= 0)
int sizeEstimate = NormalizeString(NormalizationC, sanitized.c_str(), -1, nullptr, 0);
if (sizeEstimate <= 0)
{
return sanitized; // Return unaltered if normalization fails.
}
// Perform the normalization.
std::wstring normalized;
normalized.resize(size);
NormalizeString(NormalizationC, sanitized.c_str(), -1, &normalized[0], size);
normalized.resize(sizeEstimate);
int actualSize = NormalizeString(NormalizationC, sanitized.c_str(), -1, &normalized[0], sizeEstimate);
// Remove the explicit null terminator added by NormalizeString.
if (!normalized.empty() && normalized.back() == L'\0')
if (actualSize <= 0)
{
normalized.pop_back();
// Normalization failed, return sanitized string.
return sanitized;
}
// Resize to actual size minus the null terminator.
// actualSize includes the null terminator when input length is -1.
normalized.resize(static_cast<size_t>(actualSize) - 1);
return normalized;
}

View File

@@ -695,6 +695,38 @@ TEST_METHOD(VerifyUnicodeAndWhitespaceNormalizationRegex)
VerifyNormalizationHelper(UseRegularExpressions);
}
TEST_METHOD(VerifyRegexMetacharacterDollarSign)
{
CComPtr<IPowerRenameRegEx> renameRegEx;
Assert::IsTrue(CPowerRenameRegEx::s_CreateInstance(&renameRegEx) == S_OK);
DWORD flags = UseRegularExpressions;
Assert::IsTrue(renameRegEx->PutFlags(flags) == S_OK);
PWSTR result = nullptr;
Assert::IsTrue(renameRegEx->PutSearchTerm(L"$") == S_OK);
Assert::IsTrue(renameRegEx->PutReplaceTerm(L"_end") == S_OK);
unsigned long index = {};
Assert::IsTrue(renameRegEx->Replace(L"test.txt", &result, index) == S_OK);
Assert::AreEqual(L"test.txt_end", result);
CoTaskMemFree(result);
}
TEST_METHOD(VerifyRegexMetacharacterCaret)
{
CComPtr<IPowerRenameRegEx> renameRegEx;
Assert::IsTrue(CPowerRenameRegEx::s_CreateInstance(&renameRegEx) == S_OK);
DWORD flags = UseRegularExpressions;
Assert::IsTrue(renameRegEx->PutFlags(flags) == S_OK);
PWSTR result = nullptr;
Assert::IsTrue(renameRegEx->PutSearchTerm(L"^") == S_OK);
Assert::IsTrue(renameRegEx->PutReplaceTerm(L"start_") == S_OK);
unsigned long index = {};
Assert::IsTrue(renameRegEx->Replace(L"test.txt", &result, index) == S_OK);
Assert::AreEqual(L"start_test.txt", result);
CoTaskMemFree(result);
}
#ifndef TESTS_PARTIAL
};
}

View File

@@ -173,6 +173,10 @@
<value>Settings\tDouble-click</value>
<comment>Don't localize "\t" as that is what separates the click portion to be right aligned in the menu.</comment>
</data>
<data name="SETTINGS_MENU_TEXT_LEFTCLICK" xml:space="preserve">
<value>Settings\tLeft-click</value>
<comment>Don't localize "\t" as that is what separates the click portion to be right aligned in the menu. This is shown when Quick Access is disabled.</comment>
</data>
<data name="DOCUMENTATION_MENU_TEXT" xml:space="preserve">
<value>Documentation</value>
</data>

View File

@@ -81,3 +81,36 @@ void Trace::UpdateDownloadCompleted(bool success, const std::wstring& version)
TraceLoggingBoolean(TRUE, "UTCReplace_AppSessionGuid"),
TraceLoggingKeyword(PROJECT_KEYWORD_MEASURE));
}
void Trace::TrayIconLeftClick(bool quickAccessEnabled)
{
TraceLoggingWriteWrapper(
g_hProvider,
"TrayIcon_LeftClick",
TraceLoggingBoolean(quickAccessEnabled, "QuickAccessEnabled"),
ProjectTelemetryPrivacyDataTag(ProjectTelemetryTag_ProductAndServicePerformance),
TraceLoggingBoolean(TRUE, "UTCReplace_AppSessionGuid"),
TraceLoggingKeyword(PROJECT_KEYWORD_MEASURE));
}
void Trace::TrayIconDoubleClick(bool quickAccessEnabled)
{
TraceLoggingWriteWrapper(
g_hProvider,
"TrayIcon_DoubleClick",
TraceLoggingBoolean(quickAccessEnabled, "QuickAccessEnabled"),
ProjectTelemetryPrivacyDataTag(ProjectTelemetryTag_ProductAndServicePerformance),
TraceLoggingBoolean(TRUE, "UTCReplace_AppSessionGuid"),
TraceLoggingKeyword(PROJECT_KEYWORD_MEASURE));
}
void Trace::TrayIconRightClick(bool quickAccessEnabled)
{
TraceLoggingWriteWrapper(
g_hProvider,
"TrayIcon_RightClick",
TraceLoggingBoolean(quickAccessEnabled, "QuickAccessEnabled"),
ProjectTelemetryPrivacyDataTag(ProjectTelemetryTag_ProductAndServicePerformance),
TraceLoggingBoolean(TRUE, "UTCReplace_AppSessionGuid"),
TraceLoggingKeyword(PROJECT_KEYWORD_MEASURE));
}

View File

@@ -13,4 +13,9 @@ public:
// Auto-update telemetry
static void UpdateCheckCompleted(bool success, bool updateAvailable, const std::wstring& fromVersion, const std::wstring& toVersion);
static void UpdateDownloadCompleted(bool success, const std::wstring& version);
// Tray icon interaction telemetry
static void TrayIconLeftClick(bool quickAccessEnabled);
static void TrayIconDoubleClick(bool quickAccessEnabled);
static void TrayIconRightClick(bool quickAccessEnabled);
};

View File

@@ -7,6 +7,7 @@
#include "centralized_kb_hook.h"
#include "quick_access_host.h"
#include "hotkey_conflict_detector.h"
#include "trace.h"
#include <Windows.h>
#include <common/utils/resources.h>
@@ -14,6 +15,7 @@
#include <common/logger/logger.h>
#include <common/utils/elevation.h>
#include <common/Themes/theme_listener.h>
#include <common/Themes/theme_helpers.h>
#include "bug_report.h"
namespace
@@ -39,6 +41,7 @@ namespace
bool double_click_timer_running = false;
bool double_clicked = false;
POINT tray_icon_click_point;
std::optional<bool> last_quick_access_state; // Track the last known Quick Access state
static ThemeListener theme_listener;
static bool theme_adaptive_enabled = false;
@@ -129,6 +132,9 @@ void click_timer_elapsed()
double_click_timer_running = false;
if (!double_clicked)
{
// Log telemetry for single click (confirmed it's not a double click)
Trace::TrayIconLeftClick(get_general_settings().enableQuickAccess);
if (get_general_settings().enableQuickAccess)
{
open_quick_access_flyout_window();
@@ -194,6 +200,21 @@ LRESULT __stdcall tray_icon_window_proc(HWND window, UINT message, WPARAM wparam
case WM_RBUTTONUP:
case WM_CONTEXTMENU:
{
bool quick_access_enabled = get_general_settings().enableQuickAccess;
// Log telemetry
Trace::TrayIconRightClick(quick_access_enabled);
// Reload menu if Quick Access state has changed or is first time
if (h_menu && (!last_quick_access_state.has_value() || quick_access_enabled != last_quick_access_state.value()))
{
DestroyMenu(h_menu);
h_menu = nullptr;
h_sub_menu = nullptr;
}
last_quick_access_state = quick_access_enabled;
if (!h_menu)
{
h_menu = LoadMenu(reinterpret_cast<HINSTANCE>(&__ImageBase), MAKEINTRESOURCE(ID_TRAY_MENU));
@@ -201,17 +222,39 @@ LRESULT __stdcall tray_icon_window_proc(HWND window, UINT message, WPARAM wparam
if (h_menu)
{
static std::wstring settings_menuitem_label = GET_RESOURCE_STRING(IDS_SETTINGS_MENU_TEXT);
static std::wstring settings_menuitem_label_leftclick = GET_RESOURCE_STRING(IDS_SETTINGS_MENU_TEXT_LEFTCLICK);
static std::wstring close_menuitem_label = GET_RESOURCE_STRING(IDS_CLOSE_MENU_TEXT);
static std::wstring submit_bug_menuitem_label = GET_RESOURCE_STRING(IDS_SUBMIT_BUG_TEXT);
static std::wstring documentation_menuitem_label = GET_RESOURCE_STRING(IDS_DOCUMENTATION_MENU_TEXT);
static std::wstring quick_access_menuitem_label = GET_RESOURCE_STRING(IDS_QUICK_ACCESS_MENU_TEXT);
change_menu_item_text(ID_SETTINGS_MENU_COMMAND, settings_menuitem_label.data());
// Update Settings menu text based on Quick Access state
if (quick_access_enabled)
{
change_menu_item_text(ID_SETTINGS_MENU_COMMAND, settings_menuitem_label.data());
}
else
{
change_menu_item_text(ID_SETTINGS_MENU_COMMAND, settings_menuitem_label_leftclick.data());
}
change_menu_item_text(ID_CLOSE_MENU_COMMAND, close_menuitem_label.data());
change_menu_item_text(ID_REPORT_BUG_COMMAND, submit_bug_menuitem_label.data());
bool bug_report_disabled = is_bug_report_running();
EnableMenuItem(h_sub_menu, ID_REPORT_BUG_COMMAND, MF_BYCOMMAND | (bug_report_disabled ? MF_GRAYED : MF_ENABLED));
change_menu_item_text(ID_DOCUMENTATION_MENU_COMMAND, documentation_menuitem_label.data());
change_menu_item_text(ID_QUICK_ACCESS_MENU_COMMAND, quick_access_menuitem_label.data());
// Hide or show Quick Access menu item based on setting
if (!h_sub_menu)
{
h_sub_menu = GetSubMenu(h_menu, 0);
}
if (!quick_access_enabled)
{
// Remove Quick Access menu item when disabled
DeleteMenu(h_sub_menu, ID_QUICK_ACCESS_MENU_COMMAND, MF_BYCOMMAND);
}
}
if (!h_sub_menu)
{
@@ -242,6 +285,9 @@ LRESULT __stdcall tray_icon_window_proc(HWND window, UINT message, WPARAM wparam
}
case WM_LBUTTONDBLCLK:
{
// Log telemetry
Trace::TrayIconDoubleClick(get_general_settings().enableQuickAccess);
double_clicked = true;
open_settings_window(std::nullopt);
break;
@@ -293,7 +339,7 @@ static void handle_theme_change()
{
if (theme_adaptive_enabled)
{
tray_icon_data.hIcon = get_icon(theme_listener.AppTheme);
tray_icon_data.hIcon = get_icon(ThemeHelpers::GetSystemTheme());
Shell_NotifyIcon(NIM_MODIFY, &tray_icon_data);
}
}
@@ -310,7 +356,7 @@ void start_tray_icon(bool isProcessElevated, bool theme_adaptive)
{
theme_adaptive_enabled = theme_adaptive;
auto h_instance = reinterpret_cast<HINSTANCE>(&__ImageBase);
HICON const icon = theme_adaptive ? get_icon(theme_listener.AppTheme) : LoadIcon(h_instance, MAKEINTRESOURCE(APPICON));
HICON const icon = theme_adaptive ? get_icon(ThemeHelpers::GetSystemTheme()) : LoadIcon(h_instance, MAKEINTRESOURCE(APPICON));
if (icon)
{
UINT id_tray_icon = 1;
@@ -357,7 +403,7 @@ void start_tray_icon(bool isProcessElevated, bool theme_adaptive)
ChangeWindowMessageFilterEx(hwnd, WM_COMMAND, MSGFLT_ALLOW, nullptr);
tray_icon_created = Shell_NotifyIcon(NIM_ADD, &tray_icon_data) == TRUE;
theme_listener.AddChangedHandler(&handle_theme_change);
theme_listener.AddSystemThemeChangedHandler(&handle_theme_change);
// Register callback to update bug report menu item status
BugReportManager::instance().register_callback([](bool isRunning) {
@@ -389,7 +435,7 @@ void set_tray_icon_theme_adaptive(bool theme_adaptive)
if (theme_adaptive)
{
icon = get_icon(theme_listener.AppTheme);
icon = get_icon(ThemeHelpers::GetSystemTheme());
if (!icon)
{
Logger::warn(L"set_tray_icon_theme_adaptive: Failed to load theme adaptive icon, falling back to default");

2
tools/mcp/github-artifacts/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
node_modules/
package-lock.json

View File

@@ -0,0 +1,14 @@
import { existsSync } from "fs";
import { execSync } from "child_process";
import { fileURLToPath } from "url";
import { dirname } from "path";
const __dirname = dirname(fileURLToPath(import.meta.url));
process.chdir(__dirname);
if (!existsSync("node_modules")) {
console.log("[MCP] Installing dependencies...");
execSync("npm install", { stdio: "inherit" });
}
import("./server.js");

View File

@@ -0,0 +1,19 @@
{
"name": "github-artifacts",
"version": "1.0.0",
"description": "",
"main": "server.js",
"scripts": {
"test": "node test-github_issue_images.js && node test-github_issue_attachments.js",
"start": "node server.js"
},
"keywords": [],
"author": "",
"license": "ISC",
"type": "module",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.25.2",
"jszip": "^3.10.1",
"zod": "^3.25.76"
}
}

View File

@@ -0,0 +1,385 @@
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import fs from "fs/promises";
import path from "path";
import { execFile } from "child_process";
const server = new McpServer({
name: "issue-images",
version: "0.1.0"
});
const GH_API_PER_PAGE = 100;
const MAX_TEXT_FILE_BYTES = 100000;
// Limit to common text files to avoid binary blobs, huge payloads, and non-UTF8 noise.
const TEXT_EXTENSIONS = new Set([
".txt", ".log", ".json", ".xml", ".yaml", ".yml", ".md", ".csv",
".ini", ".config", ".conf", ".bat", ".ps1", ".sh", ".reg", ".etl"
]);
function extractImageUrls(markdownOrHtml) {
const urls = new Set();
// Markdown images: ![alt](url)
for (const m of markdownOrHtml.matchAll(/!\[[^\]]*?\]\((https?:\/\/[^\s)]+)\)/g)) {
urls.add(m[1]);
}
// HTML <img src="...">
for (const m of markdownOrHtml.matchAll(/<img[^>]+src="(https?:\/\/[^">]+)"/g)) {
urls.add(m[1]);
}
return [...urls];
}
function extractZipUrls(markdownOrHtml) {
const urls = new Set();
// Markdown links to .zip files: [text](url.zip)
for (const m of markdownOrHtml.matchAll(/\[[^\]]*?\]\((https?:\/\/[^\s)]+\.zip)\)/gi)) {
urls.add(m[1]);
}
// Plain URLs ending in .zip
for (const m of markdownOrHtml.matchAll(/(https?:\/\/[^\s<>"]+\.zip)/gi)) {
urls.add(m[1]);
}
return [...urls];
}
async function fetchJson(url, token) {
const res = await fetch(url, {
headers: {
"Accept": "application/vnd.github+json",
...(token ? { "Authorization": `Bearer ${token}` } : {}),
"X-GitHub-Api-Version": "2022-11-28",
"User-Agent": "issue-images-mcp"
}
});
if (!res.ok) throw new Error(`GitHub API failed: ${res.status} ${res.statusText}`);
return await res.json();
}
async function downloadBytes(url, token) {
const res = await fetch(url, {
headers: {
...(token ? { "Authorization": `Bearer ${token}` } : {}),
"User-Agent": "issue-images-mcp"
}
});
if (!res.ok) throw new Error(`Image download failed: ${res.status} ${res.statusText}`);
const buf = new Uint8Array(await res.arrayBuffer());
const ct = res.headers.get("content-type") || "image/png";
return { buf, mimeType: ct };
}
async function downloadZipBytes(url, token) {
const zipUrl = url.includes("?") ? url : `${url}?download=1`;
const tryFetch = async (useAuth) => {
const res = await fetch(zipUrl, {
headers: {
"Accept": "application/octet-stream",
...(useAuth && token ? { "Authorization": `Bearer ${token}` } : {}),
"User-Agent": "issue-images-mcp"
},
redirect: "follow"
});
if (!res.ok) throw new Error(`ZIP download failed: ${res.status} ${res.statusText}`);
const contentType = (res.headers.get("content-type") || "").toLowerCase();
const buf = new Uint8Array(await res.arrayBuffer());
return { buf, contentType };
};
let { buf, contentType } = await tryFetch(true);
const isZip = buf.length >= 4 && buf[0] === 0x50 && buf[1] === 0x4b;
if (!isZip) {
({ buf, contentType } = await tryFetch(false));
}
const isZipRetry = buf.length >= 4 && buf[0] === 0x50 && buf[1] === 0x4b;
if (!isZipRetry || contentType.includes("text/html") || buf.length < 100) {
throw new Error("ZIP download returned HTML or invalid data. Check permissions or rate limits.");
}
return buf;
}
function execFileAsync(file, args) {
return new Promise((resolve, reject) => {
execFile(file, args, (error, stdout, stderr) => {
if (error) {
reject(new Error(stderr || error.message));
} else {
resolve(stdout);
}
});
});
}
async function extractZipToFolder(zipPath, extractPath) {
if (process.platform === "win32") {
await execFileAsync("powershell", [
"-NoProfile",
"-NonInteractive",
"-Command",
`$ProgressPreference='SilentlyContinue'; Expand-Archive -Path \"${zipPath}\" -DestinationPath \"${extractPath}\" -Force -ErrorAction Stop | Out-Null`
]);
return;
}
await execFileAsync("unzip", ["-o", zipPath, "-d", extractPath]);
}
async function listFilesRecursively(dir) {
const entries = await fs.readdir(dir, { withFileTypes: true });
const files = [];
for (const entry of entries) {
const fullPath = path.join(dir, entry.name);
if (entry.isDirectory()) {
files.push(...await listFilesRecursively(fullPath));
} else {
files.push(fullPath);
}
}
return files;
}
async function pathExists(p) {
try {
await fs.access(p);
return true;
} catch {
return false;
}
}
async function fetchAllComments(owner, repo, issueNumber, token) {
let comments = [];
let page = 1;
while (true) {
const pageComments = await fetchJson(
`https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}/comments?per_page=${GH_API_PER_PAGE}&page=${page}`,
token
);
comments = comments.concat(pageComments);
if (pageComments.length < GH_API_PER_PAGE) break;
page++;
}
return comments;
}
async function fetchIssueAndComments(owner, repo, issueNumber, token) {
const issue = await fetchJson(`https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}`, token);
const comments = issue.comments > 0 ? await fetchAllComments(owner, repo, issueNumber, token) : [];
return { issue, comments };
}
function buildBlobs(issue, comments) {
return [issue.body || "", ...comments.map(c => c.body || "")].join("\n\n---\n\n");
}
server.registerTool(
"github_issue_images",
{
title: "GitHub Issue Images",
description: `Download and return images from a GitHub issue or pull request.
USE THIS TOOL WHEN:
- User asks about a GitHub issue/PR that contains screenshots, images, or visual content
- User wants to understand a bug report with attached images
- User asks to analyze, describe, or review images in an issue/PR
- User references a GitHub issue/PR URL and the context suggests images are relevant
- User asks about UI bugs, visual glitches, design issues, or anything visual in nature
WHAT IT DOES:
- Fetches all images from the issue/PR body and all comments
- Returns actual image data (not just URLs) so the LLM can see and analyze the images
- Supports PNG, JPEG, GIF, and other common image formats
EXAMPLES OF WHEN TO USE:
- "What does the bug in issue #123 look like?"
- "Can you see the screenshot in this PR?"
- "Analyze the images in microsoft/PowerToys#25595"
- "What UI problem is shown in this issue?"`,
inputSchema: {
owner: z.string(),
repo: z.string(),
issueNumber: z.number(),
maxImages: z.number().min(1).max(20).optional()
},
outputSchema: {
images: z.number(),
comments: z.number()
}
},
async ({ owner, repo, issueNumber, maxImages = 20 }) => {
try {
const token = process.env.GITHUB_TOKEN;
const { issue, comments } = await fetchIssueAndComments(owner, repo, issueNumber, token);
const blobs = buildBlobs(issue, comments);
const urls = extractImageUrls(blobs).slice(0, maxImages);
const content = [
{ type: "text", text: `Found ${urls.length} image(s) in issue #${issueNumber} (from ${comments.length} comments). Returning as image parts.` }
];
for (const url of urls) {
const { buf, mimeType } = await downloadBytes(url, token);
const b64 = Buffer.from(buf).toString("base64");
content.push({ type: "image", data: b64, mimeType });
content.push({ type: "text", text: `Image source: ${url}` });
}
const output = { images: urls.length, comments: comments.length };
return { content, structuredContent: output };
} catch (err) {
const message = err instanceof Error ? err.message : String(err);
return { content: [{ type: "text", text: `Error: ${message}` }], isError: true };
}
}
);
server.registerTool(
"github_issue_attachments",
{
title: "GitHub Issue Attachments",
description: `Download and extract ZIP file attachments from a GitHub issue or pull request.
USE THIS TOOL WHEN:
- User asks about diagnostic logs, crash reports, or debug information in an issue
- Issue contains ZIP attachments like PowerToysReport_*.zip, logs.zip, debug.zip
- User wants to analyze log files, configuration files, or system info from an issue
- User asks about error logs, stack traces, or diagnostic data attached to an issue
- Issue mentions attached files that need to be examined
WHAT IT DOES:
- Finds all ZIP file attachments in the issue body and comments
- Downloads and extracts each ZIP to a local folder
- Returns file listing and contents of text files (logs, json, xml, txt, etc.)
- Each ZIP is extracted to: {extractFolder}/{zipFileName}/
EXAMPLES OF WHEN TO USE:
- "What's in the diagnostic report attached to issue #39476?"
- "Can you check the logs in the PowerToysReport zip?"
- "Analyze the crash dump attached to this issue"
- "What error is shown in the attached log files?"`,
inputSchema: {
owner: z.string(),
repo: z.string(),
issueNumber: z.number(),
extractFolder: z.string(),
maxFiles: z.number().min(1).optional()
},
outputSchema: {
zips: z.number(),
extracted: z.number(),
extractedTo: z.array(z.string())
}
},
async ({ owner, repo, issueNumber, extractFolder, maxFiles = 50 }) => {
try {
const token = process.env.GITHUB_TOKEN;
const { issue, comments } = await fetchIssueAndComments(owner, repo, issueNumber, token);
const blobs = buildBlobs(issue, comments);
const zipUrls = extractZipUrls(blobs);
if (zipUrls.length === 0) {
return {
content: [{ type: "text", text: `No ZIP attachments found in issue #${issueNumber}.` }],
structuredContent: { zips: 0, extracted: 0, extractedTo: [] }
};
}
await fs.mkdir(extractFolder, { recursive: true });
const content = [
{ type: "text", text: `Found ${zipUrls.length} ZIP attachment(s) in issue #${issueNumber}. Extracting to: ${extractFolder}` }
];
let totalFilesReturned = 0;
const extractedPaths = [];
let extractedCount = 0;
for (const zipUrl of zipUrls) {
try {
const urlPath = new URL(zipUrl).pathname;
const zipFileName = path.basename(urlPath, ".zip");
const extractPath = path.join(extractFolder, zipFileName);
const zipPath = path.join(extractFolder, `${zipFileName}.zip`);
let extractedFiles = [];
const extractPathExists = await pathExists(extractPath);
if (extractPathExists) {
extractedFiles = await listFilesRecursively(extractPath);
}
if (!extractPathExists || extractedFiles.length === 0) {
const zipExists = await pathExists(zipPath);
if (!zipExists) {
const buf = await downloadZipBytes(zipUrl, token);
await fs.writeFile(zipPath, buf);
}
await fs.mkdir(extractPath, { recursive: true });
await extractZipToFolder(zipPath, extractPath);
extractedFiles = await listFilesRecursively(extractPath);
}
extractedPaths.push(extractPath);
extractedCount++;
const fileList = [];
const textContents = [];
for (const fullPath of extractedFiles) {
const relPath = path.relative(extractPath, fullPath).replace(/\\/g, "/");
const ext = path.extname(relPath).toLowerCase();
const stat = await fs.stat(fullPath);
const sizeKB = Math.round(stat.size / 1024);
fileList.push(` ${relPath} (${sizeKB} KB)`);
if (TEXT_EXTENSIONS.has(ext) && totalFilesReturned < maxFiles && stat.size < MAX_TEXT_FILE_BYTES) {
try {
const textContent = await fs.readFile(fullPath, "utf-8");
textContents.push({ path: relPath, content: textContent });
totalFilesReturned++;
} catch {
// Not valid UTF-8, skip
}
}
}
content.push({ type: "text", text: `\n📦 ${zipFileName}.zip extracted to: ${extractPath}\nFiles:\n${fileList.join("\n")}` });
for (const { path: fPath, content: fContent } of textContents) {
content.push({ type: "text", text: `\n--- ${fPath} ---\n${fContent}` });
}
} catch (err) {
const message = err instanceof Error ? err.message : String(err);
content.push({ type: "text", text: `❌ Failed to extract ${zipUrl}: ${message}` });
}
}
const output = { zips: zipUrls.length, extracted: extractedCount, extractedTo: extractedPaths };
return { content, structuredContent: output };
} catch (err) {
const message = err instanceof Error ? err.message : String(err);
return { content: [{ type: "text", text: `Error: ${message}` }], isError: true };
}
}
);
const transport = new StdioServerTransport();
await server.connect(transport);

View File

@@ -0,0 +1,141 @@
// Test script for github_issue_attachments tool
// Run with: node test-github_issue_attachments.js
// Make sure GITHUB_TOKEN is set in environment
import { spawn } from "child_process";
import fs from "fs/promises";
import path from "path";
import { fileURLToPath } from "url";
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const extractFolder = path.join(__dirname, "test-extracts");
const server = spawn("node", ["server.js"], {
stdio: ["pipe", "pipe", "inherit"],
env: { ...process.env }
});
// Send initialize request
const initRequest = {
jsonrpc: "2.0",
id: 1,
method: "initialize",
params: {
protocolVersion: "2024-11-05",
capabilities: {},
clientInfo: { name: "test-client", version: "1.0.0" }
}
};
server.stdin.write(JSON.stringify(initRequest) + "\n");
// Send list tools request
setTimeout(() => {
const listToolsRequest = {
jsonrpc: "2.0",
id: 2,
method: "tools/list",
params: {}
};
server.stdin.write(JSON.stringify(listToolsRequest) + "\n");
}, 500);
// Send call tool request - test with PowerToys issue that has ZIP attachments
setTimeout(() => {
const callToolRequest = {
jsonrpc: "2.0",
id: 3,
method: "tools/call",
params: {
name: "github_issue_attachments",
arguments: {
owner: "microsoft",
repo: "PowerToys",
issueNumber: 39476, // Has PowerToysReport_*.zip attachment
extractFolder: extractFolder,
maxFiles: 20
}
}
};
server.stdin.write(JSON.stringify(callToolRequest) + "\n");
}, 1000);
// Track summary
let summary = { zips: 0, files: 0, extractPath: "" };
let buffer = "";
// Read responses
server.stdout.on("data", (data) => {
buffer += data.toString();
// Try to parse complete JSON objects from buffer
const lines = buffer.split("\n");
buffer = lines.pop() || ""; // Keep incomplete line in buffer
for (const line of lines) {
if (!line.trim()) continue;
try {
const response = JSON.parse(line);
console.log("\n=== Response ===");
console.log("ID:", response.id);
if (response.result?.tools) {
console.log("Tools:", response.result.tools.map(t => t.name));
} else if (response.result?.content) {
for (const item of response.result.content) {
if (item.type === "text") {
// Truncate long file-content blocks for display (the server prefixes them with "\n--- <path> ---")
const text = item.text;
if (text.trimStart().startsWith("---") && text.length > 500) {
console.log(text.substring(0, 500) + "\n... [truncated]");
} else {
console.log(text);
}
// Track stats
if (text.includes("ZIP attachment")) {
const match = text.match(/Found (\d+) ZIP/);
if (match) summary.zips = parseInt(match[1]);
}
if (text.includes("extracted to:")) {
summary.extractPath = text.match(/extracted to: (.+)/)?.[1] || "";
}
if (text.includes("Files:")) {
summary.files = (text.match(/ /g) || []).length;
}
}
}
} else {
console.log("Result:", JSON.stringify(response.result, null, 2));
}
} catch (e) {
// Likely incomplete JSON, will be in next chunk
}
}
});
// Exit after 60 seconds (ZIP download may take time)
setTimeout(() => {
console.log("\n" + "=".repeat(50));
console.log("=== Test Summary ===");
console.log("=".repeat(50));
console.log(`ZIP files found: ${summary.zips}`);
console.log(`Files extracted: ${summary.files}`);
if (summary.extractPath) {
console.log(`Extract location: ${summary.extractPath}`);
}
console.log("=".repeat(50));
console.log("Cleaning up extracted files...");
fs.rm(extractFolder, { recursive: true, force: true })
.then(() => {
console.log("Cleanup done.");
})
.catch((err) => {
console.log(`Cleanup failed: ${err.message}`);
})
.finally(() => {
console.log("Test complete!");
server.kill();
process.exit(0);
});
}, 60000);


@@ -0,0 +1,118 @@
// Simple test script - run with: node test-github_issue_images.js
// Make sure GITHUB_TOKEN is set in environment
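// Example invocation (placeholder token): GITHUB_TOKEN=ghp_xxx node test-github_issue_images.js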
import { spawn } from "child_process";
const server = spawn("node", ["server.js"], {
stdio: ["pipe", "pipe", "inherit"],
env: { ...process.env }
});
// Send initialize request
const initRequest = {
jsonrpc: "2.0",
id: 1,
method: "initialize",
params: {
protocolVersion: "2024-11-05",
capabilities: {},
clientInfo: { name: "test-client", version: "1.0.0" }
}
};
server.stdin.write(JSON.stringify(initRequest) + "\n");
// Send list tools request
setTimeout(() => {
const listToolsRequest = {
jsonrpc: "2.0",
id: 2,
method: "tools/list",
params: {}
};
server.stdin.write(JSON.stringify(listToolsRequest) + "\n");
}, 500);
// Send call tool request (test with a real issue)
setTimeout(() => {
const callToolRequest = {
jsonrpc: "2.0",
id: 3,
method: "tools/call",
params: {
name: "github_issue_images",
arguments: {
owner: "microsoft",
repo: "PowerToys",
issueNumber: 25595, // 315 comments, many images - tests pagination!
maxImages: 5
}
}
};
server.stdin.write(JSON.stringify(callToolRequest) + "\n");
}, 1000);
// Track summary
let summary = { images: 0, totalKB: 0, text: "" };
let gotToolResponse = false;
let buffer = "";
// Read responses
server.stdout.on("data", (data) => {
buffer += data.toString();
// Try to parse complete JSON objects from buffer
const lines = buffer.split("\n");
buffer = lines.pop() || ""; // Keep incomplete line in buffer
for (const line of lines) {
if (!line.trim()) continue;
try {
const response = JSON.parse(line);
console.log("\n=== Response ===");
console.log("ID:", response.id);
if (response.result?.tools) {
console.log("Tools:", response.result.tools.map(t => t.name));
} else if (response.result?.content) {
gotToolResponse = true;
let imageCount = 0;
for (const item of response.result.content) {
if (item.type === "text") {
console.log("Text:", item.text);
summary.text = item.text;
} else if (item.type === "image") {
imageCount++;
const sizeKB = Math.round(item.data.length * 0.75 / 1024); // base64 to actual size
console.log(` [Image ${imageCount}] ${item.mimeType} - ${sizeKB} KB downloaded`);
summary.images++;
summary.totalKB += sizeKB;
}
}
} else {
console.log("Result:", JSON.stringify(response.result, null, 2));
}
} catch (e) {
// Likely incomplete JSON, will be in next chunk
}
}
});
// Exit after 60 seconds (more time for downloads)
setTimeout(() => {
console.log("\n" + "=".repeat(50));
console.log("=== Test Summary ===");
console.log("=".repeat(50));
if (summary.text) {
console.log(summary.text);
}
if (summary.images > 0) {
console.log(`Total images downloaded: ${summary.images}`);
console.log(`Total size: ${summary.totalKB} KB`);
} else if (!gotToolResponse) {
console.log("No tool response received yet. The request may still be running or was rate-limited.");
}
console.log("=".repeat(50));
console.log("Test complete!");
server.kill();
process.exit(0);
}, 60000);