PowerToys/.github/prompts/review-issue.prompt.md
Gordon Lam 961a65f470 Update VSCode task for streamline build + fix prompts syntax error (#44605)
## Summary of the Pull Request
This pull request introduces several improvements and updates across the
repository, focusing on build tooling, prompt configuration for AI
agents, and documentation clarity. The most significant changes include
the addition of new VSCode build tasks, updates to prompt files to use a
newer AI model, enhancements to the build script for flexibility, and
refinements to documentation and style rules.

### Enabled scenario
* With any file active in VSCode, press "Ctrl + Shift + B" and the correct build runs for you.
* When running a task (whether from the Terminal menu or via Ctrl + Shift + P), the build options are listed:
<img width="1210" height="253" alt="image"
src="https://github.com/user-attachments/assets/09fbef16-55ce-42d5-a9ec-74111be83472"
/>

**Build tooling and automation:**

* Added a new `.vscode/tasks.json` file with configurable build tasks
for PowerToys, supporting quick and customizable builds with platform
and configuration selection.
* Enhanced `tools/build/build.ps1` to support an optional `-Path`
parameter for building projects in a specified directory, updated
parameter documentation, and improved logic to resolve the working
directory.
[[1]](diffhunk://#diff-7a444242b2a6d9c642341bd2ef45f51ba5698ad7827e5136e85eb483863967a7R14-R16)
[[2]](diffhunk://#diff-7a444242b2a6d9c642341bd2ef45f51ba5698ad7827e5136e85eb483863967a7R27-R30)
[[3]](diffhunk://#diff-7a444242b2a6d9c642341bd2ef45f51ba5698ad7827e5136e85eb483863967a7R51)
[[4]](diffhunk://#diff-7a444242b2a6d9c642341bd2ef45f51ba5698ad7827e5136e85eb483863967a7L81-R93)
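
As a minimal sketch, a configurable build task of this kind could look like the following `tasks.json`. The labels, input ids, options, and defaults below are illustrative assumptions, not the exact contents of the file added in this PR (only the `-Path` parameter is confirmed by the description):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Build PowerToys (current file's project)",
      "type": "shell",
      "command": "pwsh",
      "args": [
        "-File", "tools/build/build.ps1",
        "-Path", "${fileDirname}",
        "-Platform", "${input:platform}",
        "-Configuration", "${input:configuration}"
      ],
      "group": { "kind": "build", "isDefault": true }
    }
  ],
  "inputs": [
    {
      "id": "platform",
      "type": "pickString",
      "description": "Target platform",
      "options": ["x64", "ARM64"],
      "default": "x64"
    },
    {
      "id": "configuration",
      "type": "pickString",
      "description": "Build configuration",
      "options": ["Debug", "Release"],
      "default": "Debug"
    }
  ]
}
```

Marking the task as the default build task is what makes Ctrl + Shift + B pick it up; the `${fileDirname}` variable is how VSCode resolves the directory of the active file.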


**AI agent prompt configuration:**

* Updated all prompt files in `.github/prompts/` to use the
`GPT-5.1-Codex-Max` model instead of older models, and standardized the
agent field format.
[[1]](diffhunk://#diff-7a5c9b18594ff83fda2c191fd5a401ca01b74451e8949dc09e14eabee15de165L1-R2)
[[2]](diffhunk://#diff-f48674f7557a6c623bb48120c41b4546b20b32f741cc13d82076b0f4b2375d98L1-R2)
[[3]](diffhunk://#diff-a6831d9c98a26487c89c39532ec54d26f8987a8bdc88c5becb9368e9d7e589b9L1-R2)
[[4]](diffhunk://#diff-60e145ef3296b0cb4bec35363cc8afbfe0b6b7bd0c7785fe16a11d98e38c6f29L1-R2)
[[5]](diffhunk://#diff-6a7664740d6984e73a33254333a302a7e258c638a17134921c53967b4965a304L1-R3)
[[6]](diffhunk://#diff-7b246ee6421c503c22d7994406b822ede18d7d0c791b7667a55fcd50524fb0b0L1-R2)
* Improved clarity and consistency in the description and instructions
within the `review-issue.prompt.md` file.

**Documentation and style rules:**

* Updated `.github/copilot-instructions.md` to clarify that C++ and C#
style analyzers and formatters are now enforced from the `src/`
directory, not the repo root, and made the C++ formatting rule more
precise.

## PR Checklist
- [N/A] Closes: #xxx
- [N/A] **Tests:** Added/updated and all pass

## Validation Steps Performed
- Not run (config/docs/build-script changes only).
2026-01-08 15:41:51 +08:00


---
agent: 'agent'
model: GPT-5.1-Codex-Max
description: "You are a GitHub issue review and planning expert; score (0-100) and write one implementation plan. Outputs: overview.md, implementation-plan.md."
---

GOAL

For #{{issue_number}} produce:

  1. Generated Files/issueReview/{{issue_number}}/overview.md
  2. Generated Files/issueReview/{{issue_number}}/implementation-plan.md

Inputs

Determine the required input {{issue_number}} from the invocation context; if it is missing, ask for the value or note it as a gap.

CONTEXT (brief)

Ground evidence using `gh issue view {{issue_number}} --json number,title,body,author,createdAt,updatedAt,state,labels,milestone,reactions,comments,linkedPullRequests`, and download referenced images to better understand the issue context. Locate source code in the current workspace; feel free to use `rg`/`git grep`. Link related issues and PRs.

OVERVIEW.MD

Summary

Issue, state, milestone, labels. Signals: 👍/❤️/👎, comment count, last activity, linked PRs.

At-a-Glance Score Table

Present all ratings in a compact table for quick scanning:

| Dimension | Score | Assessment | Key Drivers |
| --- | --- | --- | --- |
| A) Business Importance | X/100 | Low/Medium/High | Top 2 factors with scores |
| B) Community Excitement | X/100 | Low/Medium/High | Top 2 factors with scores |
| C) Technical Feasibility | X/100 | Low/Medium/High | Top 2 factors with scores |
| D) Requirement Clarity | X/100 | Low/Medium/High | Top 2 factors with scores |
| Overall Priority | X/100 | Low/Medium/High/Critical | Average or weighted summary |
| Effort Estimate | X days (T-shirt) | XS/S/M/L/XL/XXL/Epic | Type: bug/feature/chore |
| Similar Issues Found | X open, Y closed | Quick reference to related work | |
| Potential Assignees | @username, @username | Top contributors to module | |

Assessment bands: 0-25 Low, 26-50 Medium, 51-75 High, 76-100 Critical
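
For reference, the band mapping above can be sketched as a small function (illustrative only; the function name is not part of the prompt's required output):

```python
def assessment_band(score: int) -> str:
    """Map a 0-100 score to the assessment band used in the At-a-Glance table."""
    if not 0 <= score <= 100:
        raise ValueError("score must be in [0, 100]")
    if score <= 25:
        return "Low"
    if score <= 50:
        return "Medium"
    if score <= 75:
        return "High"
    return "Critical"
```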

Ratings (0-100) — add evidence & short rationale

A) Business Importance

  • Labels (priority/security/regression): ≤35
  • Milestone/roadmap: ≤25
  • Customer/contract impact: ≤20
  • Unblocks/platform leverage: ≤20

B) Community Excitement

  • 👍+❤️ normalized: ≤45
  • Comment volume & unique participants: ≤25
  • Recent activity (≤30d): ≤15
  • Duplicates/related issues: ≤15

C) Technical Feasibility

  • Contained surface/clear seams: ≤30
  • Existing patterns/utilities: ≤25
  • Risk (perf/sec/compat) manageable: ≤25
  • Testability & CI support: ≤20

D) Requirement Clarity

  • Behavior/repro/constraints: ≤60
  • Non-functionals (perf/sec/i18n/a11y): ≤25
  • Decision owners/acceptance signals: ≤15
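
Each dimension score is the sum of its factor scores, with each factor clamped to the ceiling listed above. A minimal sketch of that capping logic (the factor keys and sample values here are hypothetical):

```python
def dimension_score(factors: dict[str, float], caps: dict[str, float]) -> int:
    """Sum factor scores, clamping each factor to its ceiling, capped at 100."""
    total = sum(min(factors.get(name, 0), cap) for name, cap in caps.items())
    return min(round(total), 100)

# Business Importance ceilings from the rubric above
business_caps = {"labels": 35, "milestone": 25, "customer": 20, "unblocks": 20}
```

For example, a raw labels signal of 40 is clamped to the 35-point ceiling before summing.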

Effort

Days + T-shirt (XS 0.5-1d, S 1-2, M 2-4, L 4-7, XL 7-14, XXL 14-30, Epic >30).
Type/level: bug/feature/chore/docs/refactor/test-only; severity/value tier.
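
The T-shirt bands above can be sketched as a lookup on the upper bound of each range (illustrative; boundary values shared by two bands, e.g. exactly 1 day, are resolved to the smaller size here):

```python
def tshirt_size(days: float) -> str:
    """Map an effort estimate in days to a T-shirt size band."""
    bands = [(1, "XS"), (2, "S"), (4, "M"), (7, "L"), (14, "XL"), (30, "XXL")]
    for upper, size in bands:
        if days <= upper:
            return size
    return "Epic"
```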

Suggested Actions

Provide actionable recommendations for issue triage and assignment:

A) Requirement Clarification (if Clarity score <50)

When Requirement Clarity (Dimension D) is Medium or Low:

  • Identify specific gaps in issue description: missing repro steps, unclear expected behavior, undefined acceptance criteria, missing non-functional requirements
  • Draft 3-5 clarifying questions to post as issue comment
  • Suggest additional information needed: screenshots, logs, environment details, OS version, PowerToys version, error messages
  • If behavior is ambiguous, propose 2-3 interpretation scenarios and ask reporter to confirm
  • Example questions:
    • "Can you provide exact steps to reproduce this issue?"
    • "What is the expected behavior vs. what you're actually seeing?"
    • "Does this happen on Windows 10, 11, or both?"
    • "Can you attach a screenshot or screen recording?"

B) Correct Label Suggestions

  • Analyze issue type, module, and severity to suggest missing or incorrect labels
  • Recommend labels from: Issue-Bug, Issue-Feature, Issue-Docs, Issue-Task, Priority-High, Priority-Medium, Priority-Low, Needs-Triage, Needs-Author-Feedback, Product-<ModuleName>, etc.
  • If Requirement Clarity is low (<50), add Needs-Author-Feedback label
  • If current labels are incorrect or incomplete, provide specific label changes with rationale

C) Find Similar Issues & Past Fixes

  • Search for similar issues using `gh issue list --search "keywords" --state all --json number,title,state,closedAt`
  • Identify patterns: duplicate issues, related bugs, or similar feature requests
  • For closed issues, find linked PRs that fixed them: check linkedPullRequests in issue data
  • Provide 3-5 examples of similar issues with format: #<number> - <title> (closed by PR #<pr>) or (still open)

D) Identify Subject Matter Experts

  • Use git blame/log to find who fixed similar issues in the past
  • Search for PR authors who touched relevant files: `git log --all --format='%aN' -- <file_paths> | sort | uniq -c | sort -rn | head -5`
  • Check issue/PR history for frequent contributors to the affected module
  • Suggest 2-3 potential assignees with context: @<username> - <reason> (e.g., "fixed similar rendering bug in #12345", "maintains FancyZones module")
  • Use semantic_search tool to find similar issues, code patterns, or past discussions
  • Search queries should include: issue keywords, module names, error messages, feature descriptions
  • Cross-reference semantic results with GitHub issue search for comprehensive coverage

Output format for Suggested Actions section in overview.md:

## Suggested Actions

### Clarifying Questions (if Clarity <50)
Post these questions as issue comment to gather missing information:
1. <question>
2. <question>
3. <question>

**Recommended label**: `Needs-Author-Feedback`

### Label Recommendations
- Add: `<label>` - <reason>
- Remove: `<label>` - <reason>
- Current labels are appropriate ✓

### Similar Issues Found
1. #<number> - <title> (<state>, closed by PR #<pr> on <date>)
2. #<number> - <title> (<state>)
...

### Potential Assignees
- @<username> - <reason>
- @<username> - <reason>

### Related Code/Discussions
- <semantic search findings>

IMPLEMENTATION-PLAN.MD

  1. Problem Framing — restate problem; current vs expected; scope boundaries.
  2. Layers & Files — layers (UI/domain/data/infra/build). For each, list files/dirs to modify and new files (exact paths + why). Prefer repo patterns; cite examples/PRs.
  3. Pattern Choices — reuse existing; if new, justify trade-offs & transition.
  4. Fundamentals (brief plan or N/A + reason):
  • Performance (hot paths, allocs, caching/streaming)
  • Security (validation, authN/Z, secrets, SSRF/XSS/CSRF)
  • G11N/L10N (resources, number/date, pluralization)
  • Compatibility (public APIs, formats, OS/runtime/toolchain)
  • Extensibility (DI seams, options/flags, plugin points)
  • Accessibility (roles, labels, focus, keyboard, contrast)
  • SOLID & repo conventions (naming, folders, dependency direction)
  5. Logging & Exception Handling
  • Where to log; levels; structured fields; correlation/traces.
  • What to catch vs rethrow; retries/backoff; user-visible errors.
  • Privacy: never log secrets/PII; redaction policy.
  6. Telemetry (optional — business metrics only)
  • Events/metrics (name, when, props); success signal; privacy/sampling; dashboards/alerts.
  7. Risks & Mitigations — flags/canary/shadow-write/config guards.
  8. Task Breakdown (agent-ready) — table (leave a blank line before the header so Markdown renders correctly):

| Task | Intent | Files/Areas | Steps | Tests (brief) | Owner (Agent/Human) | Human interaction needed? (why) |
| --- | --- | --- | --- | --- | --- | --- |
  9. Tests to Add (only)
  • Unit: targets, cases (success/edge/error), mocks/fixtures, path, notes.
  • UI (if applicable): flows, locator strategy, env/data/flags, path, flake mitigation.