Compare commits

..

1 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Shawn Yuan (from Dev Box) | 8f87c040b2 | fix empty endpoint issue | 2025-12-24 15:49:10 +08:00 |
499 changed files with 6375 additions and 26754 deletions

View File

@@ -330,9 +330,6 @@ HHH
riday
YYY
# Unicode
precomposed
# GitHub issue/PR commands
azp
feedbackhub

View File

@@ -32,7 +32,6 @@ advfirewall
AFeature
affordances
AFX
agentskills
AGGREGATABLE
AHK
AHybrid
@@ -65,9 +64,6 @@ apidl
APIENTRY
APIIs
Apm
APMPOWERSTATUSCHANGE
APMRESUMEAUTOMATIC
APMRESUMESUSPEND
APPBARDATA
APPEXECLINK
appext
@@ -210,19 +206,15 @@ certmgr
cfp
CHANGECBCHAIN
changecursor
checkmarks
CHILDACTIVATE
CHILDWINDOW
CHOOSEFONT
CIBUILD
cidl
CIELCh
cim
CImage
cla
claude
CLASSDC
classmethod
CLASSNOTAVAILABLE
CLEARTYPE
clickable
@@ -372,9 +364,7 @@ DEFAULTFLAGS
DEFAULTICON
defaultlib
DEFAULTONLY
DEFAULTSIZE
DEFAULTTONEAREST
Defaulttonearest
DEFAULTTONULL
DEFAULTTOPRIMARY
DEFERERASE
@@ -839,11 +829,9 @@ ITHUMBNAIL
IUI
IUWP
IWIC
jeli
jfif
jgeosdfsdsgmkedfgdfgdfgbkmhcgcflmi
jjw
JOBOBJECT
jobject
jpe
jpnime
@@ -877,7 +865,6 @@ lastcodeanalysissucceeded
LASTEXITCODE
LAYOUTRTL
lbl
Lbuttondown
LCh
lcid
LCIDTo
@@ -1000,7 +987,6 @@ maxversiontested
mber
MBM
MBR
Mbuttondown
MDICHILD
MDL
mdtext
@@ -1042,7 +1028,6 @@ mmi
mmsys
mobileredirect
mockapi
modelcontextprotocol
MODALFRAME
MODESPRUNED
MONITORENUMPROC
@@ -1351,7 +1336,6 @@ Pomodoro
Popups
POPUPWINDOW
POSITIONITEM
POWERBROADCAST
POWERRENAMECONTEXTMENU
powerrenameinput
POWERRENAMETEST
@@ -1461,7 +1445,6 @@ RAWINPUTHEADER
RAWMODE
RAWPATH
rbhid
Rbuttondown
rclsid
RCZOOMIT
remotedesktop
@@ -1497,7 +1480,6 @@ remoteip
Removelnk
renamable
RENAMEONCOLLISION
RENDERFULLCONTENT
reparented
reparenting
reportfileaccesses
@@ -1612,7 +1594,7 @@ sharpfuzz
SHCNE
SHCNF
SHCONTF
shcore
Shcore
shellapi
SHELLDETAILS
SHELLDLL
@@ -1711,7 +1693,6 @@ srw
srwlock
sse
ssf
sszzz
STACKFRAME
stackoverflow
STARTF
@@ -1722,7 +1703,6 @@ STARTUPINFOW
startupscreen
STATFLAG
STATICEDGE
staticmethod
STATSTG
stdafx
STDAPI
@@ -1740,7 +1720,6 @@ STICKYKEYS
sticpl
storelogo
stprintf
streamable
streamjsonrpc
STRINGIZE
stringtable
@@ -1760,7 +1739,6 @@ SUBMODULEUPDATE
subresource
Superbar
sut
swe
svchost
SVGIn
SVGIO
@@ -1768,7 +1746,6 @@ svgz
SVSI
SWFO
SWP
Swp
SWPNOSIZE
SWPNOZORDER
SWRESTORE
@@ -1788,7 +1765,6 @@ syskeydown
SYSKEYUP
SYSLIB
SYSMENU
Sysmenu
systemai
SYSTEMAPPS
SYSTEMMODAL
@@ -1834,6 +1810,7 @@ TILEDWINDOW
TILLSON
timedate
timediff
timeunion
timeutil
TITLEBARINFO
Titlecase
@@ -1891,7 +1868,6 @@ uild
uitests
UITo
ULONGLONG
Ultrawide
ums
UMax
UMin
@@ -1967,7 +1943,6 @@ visualeffects
vkey
vmovl
VMs
vnd
vorrq
VOS
vpaddlq
@@ -2115,7 +2090,6 @@ Wwanpp
xap
XAxis
XButton
Xbuttondown
xclip
xcopy
XDeployment

View File

@@ -273,6 +273,3 @@ St&yle
# Usernames with numbers
# 0x6f677548 is a user name, but the user folder causes a flag
\bx6f677548\b
# Microsoft Store URLs and product IDs
ms-windows-store://\S+

View File

@@ -1,92 +0,0 @@
---
description: 'Implements fixes for GitHub issues based on implementation plans'
name: 'FixIssue'
tools: ['read', 'edit', 'search', 'execute', 'agent', 'usages', 'problems', 'changes', 'testFailure', 'github/*', 'github.vscode-pull-request-github/*']
argument-hint: 'GitHub issue number (e.g., #12345)'
infer: true
---
# FixIssue Agent
You are an **IMPLEMENTATION AGENT** specialized in executing implementation plans to fix GitHub issues.
## Identity & Expertise
- Expert at translating plans into working code
- Deep knowledge of PowerToys codebase patterns and conventions
- Skilled at writing tests, handling edge cases, and validating builds
- You follow plans precisely while handling ambiguity gracefully
## Goal
For the given **issue_number**, execute the implementation plan and produce:
1. Working code changes applied directly to the repository
2. `Generated Files/issueFix/{{issue_number}}/pr-description.md` — PR-ready description
3. `Generated Files/issueFix/{{issue_number}}/manual-steps.md` — Only if human action needed
## Core Directive
**Follow the implementation plan in `Generated Files/issueReview/{{issue_number}}/implementation-plan.md` as the single source of truth.**
If the plan doesn't exist, invoke the PlanIssue agent first via `runSubagent`.
## Working Principles
- **Plan First**: Read and understand the entire implementation plan before coding
- **Validate Always**: For each change: Edit → Build → Verify → Commit. Never proceed if build fails.
- **Atomic Commits**: Each commit must be self-contained, buildable, and meaningful
- **Ask, Don't Guess**: When uncertain, insert `// TODO(Human input needed): <question>` and document in manual-steps.md
## Strategy
**Core Loop** — For every unit of work:
1. **Edit**: Make focused changes to implement one logical piece
2. **Build**: Run `tools\build\build.cmd` and check for exit code 0
3. **Verify**: Use `problems` tool for lint/compile errors; run relevant tests
4. **Commit**: Only after build passes — use `.github/prompts/create-commit-title.prompt.md`
Never skip steps. Never commit broken code. Never proceed if build fails.
**Feature-by-Feature E2E**: For big scenarios with multiple features, complete each feature end-to-end before moving to the next:
- Settings UI → Functionality → Logging → Tests (for Feature 1)
- Then repeat for Feature 2
- Benefits: Each feature is self-contained, testable, easier to review, can ship incrementally
**Large Changes** (3+ files or cross-module):
- Use `tools\build\New-WorktreeFromBranch.ps1` for isolated worktrees
- Create separate branches per feature (e.g., `issue/{{issue_number}}-export`, `issue/{{issue_number}}-import`)
- Merge feature branches back after each is validated
**Recovery**: If implementation goes wrong:
- Create a checkpoint branch before risky changes
- On failure: branch from last known-good state, cherry-pick working changes, abandon broken branch
- For complex changes, consider multiple smaller PRs
## Guidelines
**DO**:
- Follow the plan exactly
- Validate build before every commit — **NEVER commit broken code**
- Use `.github/prompts/create-commit-title.prompt.md` for commit messages
- Add comprehensive tests for changed behavior
- Use worktrees for large changes (3+ files or cross-module)
- Document deviations from plan
**DON'T**:
- Implement everything in a single massive commit
- Continue after a failed build without fixing
- Make drive-by refactors outside issue scope
- Skip tests for behavioral changes
- Add noisy logs in hot paths
- Break IPC/JSON contracts without updating both sides
- Introduce dependencies without documenting in NOTICE.md
## References
- [Build Guidelines](../../tools/build/BUILD-GUIDELINES.md) — Build commands and validation
- [Coding Style](../../doc/devdocs/development/style.md) — Formatting and conventions
- [AGENTS.md](../../AGENTS.md) — Full contributor guide
## Parameter
- **issue_number**: Extract from `#123`, `issue 123`, or plain number. If missing, ask user.

View File

@@ -1,65 +0,0 @@
---
description: 'Analyzes GitHub issues to produce overview and implementation plans'
name: 'PlanIssue'
tools: ['execute', 'read', 'edit', 'search', 'web', 'github/*', 'agent', 'github-artifacts/*', 'todo']
argument-hint: 'GitHub issue number (e.g., #12345)'
handoffs:
  - label: Start Implementation
    agent: FixIssue
    prompt: 'Fix issue #{{issue_number}} using the implementation plan'
  - label: Open Plan in Editor
    agent: agent
    prompt: 'Open Generated Files/issueReview/{{issue_number}}/overview.md and implementation-plan.md'
    showContinueOn: false
    send: true
infer: true
---
# PlanIssue Agent
You are a **PLANNING AGENT** specialized in analyzing GitHub issues and producing comprehensive planning documentation.
## Identity & Expertise
- Expert at issue triage, priority scoring, and technical analysis
- Deep knowledge of PowerToys architecture and codebase patterns
- Skilled at breaking down problems into actionable implementation steps
- You research thoroughly before planning, reaching roughly 80% confidence before drafting
## Goal
For the given **issue_number**, produce two deliverables:
1. `Generated Files/issueReview/{{issue_number}}/overview.md` — Issue analysis with scoring
2. `Generated Files/issueReview/{{issue_number}}/implementation-plan.md` — Technical implementation plan
3. `Generated Files/issueReview/{{issue_number}}/logs/**` — Logs capturing your root-cause diagnosis, research steps, and reasoning
Producing these files is the core interaction with the end user; if you cannot produce them, you have failed the task. On every run, check whether the files already exist or have been modified by the end user, without assuming you know their contents.
## Core Directive
**Follow the template in `.github/prompts/review-issue.prompt.md` exactly.** Read it first, then apply every section as specified.
- Fetch issue details: reactions, comments, linked PRs, images, logs
- Search related code and similar past fixes
- Ask clarifying questions when ambiguous
- Identify subject matter experts via git history
<stopping_rules>
You are a PLANNING agent, NOT an implementation agent.
STOP if you catch yourself:
- Writing code or editing source files outside `Generated Files/issueReview/`
- Making assumptions without researching
- Skipping the scoring/assessment phase
Plans describe what the USER or FixIssue agent will execute later.
</stopping_rules>
## References
- [Review Issue Prompt](../prompts/review-issue.prompt.md) — Template for plan structure
- [Architecture Overview](../../doc/devdocs/core/architecture.md) — System design context
- [AGENTS.md](../../AGENTS.md) — Full contributor guide
## Parameter
- **issue_number**: Extract from `#123`, `issue 123`, or plain number. If missing, ask user.

View File

@@ -1,36 +1,59 @@
---
description: 'PowerToys AI contributor guidance'
description: PowerToys AI contributor guidance.
applyTo: pullRequests
---
# PowerToys Copilot Instructions
# PowerToys - Copilot guide (concise)
Concise guidance for AI contributions. For complete details, see [AGENTS.md](../AGENTS.md).
This is the top-level guide for AI changes. Keep edits small, follow existing patterns, and cite exact paths in PRs.
## Key Rules
# Repo map (1-line per area)
- Core apps: `src/runner/**` (tray/loader), `src/settings-ui/**` (Settings app)
- Shared libs: `src/common/**`
- Modules: `src/modules/*` (one per utility; Command Palette in `src/modules/cmdpal/**`)
- Build tools/docs: `tools/**`, `doc/devdocs/**`
- Atomic PRs: one logical change, no drive-by refactors
- Add tests when changing behavior
- Keep hot paths quiet (no logging in hooks/tight loops)
# Build and test (defaults)
- Prerequisites: Visual Studio 2022 17.4+; Windows 10 1803 or later.
- Build discipline:
- One terminal per operation (build -> test). Do not switch or open new ones mid-flow.
- After making changes, `cd` to the project folder that changed (`.csproj`/`.vcxproj`).
- Use the build scripts and block synchronously in the foreground until they finish: `tools/build/build.ps1|.cmd` (builds the current folder), `build-essentials.*` (run once per brand-new build to restore missing NuGet packages).
- Treat build exit code 0 as success; any non-zero exit code is a failure. Read the errors log in the build folder (such as `build.*.*.errors.log`) and surface problems.
- Do not start tests or launch Runner until the previous step succeeded.
- Tests (fast and targeted):
- Find the test project by product code prefix (for example FancyZones, AdvancedPaste). Look for a sibling folder or one to two levels up named like `<Product>*UnitTests` or `<Product>*UITests`.
- Build the test project, wait for exit, then run only those tests via VS Test Explorer or `vstest.console.exe` with filters. Avoid `dotnet test` in this repo.
- Add or adjust tests when changing behavior; if skipped, state why (for example comment-only or string rename).
## Style Enforcement
# Pull requests (expectations)
- Atomic: one logical change; no drive-by refactors.
- Describe: problem, approach, risk, test evidence.
- List: touched paths if not obvious.
- C#: `src/.editorconfig`, StyleCop.Analyzers
- C++: `src/.clang-format`
- XAML: XamlStyler
# When to ask for clarification
- Ambiguous spec after scanning relevant docs (see below).
- Cross-module impact (shared enum or struct) not clear.
- Security, elevation, or installer changes.
## When to Ask for Clarification
# Logging (use existing stacks)
- C++ logging lives in `src/common/logger/**` (`Logger::info`, `Logger::warn`, `Logger::error`, `Logger::debug`). Keep hot paths quiet (hooks, tight loops).
- C# logging goes through `ManagedCommon.Logger` (`LogInfo`, `LogWarning`, `LogError`, `LogDebug`, `LogTrace`). Some UIs use injected `ILogger` via `LoggerInstance.Logger`.
- Ambiguous spec after scanning docs
- Cross-module impact unclear
- Security, elevation, or installer changes
# Docs to consult
- `tools/build/BUILD-GUIDELINES.md`
- `doc/devdocs/core/architecture.md`
- `doc/devdocs/core/runner.md`
- `doc/devdocs/core/settings/readme.md`
- `doc/devdocs/modules/readme.md`
## Component-Specific Instructions
# Language style rules
- Always enforce repo analyzers: root `.editorconfig` plus any `stylecop.json`.
- C# code follows StyleCop.Analyzers and Microsoft.CodeAnalysis.NetAnalyzers.
- C++ code honors `.clang-format` plus `.clang-tidy` (modernize/cppcoreguidelines/readability).
- Markdown files wrap at 80 characters and use ATX headers with fenced code blocks that include language tags.
- YAML files indent two spaces and add comments for complex settings while keeping keys clear.
- PowerShell scripts use Verb-Noun names, prefer single-quoted literals, document their parameters, and satisfy PSScriptAnalyzer.
These are auto-applied based on file location:
- [Runner & Settings UI](.github/instructions/runner-settings-ui.instructions.md)
- [Common Libraries](.github/instructions/common-libraries.instructions.md)
## Detailed Documentation
- [Architecture](../doc/devdocs/core/architecture.md)
- [Coding Style](../doc/devdocs/development/style.md)
# Done checklist (self review before finishing)
- Build clean? Tests updated and passing? No unintended formatting changes? Any new dependencies documented? Skips explained?

View File

@@ -1,261 +0,0 @@
---
description: 'Guidelines for creating high-quality Agent Skills for GitHub Copilot'
applyTo: '**/.github/skills/**/SKILL.md, **/.claude/skills/**/SKILL.md'
---
# Agent Skills File Guidelines
Instructions for creating effective and portable Agent Skills that enhance GitHub Copilot with specialized capabilities, workflows, and bundled resources.
## What Are Agent Skills?
Agent Skills are self-contained folders with instructions and bundled resources that teach AI agents specialized capabilities. Unlike custom instructions (which define coding standards), skills enable task-specific workflows that can include scripts, examples, templates, and reference data.
Key characteristics:
- **Portable**: Works across VS Code, Copilot CLI, and Copilot coding agent
- **Progressive loading**: Only loaded when relevant to the user's request
- **Resource-bundled**: Can include scripts, templates, examples alongside instructions
- **On-demand**: Activated automatically based on prompt relevance
## Directory Structure
Skills are stored in specific locations:
| Location | Scope | Recommendation |
|----------|-------|----------------|
| `.github/skills/<skill-name>/` | Project/repository | Recommended for project skills |
| `.claude/skills/<skill-name>/` | Project/repository | Legacy, for backward compatibility |
| `~/.github/skills/<skill-name>/` | Personal (user-wide) | Recommended for personal skills |
| `~/.claude/skills/<skill-name>/` | Personal (user-wide) | Legacy, for backward compatibility |
Each skill **must** have its own subdirectory containing at minimum a `SKILL.md` file.
## Required SKILL.md Format
### Frontmatter (Required)
```yaml
---
name: webapp-testing
description: Toolkit for testing local web applications using Playwright. Use when asked to verify frontend functionality, debug UI behavior, capture browser screenshots, check for visual regressions, or view browser console logs. Supports Chrome, Firefox, and WebKit browsers.
license: Complete terms in LICENSE.txt
---
```
| Field | Required | Constraints |
|-------|----------|-------------|
| `name` | Yes | Lowercase, hyphens for spaces, max 64 characters (e.g., `webapp-testing`) |
| `description` | Yes | Clear description of capabilities AND use cases, max 1024 characters |
| `license` | No | Reference to LICENSE.txt (e.g., `Complete terms in LICENSE.txt`) or SPDX identifier |
### Description Best Practices
**CRITICAL**: The `description` field is the PRIMARY mechanism for automatic skill discovery. Copilot reads ONLY the `name` and `description` to decide whether to load a skill. If your description is vague, the skill will never be activated.
**What to include in description:**
1. **WHAT** the skill does (capabilities)
2. **WHEN** to use it (specific triggers, scenarios, file types, or user requests)
3. **Keywords** that users might mention in their prompts
**Good description:**
```yaml
description: Toolkit for testing local web applications using Playwright. Use when asked to verify frontend functionality, debug UI behavior, capture browser screenshots, check for visual regressions, or view browser console logs. Supports Chrome, Firefox, and WebKit browsers.
```
**Poor description:**
```yaml
description: Web testing helpers
```
The poor description fails because:
- No specific triggers (when should Copilot load this?)
- No keywords (what user prompts would match?)
- No capabilities (what can it actually do?)
### Body Content
The body contains detailed instructions that Copilot loads AFTER the skill is activated. Recommended sections:
| Section | Purpose |
|---------|---------|
| `# Title` | Brief overview of what this skill enables |
| `## When to Use This Skill` | List of scenarios (reinforces description triggers) |
| `## Prerequisites` | Required tools, dependencies, environment setup |
| `## Step-by-Step Workflows` | Numbered steps for common tasks |
| `## Troubleshooting` | Common issues and solutions table |
| `## References` | Links to bundled docs or external resources |
## Bundling Resources
Skills can include additional files that Copilot accesses on-demand:
### Supported Resource Types
| Folder | Purpose | Loaded into Context? | Example Files |
|--------|---------|---------------------|---------------|
| `scripts/` | Executable automation that performs specific operations | When executed | `helper.py`, `validate.sh`, `build.ts` |
| `references/` | Documentation the AI agent reads to inform decisions | Yes, when referenced | `api_reference.md`, `schema.md`, `workflow_guide.md` |
| `assets/` | **Static files used AS-IS** in output (not modified by the AI agent) | No | `logo.png`, `brand-template.pptx`, `custom-font.ttf` |
| `templates/` | **Starter code/scaffolds that the AI agent MODIFIES** and builds upon | Yes, when referenced | `viewer.html` (insert algorithm), `hello-world/` (extend) |
### Directory Structure Example
```
.github/skills/my-skill/
├── SKILL.md                 # Required: Main instructions
├── LICENSE.txt              # Recommended: License terms (Apache 2.0 typical)
├── scripts/                 # Optional: Executable automation
│   ├── helper.py            # Python script
│   └── helper.ps1           # PowerShell script
├── references/              # Optional: Documentation loaded into context
│   ├── api_reference.md
│   ├── step1-setup.md       # Detailed workflow (>3 steps)
│   └── step2-deployment.md
├── assets/                  # Optional: Static files used AS-IS in output
│   ├── baseline.png         # Reference image for comparison
│   └── report-template.html
└── templates/               # Optional: Starter code the AI agent modifies
    ├── scaffold.py          # Code scaffold the AI agent customizes
    └── config.template      # Config template the AI agent fills in
```
> **LICENSE.txt**: When creating a skill, download the Apache 2.0 license text from https://www.apache.org/licenses/LICENSE-2.0.txt and save as `LICENSE.txt`. Update the copyright year and owner in the appendix section.
### Assets vs Templates: Key Distinction
**Assets** are static resources **consumed unchanged** in the output:
- A `logo.png` that gets embedded into a generated document
- A `report-template.html` copied as output format
- A `custom-font.ttf` applied to text rendering
**Templates** are starter code/scaffolds that **the AI agent actively modifies**:
- A `scaffold.py` where the AI agent inserts logic
- A `config.template` where the AI agent fills in values based on user requirements
- A `hello-world/` project directory that the AI agent extends with new features
**Rule of thumb**: If the AI agent reads and builds upon the file content → `templates/`. If the file is used as-is in output → `assets/`.
### Referencing Resources in SKILL.md
Use relative paths to reference files within the skill directory:
```markdown
## Available Scripts
Run the [helper script](./scripts/helper.py) to automate common tasks.
See [API reference](./references/api_reference.md) for detailed documentation.
Use the [scaffold](./templates/scaffold.py) as a starting point.
```
## Progressive Loading Architecture
Skills use three-level loading for efficiency:
| Level | What Loads | When |
|-------|------------|------|
| 1. Discovery | `name` and `description` only | Always (lightweight metadata) |
| 2. Instructions | Full `SKILL.md` body | When request matches description |
| 3. Resources | Scripts, examples, docs | Only when Copilot references them |
This means:
- Install many skills without consuming context
- Only relevant content loads per task
- Resources don't load until explicitly needed
## Content Guidelines
### Writing Style
- Use imperative mood: "Run", "Create", "Configure" (not "You should run")
- Be specific and actionable
- Include exact commands with parameters
- Show expected outputs where helpful
- Keep sections focused and scannable
### Script Requirements
When including scripts, prefer cross-platform languages:
| Language | Use Case |
|----------|----------|
| Python | Complex automation, data processing |
| pwsh | PowerShell Core scripting |
| Node.js | JavaScript-based tooling |
| Bash/Shell | Simple automation tasks |
Best practices:
- Include help/usage documentation (`--help` flag)
- Handle errors gracefully with clear messages
- Avoid storing credentials or secrets
- Use relative paths where possible
### When to Bundle Scripts
Include scripts in your skill when:
- The same code would be rewritten repeatedly by the agent
- Deterministic reliability is critical (e.g., file manipulation, API calls)
- Complex logic benefits from being pre-tested rather than generated each time
- The operation has a self-contained purpose that can evolve independently
- Testability matters — scripts can be unit tested and validated
- Predictable behavior is preferred over dynamic generation
Scripts enable evolution: even simple operations benefit from being implemented as scripts when they may grow in complexity, need consistent behavior across invocations, or require future extensibility.
### Security Considerations
- Scripts rely on existing credential helpers (no credential storage)
- Include `--force` flags only for destructive operations
- Warn users before irreversible actions
- Document any network operations or external calls
## Common Patterns
### Parameter Table Pattern
Document parameters clearly:
```markdown
| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| `--input` | Yes | - | Input file or URL to process |
| `--action` | Yes | - | Action to perform |
| `--verbose` | No | `false` | Enable verbose output |
```
## Validation Checklist
Before publishing a skill:
- [ ] `SKILL.md` has valid frontmatter with `name` and `description`
- [ ] `name` is lowercase with hyphens, ≤64 characters
- [ ] `description` clearly states **WHAT** it does, **WHEN** to use it, and relevant **KEYWORDS**
- [ ] Body includes when to use, prerequisites, and step-by-step workflows
- [ ] SKILL.md body kept under 500 lines (split large content into `references/` folder)
- [ ] Large workflows (>5 steps) split into `references/` folder with clear links from SKILL.md
- [ ] Scripts include help documentation and error handling
- [ ] Relative paths used for all resource references
- [ ] No hardcoded credentials or secrets
## Workflow Execution Pattern
When executing multi-step workflows, create a TODO list where each step references the relevant documentation:
```markdown
## TODO
- [ ] Step 1: Configure environment - see [workflow-setup.md](./references/workflow-setup.md#environment)
- [ ] Step 2: Build project - see [workflow-setup.md](./references/workflow-setup.md#build)
- [ ] Step 3: Deploy to staging - see [workflow-deployment.md](./references/workflow-deployment.md#staging)
- [ ] Step 4: Run validation - see [workflow-deployment.md](./references/workflow-deployment.md#validation)
- [ ] Step 5: Deploy to production - see [workflow-deployment.md](./references/workflow-deployment.md#production)
```
This ensures traceability and allows resuming workflows if interrupted.
## Related Resources
- [Agent Skills Specification](https://agentskills.io/)
- [VS Code Agent Skills Documentation](https://code.visualstudio.com/docs/copilot/customization/agent-skills)
- [Reference Skills Repository](https://github.com/anthropics/skills)
- [Awesome Copilot Skills](https://github.com/github/awesome-copilot/blob/main/docs/README.skills.md)

View File

@@ -1,791 +0,0 @@
---
description: 'Guidelines for creating custom agent files for GitHub Copilot'
applyTo: '**/*.agent.md'
---
# Custom Agent File Guidelines
Instructions for creating effective and maintainable custom agent files that provide specialized expertise for specific development tasks in GitHub Copilot.
## Project Context
- Target audience: Developers creating custom agents for GitHub Copilot
- File format: Markdown with YAML frontmatter
- File naming convention: lowercase with hyphens (e.g., `test-specialist.agent.md`)
- Location: `.github/agents/` directory (repository-level) or `agents/` directory (organization/enterprise-level)
- Purpose: Define specialized agents with tailored expertise, tools, and instructions for specific tasks
- Official documentation: https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/create-custom-agents
## Required Frontmatter
Every agent file must include YAML frontmatter with the following fields:
```yaml
---
description: 'Brief description of the agent purpose and capabilities'
name: 'Agent Display Name'
tools: ['read', 'edit', 'search']
model: 'Claude Sonnet 4.5'
target: 'vscode'
infer: true
---
```
### Core Frontmatter Properties
#### **description** (REQUIRED)
- Single-quoted string, clearly stating the agent's purpose and domain expertise
- Should be concise (50-150 characters) and actionable
- Example: `'Focuses on test coverage, quality, and testing best practices'`
#### **name** (OPTIONAL)
- Display name for the agent in the UI
- If omitted, defaults to filename (without `.md` or `.agent.md`)
- Use title case and be descriptive
- Example: `'Testing Specialist'`
#### **tools** (OPTIONAL)
- List of tool names or aliases the agent can use
- Supports comma-separated string or YAML array format
- If omitted, agent has access to all available tools
- See "Tool Configuration" section below for details
#### **model** (STRONGLY RECOMMENDED)
- Specifies which AI model the agent should use
- Supported in VS Code, JetBrains IDEs, Eclipse, and Xcode
- Example: `'Claude Sonnet 4.5'`, `'gpt-4'`, `'gpt-4o'`
- Choose based on agent complexity and required capabilities
#### **target** (OPTIONAL)
- Specifies target environment: `'vscode'` or `'github-copilot'`
- If omitted, agent is available in both environments
- Use when agent has environment-specific features
#### **infer** (OPTIONAL)
- Boolean controlling whether Copilot can automatically use this agent based on context
- Default: `true` if omitted
- Set to `false` to require manual agent selection
#### **metadata** (OPTIONAL, GitHub.com only)
- Object with name-value pairs for agent annotation
- Example: `metadata: { category: 'testing', version: '1.0' }`
- Not supported in VS Code
#### **mcp-servers** (OPTIONAL, Organization/Enterprise only)
- Configure MCP servers available only to this agent
- Only supported for organization/enterprise level agents
- See "MCP Server Configuration" section below
## Tool Configuration
### Tool Specification Strategies
**Enable all tools** (default):
```yaml
# Omit tools property entirely, or use:
tools: ['*']
```
**Enable specific tools**:
```yaml
tools: ['read', 'edit', 'search', 'execute']
```
**Enable MCP server tools**:
```yaml
tools: ['read', 'edit', 'github/*', 'playwright/navigate']
```
**Disable all tools**:
```yaml
tools: []
```
### Standard Tool Aliases
All aliases are case-insensitive:
| Alias | Alternative Names | Category | Description |
|-------|------------------|----------|-------------|
| `execute` | shell, Bash, powershell | Shell execution | Execute commands in appropriate shell |
| `read` | Read, NotebookRead, view | File reading | Read file contents |
| `edit` | Edit, MultiEdit, Write, NotebookEdit | File editing | Edit and modify files |
| `search` | Grep, Glob, search | Code search | Search for files or text in files |
| `agent` | custom-agent, Task | Agent invocation | Invoke other custom agents |
| `web` | WebSearch, WebFetch | Web access | Fetch web content and search |
| `todo` | TodoWrite | Task management | Create and manage task lists (VS Code only) |
### Built-in MCP Server Tools
**GitHub MCP Server**:
```yaml
tools: ['github/*'] # All GitHub tools
tools: ['github/get_file_contents', 'github/search_repositories'] # Specific tools
```
- All read-only tools available by default
- Token scoped to source repository
**Playwright MCP Server**:
```yaml
tools: ['playwright/*'] # All Playwright tools
tools: ['playwright/navigate', 'playwright/screenshot'] # Specific tools
```
- Configured to access localhost only
- Useful for browser automation and testing
### Tool Selection Best Practices
- **Principle of Least Privilege**: Only enable tools necessary for the agent's purpose
- **Security**: Limit `execute` access unless explicitly required
- **Focus**: Fewer tools = clearer agent purpose and better performance
- **Documentation**: Comment why specific tools are required for complex configurations
## Sub-Agent Invocation (Agent Orchestration)
Agents can invoke other agents using `runSubagent` to orchestrate multi-step workflows.
### How It Works
Include `agent` in tools list to enable sub-agent invocation:
```yaml
tools: ['read', 'edit', 'search', 'agent']
```
Then invoke other agents with `runSubagent`:
```javascript
const result = await runSubagent({
  description: 'What this step does',
  prompt: `You are the [Specialist] specialist.
Context:
- Parameter: ${parameterValue}
- Input: ${inputPath}
- Output: ${outputPath}
Task:
1. Do the specific work
2. Write results to output location
3. Return summary of completion`
});
```
### Basic Pattern
Structure each sub-agent call with:
1. **description**: Clear one-line purpose of the sub-agent invocation
2. **prompt**: Detailed instructions with substituted variables
The prompt should include:
- Who the sub-agent is (specialist role)
- What context it needs (parameters, paths)
- What to do (concrete tasks)
- Where to write output
- What to return (summary)
### Example: Multi-Step Processing
```javascript
// Step 1: Process data
const processing = await runSubagent({
  description: 'Transform raw input data',
  prompt: `You are the Data Processor specialist.
Project: ${projectName}
Input: ${basePath}/raw/
Output: ${basePath}/processed/
Task:
1. Read all files from input directory
2. Apply transformations
3. Write processed files to output
4. Create summary: ${basePath}/processed/summary.md
Return: Number of files processed and any issues found`
});

// Step 2: Analyze (depends on Step 1)
const analysis = await runSubagent({
  description: 'Analyze processed data',
  prompt: `You are the Data Analyst specialist.
Project: ${projectName}
Input: ${basePath}/processed/
Output: ${basePath}/analysis/
Task:
1. Read processed files from input
2. Generate analysis report
3. Write to: ${basePath}/analysis/report.md
Return: Key findings and identified patterns`
});
```
### Key Points
- **Pass variables in prompts**: Use `${variableName}` for all dynamic values
- **Keep prompts focused**: Clear, specific tasks for each sub-agent
- **Return summaries**: Each sub-agent should report what it accomplished
- **Sequential execution**: Use `await` to maintain order when steps depend on each other
- **Error handling**: Check results before proceeding to dependent steps
### ⚠️ Tool Availability Requirement
**Critical**: If a sub-agent requires specific tools (e.g., `edit`, `execute`, `search`), the orchestrator must include those tools in its own `tools` list. Sub-agents cannot access tools that aren't available to their parent orchestrator.
**Example**:
```yaml
# If your sub-agents need to edit files, execute commands, or search code
tools: ['read', 'edit', 'search', 'execute', 'agent']
```
The orchestrator's tool permissions act as a ceiling for all invoked sub-agents. Plan your tool list carefully to ensure all sub-agents have the tools they need.
### ⚠️ Important Limitation
**Sub-agent orchestration is NOT suitable for large-scale data processing.** Avoid using `runSubagent` when:
- Processing hundreds or thousands of files
- Handling large datasets
- Performing bulk transformations on big codebases
- Orchestrating more than 5-10 sequential steps
Each sub-agent call adds latency and context overhead. For high-volume processing, implement logic directly in a single agent instead. Use orchestration only for coordinating specialized tasks on focused, manageable datasets.
## Agent Prompt Structure
The markdown content below the frontmatter defines the agent's behavior, expertise, and instructions. Well-structured prompts typically include:
1. **Agent Identity and Role**: Who the agent is and its primary role
2. **Core Responsibilities**: What specific tasks the agent performs
3. **Approach and Methodology**: How the agent works to accomplish tasks
4. **Guidelines and Constraints**: What to do/avoid and quality standards
5. **Output Expectations**: Expected output format and quality
### Prompt Writing Best Practices
- **Be Specific and Direct**: Use imperative mood ("Analyze", "Generate"); avoid vague terms
- **Define Boundaries**: Clearly state scope limits and constraints
- **Include Context**: Explain domain expertise and reference relevant frameworks
- **Focus on Behavior**: Describe how the agent should think and work
- **Use Structured Format**: Headers, bullets, and lists make prompts scannable
## Variable Definition and Extraction
Agents can define dynamic parameters to extract values from user input and use them throughout the agent's behavior and sub-agent communications. This enables flexible, context-aware agents that adapt to user-provided data.
### When to Use Variables
**Use variables when**:
- Agent behavior depends on user input
- Need to pass dynamic values to sub-agents
- Want to make agents reusable across different contexts
- Require parameterized workflows
- Need to track or reference user-provided context
**Examples**:
- Extract project name from user prompt
- Capture certification name for pipeline processing
- Identify file paths or directories
- Extract configuration options
- Parse feature names or module identifiers
### Variable Declaration Pattern
Define variables section early in the agent prompt to document expected parameters:
```markdown
# Agent Name
## Dynamic Parameters
- **Parameter Name**: Description and usage
- **Another Parameter**: How it's extracted and used
## Your Mission
Process [PARAMETER_NAME] to accomplish [task].
```
### Variable Extraction Methods
#### 1. **Explicit User Input**
Ask the user to provide the variable if not detected in the prompt:
```markdown
## Your Mission
Process the project by analyzing your codebase.
### Step 1: Identify Project
If no project name is provided, **ASK THE USER** for:
- Project name or identifier
- Base path or directory location
- Configuration type (if applicable)
Use this information to contextualize all subsequent tasks.
```
#### 2. **Implicit Extraction from Prompt**
Automatically extract variables from the user's natural language input:
```javascript
// Example: Extract certification name from user input
const userInput = "Process My Certification";
// Extract key information
const certificationName = extractCertificationName(userInput);
// Result: "My Certification"
const basePath = `certifications/${certificationName}`;
// Result: "certifications/My Certification"
```
#### 3. **Contextual Variable Resolution**
Use file context or workspace information to derive variables:
```markdown
## Variable Resolution Strategy
1. **From User Prompt**: First, look for explicit mentions in user input
2. **From File Context**: Check current file name or path
3. **From Workspace**: Use workspace folder or active project
4. **From Settings**: Reference configuration files
5. **Ask User**: If all else fails, request missing information
```
### Using Variables in Agent Prompts
#### Variable Substitution in Instructions
Use template variables in agent prompts to make them dynamic:
```markdown
# Agent Name
## Dynamic Parameters
- **Project Name**: ${projectName}
- **Base Path**: ${basePath}
- **Output Directory**: ${outputDir}
## Your Mission
Process the **${projectName}** project located at `${basePath}`.
## Process Steps
1. Read input from: `${basePath}/input/`
2. Process files according to project configuration
3. Write results to: `${outputDir}/`
4. Generate summary report
## Quality Standards
- Maintain project-specific coding standards for **${projectName}**
- Follow directory structure: `${basePath}/[structure]`
```
#### Passing Variables to Sub-Agents
When invoking a sub-agent, pass all context through template variables in the prompt:
```javascript
// Extract and prepare variables
const basePath = `projects/${projectName}`;
const inputPath = `${basePath}/src/`;
const outputPath = `${basePath}/docs/`;
// Pass to sub-agent with all variables substituted
const result = await runSubagent({
  description: 'Generate project documentation',
  prompt: `You are the Documentation specialist.
Project: ${projectName}
Input: ${inputPath}
Output: ${outputPath}
Task:
1. Read source files from ${inputPath}
2. Generate comprehensive documentation
3. Write to ${outputPath}/index.md
4. Include code examples and usage guides
Return: Summary of documentation generated (file count, word count)`
});
```
The sub-agent receives all necessary context embedded in the prompt. Variables are resolved before sending the prompt, so the sub-agent works with concrete paths and values, not variable placeholders.
### Real-World Example: Code Review Orchestrator
Example of a simple orchestrator that validates code through multiple specialized agents:
```javascript
async function reviewCodePipeline(repositoryName, prNumber) {
  const basePath = `projects/${repositoryName}/pr-${prNumber}`;

  // Step 1: Security Review
  const security = await runSubagent({
    description: 'Scan for security vulnerabilities',
    prompt: `You are the Security Reviewer specialist.
Repository: ${repositoryName}
PR: ${prNumber}
Code: ${basePath}/changes/
Task:
1. Scan code for OWASP Top 10 vulnerabilities
2. Check for injection attacks, auth flaws
3. Write findings to ${basePath}/security-review.md
Return: List of critical, high, and medium issues found`
  });

  // Step 2: Test Coverage Check
  const coverage = await runSubagent({
    description: 'Verify test coverage for changes',
    prompt: `You are the Test Coverage specialist.
Repository: ${repositoryName}
PR: ${prNumber}
Changes: ${basePath}/changes/
Task:
1. Analyze code coverage for modified files
2. Identify untested critical paths
3. Write report to ${basePath}/coverage-report.md
Return: Current coverage percentage and gaps`
  });

  // Step 3: Aggregate Results
  const finalReport = await runSubagent({
    description: 'Compile all review findings',
    prompt: `You are the Review Aggregator specialist.
Repository: ${repositoryName}
Reports: ${basePath}/*.md
Task:
1. Read all review reports from ${basePath}/
2. Synthesize findings into single report
3. Determine overall verdict (APPROVE/NEEDS_FIXES/BLOCK)
4. Write to ${basePath}/final-review.md
Return: Final verdict and executive summary`
  });

  return finalReport;
}
```
This pattern applies to any orchestration scenario: extract variables, call sub-agents with clear context, await results.
### Variable Best Practices
#### 1. **Clear Documentation**
Always document what variables are expected:
```markdown
## Required Variables
- **projectName**: The name of the project (string, required)
- **basePath**: Root directory for project files (path, required)
## Optional Variables
- **mode**: Processing mode - quick/standard/detailed (enum, default: standard)
- **outputFormat**: Output format - markdown/json/html (enum, default: markdown)
## Derived Variables
- **outputDir**: Automatically set to ${basePath}/output
- **logFile**: Automatically set to ${basePath}/.log.md
```
#### 2. **Consistent Naming**
Use consistent variable naming conventions:
```javascript
// Good: Clear, descriptive naming
const variables = {
  projectName,        // What project to work on
  basePath,           // Where project files are located
  outputDirectory,    // Where to save results
  processingMode,     // How to process (detail level)
  configurationPath   // Where config files are
};

// Avoid: Ambiguous or inconsistent
const bad_variables = {
  name,     // Too generic
  path,     // Unclear which path
  mode,     // Too short
  config    // Too vague
};
```
#### 3. **Validation and Constraints**
Document valid values and constraints:
```markdown
## Variable Constraints
**projectName**:
- Type: string (alphanumeric, hyphens, underscores allowed)
- Length: 1-100 characters
- Required: yes
- Pattern: `/^[a-zA-Z0-9_-]+$/`
**processingMode**:
- Type: enum
- Valid values: "quick" (< 5min), "standard" (5-15min), "detailed" (15+ min)
- Default: "standard"
- Required: no
```
## MCP Server Configuration (Organization/Enterprise Only)
MCP servers extend agent capabilities with additional tools. Only supported for organization and enterprise-level agents.
### Configuration Format
```yaml
---
name: my-custom-agent
description: 'Agent with MCP integration'
tools: ['read', 'edit', 'custom-mcp/tool-1']
mcp-servers:
  custom-mcp:
    type: 'local'
    command: 'some-command'
    args: ['--arg1', '--arg2']
    tools: ["*"]
    env:
      ENV_VAR_NAME: ${{ secrets.API_KEY }}
---
```
### MCP Server Properties
- **type**: Server type (`'local'` or `'stdio'`)
- **command**: Command to start the MCP server
- **args**: Array of command arguments
- **tools**: Tools to enable from this server (`["*"]` for all)
- **env**: Environment variables (supports secrets)
### Environment Variables and Secrets
Secrets must be configured in repository settings under "copilot" environment.
**Supported syntax**:
```yaml
env:
  # Environment variable only
  VAR_NAME: COPILOT_MCP_ENV_VAR_VALUE
  # Variable with header
  VAR_NAME: $COPILOT_MCP_ENV_VAR_VALUE
  VAR_NAME: ${COPILOT_MCP_ENV_VAR_VALUE}
  # GitHub Actions-style (YAML only)
  VAR_NAME: ${{ secrets.COPILOT_MCP_ENV_VAR_VALUE }}
  VAR_NAME: ${{ var.COPILOT_MCP_ENV_VAR_VALUE }}
```
## File Organization and Naming
### Repository-Level Agents
- Location: `.github/agents/`
- Scope: Available only in the specific repository
- Access: Uses repository-configured MCP servers
### Organization/Enterprise-Level Agents
- Location: `.github-private/agents/` (then move to `agents/` root)
- Scope: Available across all repositories in org/enterprise
- Access: Can configure dedicated MCP servers
### Naming Conventions
- Use lowercase with hyphens: `test-specialist.agent.md`
- Name should reflect agent purpose
- Filename becomes default agent name (if `name` not specified)
- Allowed characters: `.`, `-`, `_`, `a-z`, `A-Z`, `0-9`
## Agent Processing and Behavior
### Versioning
- Based on Git commit SHAs for the agent file
- Create branches/tags for different agent versions
- Instantiated using latest version for repository/branch
- PR interactions use same agent version for consistency
### Name Conflicts
Priority (highest to lowest):
1. Repository-level agent
2. Organization-level agent
3. Enterprise-level agent
Lower-level configurations override higher-level ones with the same name.
### Tool Processing
- `tools` list filters available tools (built-in and MCP)
- No tools specified = all tools enabled
- Empty list (`[]`) = all tools disabled
- Specific list = only those tools enabled
- Unrecognized tool names are ignored (allows environment-specific tools)
### MCP Server Processing Order
1. Out-of-the-box MCP servers (e.g., GitHub MCP)
2. Custom agent MCP configuration (org/enterprise only)
3. Repository-level MCP configurations
Each level can override settings from previous levels.
## Agent Creation Checklist
### Frontmatter
- [ ] `description` field present and descriptive (50-150 chars)
- [ ] `description` wrapped in single quotes
- [ ] `name` specified (optional but recommended)
- [ ] `tools` configured appropriately (or intentionally omitted)
- [ ] `model` specified for optimal performance
- [ ] `target` set if environment-specific
- [ ] `infer` set to `false` if manual selection required
### Prompt Content
- [ ] Clear agent identity and role defined
- [ ] Core responsibilities listed explicitly
- [ ] Approach and methodology explained
- [ ] Guidelines and constraints specified
- [ ] Output expectations documented
- [ ] Examples provided where helpful
- [ ] Instructions are specific and actionable
- [ ] Scope and boundaries clearly defined
- [ ] Total content under 30,000 characters
### File Structure
- [ ] Filename follows lowercase-with-hyphens convention
- [ ] File placed in correct directory (`.github/agents/` or `agents/`)
- [ ] Filename uses only allowed characters
- [ ] File extension is `.agent.md`
### Quality Assurance
- [ ] Agent purpose is unique and not duplicative
- [ ] Tools are minimal and necessary
- [ ] Instructions are clear and unambiguous
- [ ] Agent has been tested with representative tasks
- [ ] Documentation references are current
- [ ] Security considerations addressed (if applicable)
## Common Agent Patterns
### Testing Specialist
**Purpose**: Focus on test coverage and quality
**Tools**: All tools (for comprehensive test creation)
**Approach**: Analyze, identify gaps, write tests, avoid production code changes
### Implementation Planner
**Purpose**: Create detailed technical plans and specifications
**Tools**: Limited to `['read', 'search', 'edit']`
**Approach**: Analyze requirements, create documentation, avoid implementation
### Code Reviewer
**Purpose**: Review code quality and provide feedback
**Tools**: `['read', 'search']` only
**Approach**: Analyze, suggest improvements, no direct modifications
### Refactoring Specialist
**Purpose**: Improve code structure and maintainability
**Tools**: `['read', 'search', 'edit']`
**Approach**: Analyze patterns, propose refactorings, implement safely
### Security Auditor
**Purpose**: Identify security issues and vulnerabilities
**Tools**: `['read', 'search', 'web']`
**Approach**: Scan code, check against OWASP, report findings
## Common Mistakes to Avoid
### Frontmatter Errors
- ❌ Missing `description` field
- ❌ Description not wrapped in quotes
- ❌ Invalid tool names without checking documentation
- ❌ Incorrect YAML syntax (indentation, quotes)
### Tool Configuration Issues
- ❌ Granting excessive tool access unnecessarily
- ❌ Missing required tools for agent's purpose
- ❌ Not using tool aliases consistently
- ❌ Forgetting MCP server namespace (`server-name/tool`)
### Prompt Content Problems
- ❌ Vague, ambiguous instructions
- ❌ Conflicting or contradictory guidelines
- ❌ Lack of clear scope definition
- ❌ Missing output expectations
- ❌ Overly verbose instructions (exceeding character limits)
- ❌ No examples or context for complex tasks
### Organizational Issues
- ❌ Filename doesn't reflect agent purpose
- ❌ Wrong directory (confusing repo vs org level)
- ❌ Using spaces or special characters in filename
- ❌ Duplicate agent names causing conflicts
## Testing and Validation
### Manual Testing
1. Create the agent file with proper frontmatter
2. Reload VS Code or refresh GitHub.com
3. Select the agent from the dropdown in Copilot Chat
4. Test with representative user queries
5. Verify tool access works as expected
6. Confirm output meets expectations
### Integration Testing
- Test agent with different file types in scope
- Verify MCP server connectivity (if configured)
- Check agent behavior with missing context
- Test error handling and edge cases
- Validate agent switching and handoffs
### Quality Checks
- Run through agent creation checklist
- Review against common mistakes list
- Compare with example agents in repository
- Get peer review for complex agents
- Document any special configuration needs
## Additional Resources
### Official Documentation
- [Creating Custom Agents](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/create-custom-agents)
- [Custom Agents Configuration](https://docs.github.com/en/copilot/reference/custom-agents-configuration)
- [Custom Agents in VS Code](https://code.visualstudio.com/docs/copilot/customization/custom-agents)
- [MCP Integration](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/extend-coding-agent-with-mcp)
### Community Resources
- [Awesome Copilot Agents Collection](https://github.com/github/awesome-copilot/tree/main/agents)
- [Customization Library Examples](https://docs.github.com/en/copilot/tutorials/customization-library/custom-agents)
- [Your First Custom Agent Tutorial](https://docs.github.com/en/copilot/tutorials/customization-library/custom-agents/your-first-custom-agent)
### Related Files
- [Prompt Files Guidelines](./prompt.instructions.md) - For creating prompt files
- [Instructions Guidelines](./instructions.instructions.md) - For creating instruction files
## Version Compatibility Notes
### GitHub.com (Coding Agent)
- ✅ Fully supports all standard frontmatter properties
- ✅ Repository and org/enterprise level agents
- ✅ MCP server configuration (org/enterprise)
- ❌ Does not support `model`, `argument-hint`, `handoffs` properties
### VS Code / JetBrains / Eclipse / Xcode
- ✅ Supports `model` property for AI model selection
- ✅ Supports `argument-hint` and `handoffs` properties
- ✅ User profile and workspace-level agents
- ❌ Cannot configure MCP servers at repository level
- ⚠️ Some properties may behave differently
When creating agents for multiple environments, focus on common properties and test in all target environments. Use `target` property to create environment-specific agents when necessary.
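For the VS Code-only properties noted above (`model`, `argument-hint`, `handoffs`), a minimal frontmatter sketch modeled on the PlanIssue agent earlier in this diff; the agent names and prompt are illustrative:
```yaml
---
description: 'Plans work and hands off to an implementation agent'
name: 'Planner'
model: 'Claude Sonnet 4.5'
argument-hint: 'GitHub issue number (e.g., #12345)'
handoffs:
  - label: Start Implementation
    agent: FixIssue       # target agent receiving the handoff
    prompt: 'Fix issue #{{issue_number}} using the implementation plan'
---
```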

View File

@@ -1,187 +0,0 @@
---
description: 'Best practices for Azure DevOps Pipeline YAML files'
applyTo: '**/azure-pipelines.yml, **/azure-pipelines*.yml, **/*.pipeline.yml'
---
# Azure DevOps Pipeline YAML Best Practices
Guidelines for creating maintainable, secure, and efficient Azure DevOps pipelines in PowerToys.
## General Guidelines
- Use YAML syntax consistently with proper indentation (2 spaces)
- Always include meaningful names and display names for pipelines, stages, jobs, and steps
- Implement proper error handling and conditional execution
- Use variables and parameters to make pipelines reusable and maintainable
- Follow the principle of least privilege for service connections and permissions
- Include comprehensive logging and diagnostics for troubleshooting
## Pipeline Structure
- Organize complex pipelines using stages for better visualization and control
- Use jobs to group related steps and enable parallel execution when possible
- Implement proper dependencies between stages and jobs
- Use templates for reusable pipeline components
- Keep pipeline files focused and modular - split large pipelines into multiple files
## Build Best Practices
- Use specific agent pool versions and VM images for consistency
- Cache dependencies (npm, NuGet, Maven, etc.) to improve build performance
- Implement proper artifact management with meaningful names and retention policies
- Use build variables for version numbers and build metadata
- Include code quality gates (lint checks, testing, security scans)
- Ensure builds are reproducible and environment-independent
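A minimal sketch of the dependency-caching guidance above for NuGet restores; the cache key, lock-file pattern, and package path are illustrative and should match your repository layout:
```yaml
variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages   # restore into a cacheable folder

steps:
  - task: Cache@2
    displayName: 'Cache NuGet packages'
    inputs:
      key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
      restoreKeys: |
        nuget | "$(Agent.OS)"
      path: $(NUGET_PACKAGES)
  - task: DotNetCoreCLI@2
    displayName: 'Restore dependencies'
    inputs:
      command: 'restore'
      projects: '**/*.csproj'
```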
## Testing Integration
- Run unit tests as part of the build process
- Publish test results in standard formats (JUnit, VSTest, etc.)
- Include code coverage reporting and quality gates
- Implement integration and end-to-end tests in appropriate stages
- Use test impact analysis when available to optimize test execution
- Fail fast on test failures to provide quick feedback
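A minimal sketch of publishing test results and coverage in standard formats; the test script name and output patterns are illustrative:
```yaml
steps:
  - script: ./run-tests.sh                 # illustrative test step producing JUnit and Cobertura output
    displayName: 'Run unit tests'
  - task: PublishTestResults@2
    displayName: 'Publish test results'
    condition: succeededOrFailed()         # report results even when tests fail
    inputs:
      testResultsFormat: 'JUnit'
      testResultsFiles: '**/TEST-*.xml'
  - task: PublishCodeCoverageResults@2
    displayName: 'Publish code coverage'
    inputs:
      summaryFileLocation: '**/coverage.cobertura.xml'
```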
## Security Considerations
- Use Azure Key Vault for sensitive configuration and secrets
- Implement proper secret management with variable groups
- Use service connections with minimal required permissions
- Enable security scans (dependency vulnerabilities, static analysis)
- Implement approval gates for production deployments
- Use managed identities when possible instead of service principals
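A minimal sketch of pulling secrets from Azure Key Vault at runtime, reusing the service connection name from the example below; the vault and secret names are placeholders:
```yaml
steps:
  - task: AzureKeyVault@2
    displayName: 'Fetch deployment secrets'
    inputs:
      azureSubscription: 'staging-service-connection'   # service connection with least-privilege access
      KeyVaultName: 'myapp-staging-kv'                   # placeholder vault name
      SecretsFilter: 'DbConnectionString,ApiKey'         # fetch only the secrets this job needs
      RunAsPreJob: false
```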
## Deployment Strategies
- Implement proper environment promotion (dev → staging → production)
- Use deployment jobs with proper environment targeting
- Implement blue-green or canary deployment strategies when appropriate
- Include rollback mechanisms and health checks
- Use infrastructure as code (ARM, Bicep, Terraform) for consistent deployments
- Implement proper configuration management per environment
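A minimal sketch of a canary deployment job with a rollback hook; the increments, environment name, and steps are illustrative:
```yaml
jobs:
  - deployment: DeployCanary
    displayName: 'Canary deploy'
    environment: 'production'
    strategy:
      canary:
        increments: [10, 50]                 # roll out to 10%, then 50%, then full
        deploy:
          steps:
            - script: echo "Deploying canary increment"
              displayName: 'Deploy increment'
        on:
          failure:
            steps:
              - script: echo "Rolling back canary"
                displayName: 'Roll back'
```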
## Variable and Parameter Management
- Use variable groups for shared configuration across pipelines
- Implement runtime parameters for flexible pipeline execution
- Use conditional variables based on branches or environments
- Secure sensitive variables and mark them as secrets
- Document variable purposes and expected values
- Use variable templates for complex variable logic
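A minimal sketch of a runtime parameter combined with a variable group; names and values are illustrative:
```yaml
parameters:
  - name: deployEnvironment
    displayName: 'Target environment'
    type: string
    default: 'staging'
    values:
      - staging
      - production

variables:
  - group: shared-variables                  # shared configuration across pipelines
  - name: isProduction
    value: ${{ eq(parameters.deployEnvironment, 'production') }}
```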
## Performance Optimization
- Use parallel jobs and matrix strategies when appropriate
- Implement proper caching strategies for dependencies and build outputs
- Use shallow clone for Git operations when full history isn't needed
- Optimize Docker image builds with multi-stage builds and layer caching
- Monitor pipeline performance and optimize bottlenecks
- Use pipeline resource triggers efficiently
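A minimal sketch of two of the optimizations above, shallow clone and a parallel matrix; the matrix entries are illustrative:
```yaml
jobs:
  - job: Build
    strategy:
      matrix:
        x64_release:
          buildPlatform: 'x64'
        arm64_release:
          buildPlatform: 'ARM64'
    steps:
      - checkout: self
        fetchDepth: 1                        # shallow clone when full history is not needed
      - script: echo "Building $(buildPlatform)"
        displayName: 'Build platform'
```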
## Monitoring and Observability
- Include comprehensive logging throughout the pipeline
- Use Azure Monitor and Application Insights for deployment tracking
- Implement proper notification strategies for failures and successes
- Include deployment health checks and automated rollback triggers
- Use pipeline analytics to identify improvement opportunities
- Document pipeline behavior and troubleshooting steps
## Template and Reusability
- Create pipeline templates for common patterns
- Use extends templates for complete pipeline inheritance
- Implement step templates for reusable task sequences
- Use variable templates for complex variable logic
- Version templates appropriately for stability
- Document template parameters and usage examples
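A minimal sketch of a reusable step template and how a pipeline consumes it; the file name and parameter are illustrative:
```yaml
# templates/build-steps.yml (illustrative step template)
parameters:
  - name: buildConfiguration
    type: string
    default: 'Release'

steps:
  - task: DotNetCoreCLI@2
    displayName: 'Build (${{ parameters.buildConfiguration }})'
    inputs:
      command: 'build'
      projects: '**/*.csproj'
      arguments: '--configuration ${{ parameters.buildConfiguration }}'
```
Consumed from the main pipeline:
```yaml
steps:
  - template: templates/build-steps.yml
    parameters:
      buildConfiguration: 'Release'
```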
## Branch and Trigger Strategy
- Implement appropriate triggers for different branch types
- Use path filters to trigger builds only when relevant files change
- Configure proper CI/CD triggers for main/master branches
- Use pull request triggers for code validation
- Implement scheduled triggers for maintenance tasks
- Consider resource triggers for multi-repository scenarios
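A minimal sketch of PR validation plus a scheduled maintenance trigger; branch names, paths, and the cron expression are illustrative:
```yaml
pr:
  branches:
    include:
      - main
  paths:
    exclude:
      - docs/*

schedules:
  - cron: '0 3 * * 0'                        # weekly maintenance run, Sundays 03:00 UTC
    displayName: 'Weekly maintenance build'
    branches:
      include:
        - main
    always: true                             # run even if there are no code changes
```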
## Example Structure
```yaml
# azure-pipelines.yml
trigger:
  branches:
    include:
      - main
      - develop
  paths:
    exclude:
      - docs/*
      - README.md

variables:
  - group: shared-variables
  - name: buildConfiguration
    value: 'Release'

stages:
  - stage: Build
    displayName: 'Build and Test'
    jobs:
      - job: Build
        displayName: 'Build Application'
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: UseDotNet@2
            displayName: 'Use .NET SDK'
            inputs:
              version: '8.x'
          - task: DotNetCoreCLI@2
            displayName: 'Restore dependencies'
            inputs:
              command: 'restore'
              projects: '**/*.csproj'
          - task: DotNetCoreCLI@2
            displayName: 'Build application'
            inputs:
              command: 'build'
              projects: '**/*.csproj'
              arguments: '--configuration $(buildConfiguration) --no-restore'

  - stage: Deploy
    displayName: 'Deploy to Staging'
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: DeployToStaging
        displayName: 'Deploy to Staging Environment'
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  displayName: 'Download drop artifact'
                  artifact: drop
                - task: AzureWebApp@1
                  displayName: 'Deploy to Azure Web App'
                  inputs:
                    azureSubscription: 'staging-service-connection'
                    appType: 'webApp'
                    appName: 'myapp-staging'
                    package: '$(Pipeline.Workspace)/drop/**/*.zip'
```
## Common Anti-Patterns to Avoid
- Hardcoding sensitive values directly in YAML files
- Using overly broad triggers that cause unnecessary builds
- Mixing build and deployment logic in a single stage
- Not implementing proper error handling and cleanup
- Using deprecated task versions without upgrade plans
- Creating monolithic pipelines that are difficult to maintain
- Not using proper naming conventions for clarity
- Ignoring pipeline security best practices

View File

@@ -1,61 +0,0 @@
---
description: 'Guidelines for shared libraries including logging, IPC, settings, DPI, telemetry, and utilities consumed by multiple modules'
applyTo: 'src/common/**'
---
# Common Libraries Shared Code Guidance
Guidelines for modifying shared code in `src/common/`. Changes here can have wide-reaching impact across the entire PowerToys codebase.
## Scope
- Logging infrastructure (`src/common/logger/`)
- IPC primitives and named pipe utilities
- Settings serialization and management
- DPI awareness and scaling utilities
- Telemetry helpers
- General utilities (JSON parsing, string helpers, etc.)
## Guidelines
### API Stability
- Avoid breaking public headers/APIs; if changed, search & update all callers
- Coordinate ABI-impacting struct/class layout changes; keep binary compatibility
- When modifying public interfaces, grep the entire codebase for usages
### Performance
- Watch perf in hot paths (hooks, timers, serialization)
- Avoid avoidable allocations in frequently called code
- Profile changes that touch performance-sensitive areas
### Dependencies
- Ask before adding third-party deps or changing serialization formats
- New dependencies must be MIT-licensed or approved by PM team
- Add any new external packages to `NOTICE.md`
### Logging
- C++ logging uses spdlog (`Logger::info`, `Logger::warn`, `Logger::error`, `Logger::debug`)
- Initialize with `init_logger()` early in startup
- Keep hot paths quiet: no logging in tight loops or hooks
## Acceptance Criteria
- No unintended ABI breaks
- No noisy logs in hot paths
- New non-obvious symbols briefly commented
- All callers updated when interfaces change
## Code Style
- **C++**: Follow `.clang-format` in `src/`; use Modern C++ patterns per C++ Core Guidelines
- **C#**: Follow `src/.editorconfig`; enforce StyleCop.Analyzers
## Validation
- Build: `tools\build\build.cmd` from `src/common/` folder
- Verify no ABI breaks: grep for changed function/struct names across codebase
- Check logs: ensure no new logging in performance-critical paths

View File

@@ -1,256 +0,0 @@
---
description: 'Guidelines for creating high-quality custom instruction files for GitHub Copilot'
applyTo: '**/*.instructions.md'
---
# Custom Instructions File Guidelines
Instructions for creating effective and maintainable custom instruction files that guide GitHub Copilot in generating domain-specific code and following project conventions.
## Project Context
- Target audience: Developers and GitHub Copilot working with domain-specific code
- File format: Markdown with YAML frontmatter
- File naming convention: lowercase with hyphens (e.g., `react-best-practices.instructions.md`)
- Location: `.github/instructions/` directory
- Purpose: Provide context-aware guidance for code generation, review, and documentation
## Required Frontmatter
Every instruction file must include YAML frontmatter with the following fields:
```yaml
---
description: 'Brief description of the instruction purpose and scope'
applyTo: 'glob pattern for target files (e.g., **/*.ts, **/*.py)'
---
```
### Frontmatter Guidelines
- **description**: Single-quoted string, 1-500 characters, clearly stating the purpose
- **applyTo**: Glob pattern(s) specifying which files these instructions apply to
- Single pattern: `'**/*.ts'`
- Multiple patterns: `'**/*.ts, **/*.tsx, **/*.js'`
- Specific files: `'src/**/*.py'`
- All files: `'**'`
## File Structure
A well-structured instruction file should include the following sections:
### 1. Title and Overview
- Clear, descriptive title using `#` heading
- Brief introduction explaining the purpose and scope
- Optional: Project context section with key technologies and versions
### 2. Core Sections
Organize content into logical sections based on the domain:
- **General Instructions**: High-level guidelines and principles
- **Best Practices**: Recommended patterns and approaches
- **Code Standards**: Naming conventions, formatting, style rules
- **Architecture/Structure**: Project organization and design patterns
- **Common Patterns**: Frequently used implementations
- **Security**: Security considerations (if applicable)
- **Performance**: Optimization guidelines (if applicable)
- **Testing**: Testing standards and approaches (if applicable)
### 3. Examples and Code Snippets
Provide concrete examples with clear labels:
```markdown
### Good Example
\`\`\`language
// Recommended approach
code example here
\`\`\`
### Bad Example
\`\`\`language
// Avoid this pattern
code example here
\`\`\`
```
### 4. Validation and Verification (Optional but Recommended)
- Build commands to verify code
- Lint checks and formatting tools
- Testing requirements
- Verification steps
## Content Guidelines
### Writing Style
- Use clear, concise language
- Write in imperative mood ("Use", "Implement", "Avoid")
- Be specific and actionable
- Avoid ambiguous terms like "should", "might", "possibly"
- Use bullet points and lists for readability
- Keep sections focused and scannable
### Best Practices
- **Be Specific**: Provide concrete examples rather than abstract concepts
- **Show Why**: Explain the reasoning behind recommendations when it adds value
- **Use Tables**: For comparing options, listing rules, or showing patterns
- **Include Examples**: Real code snippets are more effective than descriptions
- **Stay Current**: Reference current versions and best practices
- **Link Resources**: Include official documentation and authoritative sources
### Common Patterns to Include
1. **Naming Conventions**: How to name variables, functions, classes, files
2. **Code Organization**: File structure, module organization, import order
3. **Error Handling**: Preferred error handling patterns
4. **Dependencies**: How to manage and document dependencies
5. **Comments and Documentation**: When and how to document code
6. **Version Information**: Target language/framework versions
## Patterns to Follow
### Bullet Points and Lists
```markdown
## Security Best Practices
- Always validate user input before processing
- Use parameterized queries to prevent SQL injection
- Store secrets in environment variables, never in code
- Implement proper authentication and authorization
- Enable HTTPS for all production endpoints
```
### Tables for Structured Information
```markdown
## Common Issues
| Issue | Solution | Example |
| ---------------- | ------------------- | ----------------------------- |
| Magic numbers | Use named constants | `const MAX_RETRIES = 3` |
| Deep nesting | Extract functions | Refactor nested if statements |
| Hardcoded values | Use configuration | Store API URLs in config |
```
### Code Comparison
```markdown
### Good Example - Using TypeScript interfaces
\`\`\`typescript
interface User {
  id: string;
  name: string;
  email: string;
}
function getUser(id: string): User {
  // Implementation
}
\`\`\`
### Bad Example - Using any type
\`\`\`typescript
function getUser(id: any): any {
  // Loses type safety
}
\`\`\`
```
### Conditional Guidance
```markdown
## Framework Selection
- **For small projects**: Use Minimal API approach
- **For large projects**: Use controller-based architecture with clear separation
- **For microservices**: Consider domain-driven design patterns
```
## Patterns to Avoid
- **Overly verbose explanations**: Keep it concise and scannable
- **Outdated information**: Always reference current versions and practices
- **Ambiguous guidelines**: Be specific about what to do or avoid
- **Missing examples**: Abstract rules without concrete code examples
- **Contradictory advice**: Ensure consistency throughout the file
- **Copy-paste from documentation**: Add value by distilling and providing context
## Testing Your Instructions
Before finalizing instruction files:
1. **Test with Copilot**: Try the instructions with actual prompts in VS Code
2. **Verify Examples**: Ensure code examples are correct and run without errors
3. **Check Glob Patterns**: Confirm `applyTo` patterns match intended files
## Example Structure
Here's a minimal example structure for a new instruction file:
```markdown
---
description: 'Brief description of purpose'
applyTo: '**/*.ext'
---
# Technology Name Development
Brief introduction and context.
## General Instructions
- High-level guideline 1
- High-level guideline 2
## Best Practices
- Specific practice 1
- Specific practice 2
## Code Standards
### Naming Conventions
- Rule 1
- Rule 2
### File Organization
- Structure 1
- Structure 2
## Common Patterns
### Pattern 1
Description and example
\`\`\`language
code example
\`\`\`
### Pattern 2
Description and example
## Validation
- Build command: `command to verify`
- Lint checks: `command to lint`
- Testing: `command to test`
```
## Maintenance
- Review instructions when dependencies or frameworks are updated
- Update examples to reflect current best practices
- Remove outdated patterns or deprecated features
- Add new patterns as they emerge in the community
- Keep glob patterns accurate as project structure evolves
## Additional Resources
- [Custom Instructions Documentation](https://code.visualstudio.com/docs/copilot/customization/custom-instructions)
- [Awesome Copilot Instructions](https://github.com/github/awesome-copilot/tree/main/instructions)

View File

@@ -1,88 +0,0 @@
---
description: 'Guidelines for creating high-quality prompt files for GitHub Copilot'
applyTo: '**/*.prompt.md'
---
# Copilot Prompt Files Guidelines
Instructions for creating effective and maintainable prompt files that guide GitHub Copilot in delivering consistent, high-quality outcomes across any repository.
## Scope and Principles
- Target audience: maintainers and contributors authoring reusable prompts for Copilot Chat.
- Goals: predictable behaviour, clear expectations, minimal permissions, and portability across repositories.
- Primary references: VS Code documentation on prompt files and organization-specific conventions.
## Frontmatter Requirements
Every prompt file should include YAML frontmatter with the following fields:
### Required/Recommended Fields
| Field | Required | Description |
|-------|----------|-------------|
| `description` | Recommended | A short description of the prompt (single sentence, actionable outcome) |
| `name` | Optional | The name shown after typing `/` in chat. Defaults to filename if not specified |
| `agent` | Recommended | The agent to use: `ask`, `edit`, `agent`, or a custom agent name. Defaults to current agent |
| `model` | Optional | The language model to use. Defaults to currently selected model |
| `tools` | Optional | List of tool/tool set names available for this prompt |
| `argument-hint` | Optional | Hint text shown in chat input to guide user interaction |
### Guidelines
- Use consistent quoting (single quotes recommended) and keep one field per line for readability and version control clarity
- If `tools` are specified and current agent is `ask` or `edit`, the default agent becomes `agent`
- Preserve any additional metadata (`language`, `tags`, `visibility`, etc.) required by your organization
## File Naming and Placement
- Use kebab-case filenames ending with `.prompt.md` and store them under `.github/prompts/` unless your workspace standard specifies another directory.
- Provide a short filename that communicates the action (for example, `generate-readme.prompt.md` rather than `prompt1.prompt.md`).
## Body Structure
- Start with an `#` level heading that matches the prompt intent so it surfaces well in Quick Pick search.
- Organize content with predictable sections. Recommended baseline: `Mission` or `Primary Directive`, `Scope & Preconditions`, `Inputs`, `Workflow` (step-by-step), `Output Expectations`, and `Quality Assurance`.
- Adjust section names to fit the domain, but retain the logical flow: why → context → inputs → actions → outputs → validation.
- Reference related prompts or instruction files using relative links to aid discoverability.
## Input and Context Handling
- Use `${input:variableName[:placeholder]}` for required values and explain when the user must supply them. Provide defaults or alternatives where possible.
- Call out contextual variables such as `${selection}`, `${file}`, `${workspaceFolder}` only when they are essential, and describe how Copilot should interpret them.
- Document how to proceed when mandatory context is missing (for example, “Request the file path and stop if it remains undefined”).
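A small illustrative excerpt (the variable names are examples only) showing how input and context variables can appear in a prompt body:
```markdown
---
description: 'Summarize the selected code'
agent: 'ask'
---
# Summarize Selection
Summarize `${selection}` from `${file}` in the tone given by `${input:tone:concise}`.
If nothing is selected, ask the user to select a code snippet and stop.
```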
## Tool and Permission Guidance
- Limit `tools` to the smallest set that enables the task. List them in the preferred execution order when the sequence matters.
- If the prompt inherits tools from a chat mode, mention that relationship and state any critical tool behaviours or side effects.
- Warn about destructive operations (file creation, edits, terminal commands) and include guard rails or confirmation steps in the workflow.
## Instruction Tone and Style
- Write in direct, imperative sentences targeted at Copilot (for example, “Analyze”, “Generate”, “Summarize”).
- Keep sentences short and unambiguous, following Google Developer Documentation translation best practices to support localization.
- Avoid idioms, humor, or culturally specific references; favor neutral, inclusive language.
## Output Definition
- Specify the format, structure, and location of expected results (for example, “Create an architecture decision record file using the template below, such as `docs/architecture-decisions/record-XXXX.md`”).
- Include success criteria and failure triggers so Copilot knows when to halt or retry.
- Provide validation steps—manual checks, automated commands, or acceptance criteria lists—that reviewers can execute after running the prompt.
## Examples and Reusable Assets
- Embed Good/Bad examples or scaffolds (Markdown templates, JSON stubs) that the prompt should produce or follow.
- Maintain reference tables (capabilities, status codes, role descriptions) inline to keep the prompt self-contained. Update these tables when upstream resources change.
- Link to authoritative documentation instead of duplicating lengthy guidance.
## Quality Assurance Checklist
- [ ] Frontmatter fields are complete, accurate, and least-privilege.
- [ ] Inputs include placeholders, default behaviours, and fallbacks.
- [ ] Workflow covers preparation, execution, and post-processing without gaps.
- [ ] Output expectations include formatting and storage details.
- [ ] Validation steps are actionable (commands, diff checks, review prompts).
- [ ] Security, compliance, and privacy policies referenced by the prompt are current.
- [ ] Prompt executes successfully in VS Code (`Chat: Run Prompt`) using representative scenarios.
## Maintenance Guidance
- Version-control prompts alongside the code they affect; update them when dependencies, tooling, or review processes change.
- Review prompts periodically to ensure tool lists, model requirements, and linked documents remain valid.
- Coordinate with other repositories: when a prompt proves broadly useful, extract common guidance into instruction files or shared prompt packs.
## Additional Resources
- [Prompt Files Documentation](https://code.visualstudio.com/docs/copilot/customization/prompt-files#_prompt-file-format)
- [Awesome Copilot Prompt Files](https://github.com/github/awesome-copilot/tree/main/prompts)
- [Tool Configuration](https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode#_agent-mode-tools)

View File

@@ -1,68 +0,0 @@
---
description: 'Guidelines for Runner and Settings UI components that communicate via named pipes and manage module lifecycle'
applyTo: 'src/runner/**,src/settings-ui/**'
---
# Runner & Settings UI Core Components Guidance
Guidelines for modifying the Runner (tray/module loader) and Settings UI (configuration app). These components communicate via Windows Named Pipes using JSON messages.
## Runner (`src/runner/`)
### Scope
- Module bootstrap, hotkey management, settings bridge, update/elevation handling
### Guidelines
- If IPC/JSON contracts change, mirror updates in `src/settings-ui/**`
- Keep module discovery in `src/runner/main.cpp` in sync when adding/removing modules
- Keep startup lean: avoid blocking/network calls in early init path
- Preserve GPO & elevation behaviors; confirm no regression in policy handling
- Ask before modifying update workflow or elevation logic
### Acceptance Criteria
- Stable startup, consistent contracts, no unnecessary logging noise
## Settings UI (`src/settings-ui/`)
### Scope
- WinUI/WPF UI, communicates with Runner over named pipes; manages persisted settings schema
### Guidelines
- Don't break settings schema silently; add migration when shape changes
- If IPC/JSON contracts change, align with `src/runner/**` implementation
- Keep UI responsive: marshal to UI thread for UI-bound operations
- Reuse existing styles/resources; avoid duplicate theme keys
- Add/adjust migration or serialization tests when changing persisted settings
### Acceptance Criteria
- Schema integrity preserved, responsive UI, consistent contracts, no style duplication
## Shared Concerns
### IPC Contract Changes
When modifying the JSON message format between Runner and Settings UI:
1. Update both `src/runner/` and `src/settings-ui/` in the same PR
2. Preserve backward compatibility where possible
3. Add migration logic for settings schema changes
4. Test both directions of communication
### Code Style
- **C++ (Runner)**: Follow `.clang-format` in `src/`
- **C# (Settings UI)**: Follow `src/.editorconfig`, use StyleCop.Analyzers
- **XAML**: Use XamlStyler or run `.\.pipelines\applyXamlStyling.ps1 -Main`
## Validation
- Build Runner: `tools\build\build.cmd` from `src/runner/`
- Build Settings UI: `tools\build\build.cmd` from `src/settings-ui/`
- Test IPC: Launch both Runner and Settings UI, verify communication works
- Schema changes: Run serialization tests if settings shape changed

View File

@@ -1,228 +0,0 @@
---
description: 'Instructions for building Model Context Protocol (MCP) servers using the TypeScript SDK'
applyTo: '**/*.ts, **/*.js, **/package.json'
---
# TypeScript MCP Server Development
## Instructions
- Use the **@modelcontextprotocol/sdk** npm package: `npm install @modelcontextprotocol/sdk`
- Import from specific paths: `@modelcontextprotocol/sdk/server/mcp.js`, `@modelcontextprotocol/sdk/server/stdio.js`, etc.
- Use `McpServer` class for high-level server implementation with automatic protocol handling
- Use `Server` class for low-level control with manual request handlers
- Use **zod** for input/output schema validation: `npm install zod@3`
- Always provide `title` field for tools, resources, and prompts for better UI display
- Use `registerTool()`, `registerResource()`, and `registerPrompt()` methods (recommended over older APIs)
- Define schemas using zod: `{ inputSchema: { param: z.string() }, outputSchema: { result: z.string() } }`
- Return both `content` (for display) and `structuredContent` (for structured data) from tools
- For HTTP servers, use `StreamableHTTPServerTransport` with Express or similar frameworks
- For local integrations, use `StdioServerTransport` for stdio-based communication
- Create new transport instances per request to prevent request ID collisions (stateless mode)
- Use session management with `sessionIdGenerator` for stateful servers
- Enable DNS rebinding protection for local servers: `enableDnsRebindingProtection: true`
- Configure CORS headers and expose `Mcp-Session-Id` for browser-based clients
- Use `ResourceTemplate` for dynamic resources with URI parameters: `new ResourceTemplate('resource://{param}', { list: undefined })`
- Support completions for better UX using `completable()` wrapper from `@modelcontextprotocol/sdk/server/completable.js`
- Implement sampling with `server.server.createMessage()` to request LLM completions from clients
- Use `server.server.elicitInput()` to request additional user input during tool execution
- Enable notification debouncing for bulk updates: `debouncedNotificationMethods: ['notifications/tools/list_changed']`
- Dynamic updates: call `.enable()`, `.disable()`, `.update()`, or `.remove()` on registered items to emit `listChanged` notifications
- Use `getDisplayName()` from `@modelcontextprotocol/sdk/shared/metadataUtils.js` for UI display names
- Test servers with MCP Inspector: `npx @modelcontextprotocol/inspector`
## Best Practices
- Keep tool implementations focused on single responsibilities
- Provide clear, descriptive titles and descriptions for LLM understanding
- Use proper TypeScript types for all parameters and return values
- Implement comprehensive error handling with try-catch blocks
- Return `isError: true` in tool results for error conditions
- Use async/await for all asynchronous operations
- Close database connections and clean up resources properly
- Validate input parameters before processing
- Use structured logging for debugging without polluting stdout/stderr
- Consider security implications when exposing file system or network access
- Implement proper resource cleanup on transport close events
- Use environment variables for configuration (ports, API keys, etc.)
- Document tool capabilities and limitations clearly
- Test with multiple clients to ensure compatibility
## Common Patterns
### Basic Server Setup (HTTP)
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import express from 'express';
const server = new McpServer({
  name: 'my-server',
  version: '1.0.0'
});
const app = express();
app.use(express.json());
app.post('/mcp', async (req, res) => {
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
    enableJsonResponse: true
  });
  res.on('close', () => transport.close());
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});
app.listen(3000);
```
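### Stateful Server Setup (HTTP with session management)
The example above is stateless; the session-managed variant mentioned in the instructions could look roughly like this. It is a sketch based on the SDK options named earlier (`sessionIdGenerator`, `enableDnsRebindingProtection`); verify option names against the SDK version you use.
```typescript
import { randomUUID } from 'node:crypto';
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import { isInitializeRequest } from '@modelcontextprotocol/sdk/types.js';
import express from 'express';

const server = new McpServer({ name: 'my-server', version: '1.0.0' });
const transports: Record<string, StreamableHTTPServerTransport> = {};

const app = express();
app.use(express.json());

app.post('/mcp', async (req, res) => {
  const sessionId = req.headers['mcp-session-id'] as string | undefined;
  let transport: StreamableHTTPServerTransport;

  if (sessionId && transports[sessionId]) {
    // Reuse the transport already bound to this session.
    transport = transports[sessionId];
  } else if (!sessionId && isInitializeRequest(req.body)) {
    // First request of a session: create a transport with a generated session id.
    transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => randomUUID(),
      enableDnsRebindingProtection: true,
      allowedHosts: ['127.0.0.1'],
      onsessioninitialized: (id) => { transports[id] = transport; }
    });
    transport.onclose = () => {
      if (transport.sessionId) delete transports[transport.sessionId];
    };
    await server.connect(transport);
  } else {
    res.status(400).json({ error: 'Missing or unknown session id' });
    return;
  }

  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```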
### Basic Server Setup (stdio)
```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
const server = new McpServer({
  name: 'my-server',
  version: '1.0.0'
});
// ... register tools, resources, prompts ...
const transport = new StdioServerTransport();
await server.connect(transport);
```
### Simple Tool
```typescript
import { z } from 'zod';
server.registerTool(
  'calculate',
  {
    title: 'Calculator',
    description: 'Perform basic calculations',
    inputSchema: { a: z.number(), b: z.number(), op: z.enum(['+', '-', '*', '/']) },
    outputSchema: { result: z.number() }
  },
  async ({ a, b, op }) => {
    const result = op === '+' ? a + b : op === '-' ? a - b :
      op === '*' ? a * b : a / b;
    const output = { result };
    return {
      content: [{ type: 'text', text: JSON.stringify(output) }],
      structuredContent: output
    };
  }
);
```
### Dynamic Resource
```typescript
import { ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js';
server.registerResource(
  'user',
  new ResourceTemplate('users://{userId}', { list: undefined }),
  {
    title: 'User Profile',
    description: 'Fetch user profile data'
  },
  async (uri, { userId }) => ({
    contents: [{
      uri: uri.href,
      text: `User ${userId} data here`
    }]
  })
);
```
### Tool with Sampling
```typescript
server.registerTool(
  'summarize',
  {
    title: 'Text Summarizer',
    description: 'Summarize text using LLM',
    inputSchema: { text: z.string() },
    outputSchema: { summary: z.string() }
  },
  async ({ text }) => {
    const response = await server.server.createMessage({
      messages: [{
        role: 'user',
        content: { type: 'text', text: `Summarize: ${text}` }
      }],
      maxTokens: 500
    });
    const summary = response.content.type === 'text' ?
      response.content.text : 'Unable to summarize';
    const output = { summary };
    return {
      content: [{ type: 'text', text: JSON.stringify(output) }],
      structuredContent: output
    };
  }
);
```
### Prompt with Completion
```typescript
import { completable } from '@modelcontextprotocol/sdk/server/completable.js';
server.registerPrompt(
  'review',
  {
    title: 'Code Review',
    description: 'Review code with specific focus',
    argsSchema: {
      language: completable(z.string(), value =>
        ['typescript', 'python', 'javascript', 'java']
          .filter(l => l.startsWith(value))
      ),
      code: z.string()
    }
  },
  ({ language, code }) => ({
    messages: [{
      role: 'user',
      content: {
        type: 'text',
        text: `Review this ${language} code:\n\n${code}`
      }
    }]
  })
);
```
### Error Handling
```typescript
server.registerTool(
  'risky-operation',
  {
    title: 'Risky Operation',
    description: 'An operation that might fail',
    inputSchema: { input: z.string() },
    outputSchema: { result: z.string() }
  },
  async ({ input }) => {
    try {
      const result = await performRiskyOperation(input);
      const output = { result };
      return {
        content: [{ type: 'text', text: JSON.stringify(output) }],
        structuredContent: output
      };
    } catch (err: unknown) {
      const error = err as Error;
      return {
        content: [{ type: 'text', text: `Error: ${error.message}` }],
        isError: true
      };
    }
  }
);
```

View File

@@ -1,50 +1,16 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Generate an 80-character git commit title for the local diff'
mode: 'agent'
model: Claude Sonnet 4.5
description: 'Generate an 80-character git commit title for the local diff.'
---
# Generate Commit Title
**Goal:** Provide a ready-to-paste git commit title (<= 80 characters) that captures the most important local changes since `HEAD`.
## Purpose
Provide a single-line, ready-to-paste git commit title (<= 80 characters) that reflects the most important local changes since `HEAD`.
## Input to collect
- Run exactly one command to view the local diff:
```@terminal
git diff HEAD
```
## How to decide the title
1. From the diff, find the dominant area (e.g., `src/modules/*`, `doc/devdocs/**`) and the change type (bug fix, docs update, config tweak).
2. Draft an imperative, plain-ASCII title that:
- Mentions the primary component when obvious (e.g., `FancyZones:` or `Docs:`)
- Stays within 80 characters and has no trailing punctuation
## Final output
- Reply with only the commit title on a single line—no extra text.
## PR title convention (when asked)
Use Conventional Commits style:
`<type>(<scope>): <summary>`
**Allowed types**
- feat, fix, docs, refactor, perf, test, build, ci, chore
**Scope rules**
- Use a short, PowerToys-focused scope (one word preferred). Common scopes:
- Core: `runner`, `settings-ui`, `common`, `docs`, `build`, `ci`, `installer`, `gpo`, `dsc`
- Modules: `fancyzones`, `powerrename`, `awake`, `colorpicker`, `imageresizer`, `keyboardmanager`, `mouseutils`, `peek`, `hosts`, `file-locksmith`, `screen-ruler`, `text-extractor`, `cropandlock`, `paste`, `powerlauncher`
- If unclear, pick the closest module or subsystem; omit only if unavoidable
**Summary rules**
- Imperative, present tense (“add”, “update”, “remove”, “fix”)
- Keep it <= 72 characters when possible; be specific, avoid “misc changes”
**Examples**
- `feat(fancyzones): add canvas template duplication`
- `fix(mouseutils): guard crosshair toggle when dpi info missing`
- `docs(runner): document tray icon states`
- `build(installer): align wix v5 suffix flag`
- `ci(ci): cache pipeline artifacts for x64`
**Workflow:**
1. Run a single command to view the local diff since the last commit:
```@terminal
git diff HEAD
```
2. From that diff, identify the dominant area (reference key paths like `src/modules/*`, `doc/devdocs/**`, etc.), the type of change (bug fix, docs update, config tweak), and any notable impact.
3. Draft a concise, imperative commit title summarizing the dominant change. Keep it plain ASCII, <= 80 characters, and avoid trailing punctuation. Mention the primary component when obvious (for example `FancyZones:` or `Docs:`).
4. Respond with only the final commit title on a single line so it can be pasted directly into `git commit`.

View File

@@ -1,11 +1,9 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Generate a PowerToys-ready pull request description from the local diff'
mode: 'agent'
model: Claude Sonnet 4.5
description: 'Generate a PowerToys-ready pull request description from the local diff.'
---
# Generate PR Summary
**Goal:** Produce a ready-to-paste PR title and description that follows PowerToys conventions by comparing the current branch against a user-selected target branch.
**Repo guardrails:**
@@ -22,4 +20,3 @@ description: 'Generate a PowerToys-ready pull request description from the local
5. Confirm validation: list tests executed with results or state why tests were skipped in line with repo guidance.
6. Load `.github/pull_request_template.md`, mirror its section order, and populate it with the gathered facts. Include only relevant checklist entries, marking them `[x]/[ ]` and noting any intentional omissions as "N/A".
7. Present the filled template inside a fenced ```markdown code block with no extra commentary so it is ready to paste into a PR, clearly flagging any placeholders that still need user input.
8. Prepend the PR title above the filled template, applying the Conventional Commit type/scope rules from `.github/prompts/create-commit-title.prompt.md`; pick the dominant component from the diff and keep the title concise and imperative.

View File

@@ -1,12 +1,10 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Execute the fix for a GitHub issue using the previously generated implementation plan'
mode: 'agent'
model: GPT-5-Codex (Preview)
description: " Execute the fix for a GitHub issue using the previously generated implementation plan. Apply code & tests directly in the repo. Output only a PR description (and optional manual steps)."
---
# Fix GitHub Issue
## Dependencies
# DEPENDENCY
Source review prompt (for generating the implementation plan if missing):
- .github/prompts/review-issue.prompt.md

View File

@@ -1,17 +1,15 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Resolve Code scanning / check-spelling comments on the active PR'
mode: 'agent'
model: GPT-5-Codex (Preview)
description: 'Resolve Code scanning / check-spelling comments on the active PR.'
---
# Fix Spelling Comments
**Goal:** Clear every outstanding GitHub pull request comment created by the `Code scanning / check-spelling` workflow by explicitly allowing intentional terms.
**Guardrails:**
- Update only discussion threads authored by `github-actions` or `github-actions[bot]` that mention `Code scanning results / check-spelling`.
- Prefer improving the wording in the originally flagged file when it clarifies intent without changing meaning; if the wording is already clear/standard for the context, handle it via `.github/actions/spell-check/expect.txt` and reuse existing entries.
- Limit edits to the flagged text and `.github/actions/spell-check/expect.txt`; leave all other files and topics untouched.
- Resolve findings solely by editing `.github/actions/spell-check/expect.txt`; reuse existing entries.
- Leave all other files and topics untouched.
**Prerequisites:**
- Install GitHub CLI if it is not present: `winget install GitHub.cli`.
@@ -20,6 +18,5 @@ description: 'Resolve Code scanning / check-spelling comments on the active PR'
**Workflow:**
1. Determine the active pull request with a single `gh pr view --json number` call (default to the current branch).
2. Fetch all PR discussion data once via `gh pr view --json comments,reviews` and filter to check-spelling comments authored by `github-actions` or `github-actions[bot]` that are not minimized; when several remain, process only the most recent comment body.
3. For each flagged token, first consider tightening or rephrasing the original text to avoid the false positive while keeping the meaning intact; if the existing wording is already normal and professional for the context, proceed to allowlisting instead of changing it.
4. When allowlisting, review `.github/actions/spell-check/expect.txt` for an equivalent term (for example an existing lowercase variant); when found, reuse that normalized term rather than adding a new entry, even if the flagged token differs only by casing. Only add a new entry after confirming no equivalent already exists.
5. Add any remaining missing token to `.github/actions/spell-check/expect.txt`, keeping surrounding formatting intact.
3. For each flagged token, review `.github/actions/spell-check/expect.txt` for an equivalent term (for example an existing lowercase variant); when found, reuse that normalized term rather than adding a new entry, even if the flagged token differs only by casing. Only add a new entry after confirming no equivalent already exists.
4. Add any remaining missing token to `.github/actions/spell-check/expect.txt`, keeping surrounding formatting intact.

View File

@@ -1,28 +1,20 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Review a GitHub issue, score it (0-100), and generate an implementation plan'
mode: 'agent'
model: Claude Sonnet 4.5
description: "You are github issue review and planning expertise, Score (0100) and write one Implementation Plan. Outputs: overview.md, implementation-plan.md."
---
# Review GitHub Issue
## Goal
# GOAL
For **#{{issue_number}}** produce:
1) `Generated Files/issueReview/{{issue_number}}/overview.md`
2) `Generated Files/issueReview/{{issue_number}}/implementation-plan.md`
## Inputs
Figure out required inputs {{issue_number}} from the invocation context; if anything is missing, ask for the value or note it as a gap.
Figure out {{issue_number}} from the prompt context.
# CONTEXT (brief)
Ground evidence using `gh issue view {{issue_number}} --json number,title,body,author,createdAt,updatedAt,state,labels,milestone,reactions,comments,linkedPullRequests`, download images via MCP `github_issue_images` to better understand the issue context. Finally, use MCP `github_issue_attachments` to download logs with parameter `extractFolder` as `Generated Files/issueReview/{{issue_number}}/logs`, and analyze the downloaded logs if available to identify relevant issues. Locate the source code in the current workspace (use `rg`/`git grep` as needed). Link related issues and PRs.
## When to call MCP tools
If the following MCP "github-artifacts" tools are available in the environment, use them:
- `github_issue_images`: use when the issue/PR likely contains screenshots or other visual evidence (UI bugs, glitches, design problems).
- `github_issue_attachments`: use when the issue/PR mentions attached ZIPs (PowerToysReport_*.zip, logs.zip, debug.zip) or asks to analyze logs/diagnostics. Always provide `extractFolder` as `Generated Files/issueReview/{{issue_number}}/logs`
If these tools are not available (not listed by the runtime), start the MCP server "github-artifacts" first.
Ground evidence using `gh issue view {{issue_number}} --json number,title,body,author,createdAt,updatedAt,state,labels,milestone,reactions,comments,linkedPullRequests`, and download the images to better understand the context of the issue.
Locate the source code in the current workspace; feel free to use `rg`/`git grep`. Link related issues and PRs.
# OVERVIEW.MD
## Summary

View File

@@ -1,10 +1,10 @@
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Perform a comprehensive PR review with per-step Markdown and machine-readable outputs'
mode: 'agent'
model: Claude Sonnet 4.5
description: "gh-driven PR review; per-step Markdown + machine-readable outputs"
---
# Review Pull Request
# PR Review — gh + stepwise
**Goal**: Given `{{pr_number}}`, run a *one-topic-per-step* review. Write files to `Generated Files/prReview/{{pr_number}}/` (replace `{{pr_number}}` with the integer). Emit machine-readable blocks for a GitHub MCP to post review comments.

View File

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to the Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2026 Microsoft Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,132 +0,0 @@
---
name: release-note-generation
description: Toolkit for generating PowerToys release notes from GitHub milestone PRs or commit ranges. Use when asked to create release notes, summarize milestone PRs, generate changelog, prepare release documentation, request Copilot reviews for PRs, update README for a new release, manage PR milestones, or collect PRs between commits/tags. Supports PR collection by milestone or commit range, milestone assignment, grouping by label, summarization with external contributor attribution, and README version bumping.
license: Complete terms in LICENSE.txt
---
# Release Note Generation Skill
Generate professional release notes for PowerToys milestones by collecting merged PRs, requesting Copilot code reviews, grouping by label, and producing user-facing summaries.
## Output Directory
All generated artifacts are placed under `Generated Files/ReleaseNotes/` at the repository root (gitignored).
```
Generated Files/ReleaseNotes/
├── milestone_prs.json # Raw PR data from GitHub
├── sorted_prs.csv # Sorted PR list with Copilot summaries
├── prs_with_milestone.csv # Milestone assignment tracking
├── grouped_csv/ # PRs grouped by label (one CSV per label)
├── grouped_md/ # Generated markdown summaries per label
└── v{VERSION}-release-notes.md # Final consolidated release notes
```
## When to Use This Skill
- Generate release notes for a milestone
- Summarize PRs merged in a release
- Request Copilot reviews for milestone PRs
- Assign milestones to PRs missing them
- Collect PRs between two commits/tags
- Update README.md for a new version
## Prerequisites
- GitHub CLI (`gh`) installed and authenticated
- MCP Server: github-mcp-server installed
- GitHub Copilot code review enabled for the org/repo
## Required Variables
⚠️ **Before starting**, confirm `{{ReleaseVersion}}` with the user. If not provided, **ASK**: "What release version are we generating notes for? (e.g., 0.98)"
| Variable | Description | Example |
|----------|-------------|---------|
| `{{ReleaseVersion}}` | Target release version | `0.98` |
## Workflow Overview
```
┌─────────────────────────────────┐
│ 1.1 Collect PRs (stable range)  │
└─────────────────────────────────┘
┌─────────────────────────────────┐
│ 1.2 Assign Milestones           │
└─────────────────────────────────┘
┌─────────────────────────────────┐
│ 2.1-2.4 Label PRs (auto+human)  │
└─────────────────────────────────┘
┌─────────────────────────────────┐
│ 3.1 Request Reviews (Copilot)   │
└─────────────────────────────────┘
┌─────────────────────────────────┐
│ 3.2 Refresh PR data             │
│     (CopilotSummary)            │
└─────────────────────────────────┘
┌─────────────────────────────────┐
│ 3.3 Group by label              │
│     (grouped_csv)               │
└─────────────────────────────────┘
┌─────────────────────────────────┐
│ 4.1 Summarize (grouped_md)      │
└─────────────────────────────────┘
┌─────────────────────────────────┐
│ 4.2 Final notes (v{VERSION}.md) │
└─────────────────────────────────┘
```
| Step | Action | Details |
|------|--------|---------|
| 1.1 | Collect PRs | From previous release tag on `stable` branch → `sorted_prs.csv` |
| 1.2 | Assign Milestones | Ensure all PRs have correct milestone |
| 2.1-2.4 | Label PRs | Auto-suggest + human label low-confidence |
| 3.1-3.3 | Reviews & Grouping | Request Copilot reviews → refresh → group by label |
| 4.1-4.2 | Summaries & Final | Generate grouped summaries, then consolidate |
## Detailed workflow docs
Do not read all steps at once—only read the step you are executing.
- [Step 1: Collection & Milestones](./references/step1-collection.md)
- [Step 2: Labeling PRs](./references/step2-labeling.md)
- [Step 3: Reviews & Grouping](./references/step3-review-grouping.md)
- [Step 4: Summarization](./references/step4-summarization.md)
## Available Scripts
| Script | Purpose |
|--------|---------|
| [dump-prs-since-commit.ps1](./scripts/dump-prs-since-commit.ps1) | Fetch PRs between commits/tags |
| [group-prs-by-label.ps1](./scripts/group-prs-by-label.ps1) | Group PRs into CSVs |
| [collect-or-apply-milestones.ps1](./scripts/collect-or-apply-milestones.ps1) | Assign milestones |
| [diff_prs.ps1](./scripts/diff_prs.ps1) | Incremental PR diff |
## References
- [Sample Output](./references/SampleOutput.md) - Example summary formatting
- [Detailed Instructions](./references/Instruction.md) - Legacy full documentation
## Conventions
- **Terminal usage**: Disabled by default; only run scripts when user explicitly requests
- **Batch generation**: Generate ALL grouped_md files in one pass, then human reviews
- **PR order**: Preserve order from `sorted_prs.csv` in all outputs
- **Label filtering**: Keeps `Product-*`, `Area-*`, `GitHub*`, `*Plugin`, `Issue-*`
## Troubleshooting
| Issue | Solution |
|-------|----------|
| `gh` command not found | Install GitHub CLI and add to PATH |
| No PRs returned | Verify milestone title matches exactly |
| Empty CopilotSummary | Request Copilot reviews first, then re-run dump |
| Many unlabeled PRs | Return to labeling step before grouping |

View File

@@ -1,9 +0,0 @@
- Added mouse button actions so you can choose what left, right, or middle click does. Thanks [@PesBandi](https://github.com/PesBandi)!
- Aligned window styling with current Windows theme for a cleaner look. Thanks [@sadirano](https://github.com/sadirano)!
- Ensured screen readers are notified when the selected item in the list changes for better accessibility.
- Implemented configurable UI test pipeline that can use pre-built official releases instead of building everything from scratch, reducing test execution time from 2+ hours.
- Fixed Alt+Left Arrow navigation not working when search box contains text. Thanks [@jiripolasek](https://github.com/jiripolasek)!

View File

@@ -1,143 +0,0 @@
# Step 1: Collection and Milestones
## 1.0 To-do
- 1.0.1 Generate MemberList.md (REQUIRED)
- 1.1 Collect PRs
- 1.2 Assign Milestones (REQUIRED)
## Required Variables
⚠️ **Before starting**, confirm these values with the user:
| Variable | Description | Example |
|----------|-------------|---------|
| `{{ReleaseVersion}}` | Target release version | `0.97` |
| `{{PreviousReleaseTag}}` | Previous release tag from releases page | `v0.96.1` |
**If user hasn't specified `{{ReleaseVersion}}`, ASK:** "What release version are we generating notes for? (e.g., 0.97)"
**`{{PreviousReleaseTag}}` is derived from the releases page, not user input.** Use the latest published release tag (top of the page). You will use its tag name and tag commit SHA in Step 1.
---
## 1.0.1 Generate MemberList.md (REQUIRED)
Create `Generated Files/ReleaseNotes/MemberList.md` from the **PowerToys core team** section in [COMMUNITY.md](../../../COMMUNITY.md).
Rules:
- One GitHub username per line, **no** `@` prefix.
- Use the usernames exactly as listed in the core team section.
- Do not include former team members or other sections.
Example (format only):
```
example-user
another-user
```
---
## 1.1 Collect PRs
### 1.1.1 Get the previous release commit
1. Open the [PowerToys releases page](https://github.com/microsoft/PowerToys/releases/)
2. Find the latest release (e.g., v0.96.1, which should be at the top)
3. Set `{{PreviousReleaseTag}}` to that tag name (e.g., `v0.96.1`)
4. Copy the full tag commit SHA as `{{SHALastRelease}}`
**If the release SHA is not in your branch history:** Use the helper script to find an equivalent commit on the target branch by matching the commit title:
```powershell
pwsh ./.github/skills/release-note-generation/scripts/find-commit-by-title.ps1 `
-Commit '{{SHALastRelease}}' `
-Branch 'stable'
```
### 1.1.2 Run collection script against stable branch
```powershell
# Collect PRs from previous release to current HEAD of stable branch
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{SHALastRelease}}' `
-Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
**Parameters:**
- `-StartCommit` - Previous release tag or commit SHA (exclusive)
- `-Branch` - Always use `stable` branch, not `main` (script uses `origin/stable` as the end ref)
- `-EndCommit` - Optional override if you need a custom end ref
- `-OutputDir` - Output directory for generated files
**Reliability check:** If the script reports “No commits found”, the stable branch has not moved since the last release. In that case, either:
- Confirm this is expected and stop (no new release notes), or
- Re-run against `main` to gather pending changes for the next release cycle.
The script detects both merge commits (`Merge pull request #12345`) and squash commits (`Feature (#12345)`).
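A hedged sketch of that detection over `git log` subjects (the patterns are illustrative, not necessarily the exact ones the script uses):
```powershell
# Illustrative only: extract PR numbers from merge and squash commit subjects.
$subjects = git log --pretty=format:'%s' 'v0.96.1..origin/stable'
$prIds = foreach ($subject in $subjects) {
    if ($subject -match '^Merge pull request #(\d+)') { [int]$Matches[1] }   # merge commits
    elseif ($subject -match '\(#(\d+)\)\s*$') { [int]$Matches[1] }           # squash commits
}
$prIds | Sort-Object -Unique
```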
**Output** (in `Generated Files/ReleaseNotes/`):
- `milestone_prs.json` - raw PR data
- `sorted_prs.csv` - sorted PR list with columns: Id, Title, Labels, Author, Url, Body, CopilotSummary, NeedThanks
---
## 1.2 Assign Milestones (REQUIRED)
**Before generating release notes**, ensure all collected PRs have the correct milestone assigned.
⚠️ **CRITICAL:** Do NOT proceed to labeling until all PRs have milestones assigned.
### 1.2.1 Check current milestone status (dry run)
```powershell
# Dry run first to see what would be changed:
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
-DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
-ApplyMissing -WhatIf
```
This queries GitHub for each PR's current milestone and shows which PRs would be updated.
### 1.2.2 Apply milestones to PRs missing them
```powershell
# Apply for real:
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
-DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
-ApplyMissing
```
**Script Behavior:**
- Queries each PR's current milestone from GitHub
- PRs that already have a milestone are **skipped** (not overwritten)
- PRs missing a milestone get the default milestone applied
- Outputs `prs_with_milestone.csv` with (Id, Milestone) columns
- Produces summary: `Updated=X Skipped=Y Failed=Z`
**Validation:** After assignment, all PRs in `prs_with_milestone.csv` should have the target milestone.
---
## Additional Commands
### Collect milestones only (no changes to GitHub)
```powershell
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv'
```
### Local assignment only (fill blanks in CSV, no GitHub changes)
```powershell
pwsh ./.github/skills/release-note-generation/scripts/collect-or-apply-milestones.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/sorted_prs.csv' `
-OutputCsv 'Generated Files/ReleaseNotes/prs_with_milestone.csv' `
-DefaultMilestone 'PowerToys {{ReleaseVersion}}' `
-LocalAssign
```

View File

@@ -1,131 +0,0 @@
# Step 2: Label Unlabeled PRs
## 2.0 To-do
- 2.1 Identify unlabeled PRs (Agent Mode)
- 2.2 Suggest labels (Agent Mode)
- 2.3 Human label low-confidence PRs
- 2.4 Recheck labels, delete Unlabeled.csv, and re-collect
**Before grouping**, ensure all PRs have appropriate labels for categorization.
⚠️ **CRITICAL:** Do NOT proceed to grouping until all PRs have labels assigned. PRs without labels will end up in `Unlabeled.csv` and won't appear in the correct release note sections.
## 2.1 Identify unlabeled PRs (Agent Mode)
Read `sorted_prs.csv` and identify PRs with an empty or missing `Labels` column.
For each unlabeled PR, analyze:
- **Title** - Often contains module name or feature
- **Body** - PR description with context
- **CopilotSummary** - AI-generated summary of changes
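A minimal sketch for pulling that list (assuming the default CSV path from Step 1):
```powershell
# Sketch: list PRs whose Labels column is empty or missing
Import-Csv 'Generated Files/ReleaseNotes/sorted_prs.csv' |
    Where-Object { [string]::IsNullOrWhiteSpace($_.Labels) } |
    Select-Object Id, Title, Url
```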
## 2.2 Suggest labels (Agent Mode)
For each unlabeled PR, suggest an appropriate label based on the content analysis.
**Output:** Create `Generated Files/ReleaseNotes/prs_label_review.md` with the following format:
```markdown
# PR Label Review
Generated: YYYY-MM-DD HH:mm:ss
## Summary
- Total unlabeled PRs: X
- High confidence: X
- Medium confidence: X
- Low confidence: X
---
## PRs Needing Review (sorted by confidence, low first)
| PR | Title | Suggested Label | Confidence | Reason |
|----|-------|-----------------|------------|--------|
| [#12347](url) | Some generic fix | ??? | Low | Unclear from content |
| [#12346](url) | Update dependencies | `Area-Build` | Medium | Body mentions NuGet packages |
```
Sort by confidence (low first) so the human reviews the uncertain ones first.
After writing `prs_label_review.md`, **generate `prs_to_label.csv`, apply labels, and re-run collection** so the CSV/labels stay in sync:
```powershell
# Generate CSV from suggestions (agent)
# Apply labels
pwsh ./.github/skills/release-note-generation/scripts/apply-labels.ps1 `
-InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv'
# Refresh collection
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
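The `prs_to_label.csv` file itself is produced by the agent from the approved suggestions; a minimal sketch of the two-column shape `apply-labels.ps1` expects (the sample row is illustrative):
```powershell
# Sketch: write Id,Label rows for the PRs to be labeled
$suggestions = @(
    [PSCustomObject]@{ Id = 12346; Label = 'Area-Build' }   # illustrative entry from the review table
)
$suggestions | Export-Csv 'Generated Files/ReleaseNotes/prs_to_label.csv' -NoTypeInformation -Encoding UTF8
```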
## 2.3 Human label low-confidence PRs
Ask the human to label **low-confidence** PRs directly (in GitHub). Skip any they decide not to label.
## 2.4 Recheck labels, delete Unlabeled.csv, and re-collect
Recheck that all PRs now have labels. Delete `Unlabeled.csv` (if present), then re-run the collection script to update `sorted_prs.csv`:
```powershell
# Remove stale unlabeled output if it exists
Remove-Item 'Generated Files/ReleaseNotes/Unlabeled.csv' -ErrorAction SilentlyContinue
```
```powershell
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
---
## Common Label Mappings
| Keywords/Patterns | Suggested Label |
| ----------------- | --------------- |
| Advanced Paste, AP, clipboard, paste | `Product-Advanced Paste` |
| CmdPal, Command Palette, cmdpal | `Product-Command Palette` |
| FancyZones, zones, layout | `Product-FancyZones` |
| ZoomIt, zoom, screen annotation | `Product-ZoomIt` |
| Settings, settings-ui, Quick Access, flyout | `Product-Settings` |
| Installer, setup, MSI, MSIX, WiX | `Area-Setup/Install` |
| Build, pipeline, CI/CD, msbuild | `Area-Build` |
| Test, unit test, UI test, fuzz | `Area-Tests` |
| Localization, loc, translation, resw | `Area-Localization` |
| Foundry, AI, LLM | `Product-Advanced Paste` (AI features) |
| Mouse Without Borders, MWB | `Product-Mouse Without Borders` |
| PowerRename, rename, regex | `Product-PowerRename` |
| Peek, preview, file preview | `Product-Peek` |
| Image Resizer, resize | `Product-Image Resizer` |
| LightSwitch, theme, dark mode | `Product-LightSwitch` |
| Quick Accent, accent, diacritics | `Product-Quick Accent` |
| Awake, keep awake, caffeine | `Product-Awake` |
| ColorPicker, color picker, eyedropper | `Product-ColorPicker` |
| Hosts, hosts file | `Product-Hosts` |
| Keyboard Manager, remap | `Product-Keyboard Manager` |
| Mouse Highlighter | `Product-Mouse Highlighter` |
| Mouse Jump | `Product-Mouse Jump` |
| Find My Mouse | `Product-Find My Mouse` |
| Mouse Pointer Crosshairs | `Product-Mouse Pointer Crosshairs` |
| Shortcut Guide | `Product-Shortcut Guide` |
| Text Extractor, OCR, PowerOCR | `Product-Text Extractor` |
| Workspaces | `Product-Workspaces` |
| File Locksmith | `Product-File Locksmith` |
| Crop And Lock | `Product-CropAndLock` |
| Environment Variables | `Product-Environment Variables` |
| New+ | `Product-New+` |
## Label Filtering Rules
The grouping script keeps labels matching these patterns:
- `Product-*`
- `Area-*`
- `GitHub*`
- `*Plugin`
- `Issue-*`
Other labels are ignored for grouping purposes.
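For reference, a minimal sketch of that filter (mirroring the wildcard matching in `dump-prs-since-commit.ps1`; the sample labels are illustrative):
```powershell
# Sketch: keep only the label names used for grouping
$labels = @('Product-FancyZones', 'Area-Build', 'Needs-Triage')
$labels | Where-Object {
    ($_ -like 'Product-*') -or ($_ -like 'Area-*') -or ($_ -like 'GitHub*') -or
    ($_ -like '*Plugin') -or ($_ -like 'Issue-*')
}
# -> Product-FancyZones, Area-Build
```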

View File

@@ -1,37 +0,0 @@
# Step 3: Copilot Reviews and Grouping
## 3.0 To-do
- 3.1 Request Copilot Reviews (Agent Mode)
- 3.2 Refresh PR Data
- 3.3 Group PRs by Label
## 3.1 Request Copilot Reviews (Agent Mode)
Use MCP tools to request Copilot reviews for all PRs in `Generated Files/ReleaseNotes/sorted_prs.csv`:
- Use `mcp_github_request_copilot_review` for each PR ID
- Do NOT generate or run scripts for this step
---
## 3.2 Refresh PR Data
Re-run the collection script to capture Copilot review summaries into the `CopilotSummary` column:
```powershell
pwsh ./.github/skills/release-note-generation/scripts/dump-prs-since-commit.ps1 `
-StartCommit '{{PreviousReleaseTag}}' -Branch 'stable' `
-OutputDir 'Generated Files/ReleaseNotes'
```
---
## 3.3 Group PRs by Label
```powershell
pwsh ./.github/skills/release-note-generation/scripts/group-prs-by-label.ps1 -CsvPath 'Generated Files/ReleaseNotes/sorted_prs.csv' -OutDir 'Generated Files/ReleaseNotes/grouped_csv'
```
Creates `Generated Files/ReleaseNotes/grouped_csv/` with one CSV per label combination.
**Validation:** The `Unlabeled.csv` file should be minimal (ideally empty). If many PRs remain unlabeled, return to Step 2 (see [step2-labeling.md](./step2-labeling.md)).
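A quick way to check, as a minimal sketch against the default output directory:
```powershell
# Sketch: count how many PRs ended up in Unlabeled.csv (0 is the goal)
$unlabeled = 'Generated Files/ReleaseNotes/grouped_csv/Unlabeled.csv'
if (Test-Path $unlabeled) { (Import-Csv $unlabeled | Measure-Object).Count } else { 0 }
```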

View File

@@ -1,88 +0,0 @@
# Step 4: Summaries and Final Release Notes
## 4.0 To-do
- 4.1 Generate Summary Markdown (Agent Mode)
- 4.2 Produce Final Release Notes File
## 4.1 Generate Summary Markdown (Agent Mode)
For each CSV in `Generated Files/ReleaseNotes/grouped_csv/`, create a markdown file in `Generated Files/ReleaseNotes/grouped_md/`.
⚠️ **IMPORTANT:** Generate **ALL** markdown files first. Do NOT pause between files or ask for feedback during generation. Complete the entire batch; the human reviews afterwards.
### Structure per file
**1. Bullet list** - one concise, user-facing line per PR:
- Use the “Verb-ed + Scenario + Impact” sentence structure so readers think, “That's exactly what I need” or “Yes, that's an awesome fix.” The "impact" can be end-user focused (written to convey user excitement) or technical (performance/stability) when user-facing impact is minimal.
- If the impact is unremarkable or unclear, mark the PR as needing a human summary
- Source from Title, Body, and CopilotSummary (prefer CopilotSummary when available)
- If the column `NeedThanks` in CSV is `True`, append: `Thanks [@Author](https://github.com/Author)!`
- Do NOT include PR numbers in bullet lines
- Do NOT mention “security” or “privacy” issues, since these fixes are not publicly known and disclosing them could help attackers target earlier versions. Instead, describe the user-facing scenario, usage, or impact.
- If confidence < 70%, write: `Human Summary Needed: <PR full link>`
**See [SampleOutput.md](./SampleOutput.md) for examples of well-written bullet summaries.**
**2. Three-column table** (same PR order):
- Column 1: Concise summary (same as bullet)
- Column 2: PR link `[#ID](URL)`
- Column 3: Confidence level (High/Medium/Low)
### Review Process (AFTER all files generated)
- The human reviews each `grouped_md/*.md` file and requests rewrites as needed
- The human may say "rewrite Product-X" or "combine these bullets"; apply the changes to that specific file
- Do NOT interrupt generation to ask for feedback
---
## 4.2 Produce Final Release Notes File
Once all `grouped_md/*.md` files are reviewed and approved, consolidate into a single release notes file.
**Output:** `Generated Files/ReleaseNotes/v{{ReleaseVersion}}-release-notes.md`
### Structure
**1. Highlights section** (top):
- 8-12 bullets covering the most user-visible features and impactful fixes
- Pattern: `**Module**: brief description`
- Avoid internal refactors; focus on what users will notice
**2. Module sections** (alphabetical order):
- One section per product (Advanced Paste, Awake, Command Palette, etc.)
- Migrate bullet summaries from the approved `grouped_md/Product-*.md` files
- One 'Development' section for all remaining summaries from the approved `grouped_md/Area-*.md` files
- Re-review E2E, group release improvements by section, and move the most important items to the top of each section.
Some items in the Development section may overlap with a product area; move those to the module section where they fit better.
### Example Final Structure
```markdown
# PowerToys v{{ReleaseVersion}} Release Notes
## Highlights
- **Command Palette**: Added theme customization and drag-and-drop support
- **Advanced Paste**: Image input for AI, color detection in clipboard history
- **FancyZones**: New CLI tool for command-line layout management
...
---
## Advanced Paste
- Wrapped paste option lists in a single ScrollViewer
- Added image input handling for AI-powered transformations
...
## Awake
- Fixed timed mode expiration. Thanks [@daverayment](https://github.com/daverayment)!
...
---
## Development
...
```

View File

@@ -1,90 +0,0 @@
<#
.SYNOPSIS
Apply labels to PRs from a CSV file.
.DESCRIPTION
Reads a CSV with Id and Label columns and applies the specified label to each PR via GitHub CLI.
Supports dry-run mode to preview changes before applying.
.PARAMETER InputCsv
CSV file with Id and Label columns. Default: prs_to_label.csv
.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys
.PARAMETER WhatIf
Dry run - show what would be applied without making changes.
.EXAMPLE
pwsh ./apply-labels.ps1 -InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv'
.EXAMPLE
pwsh ./apply-labels.ps1 -InputCsv 'Generated Files/ReleaseNotes/prs_to_label.csv' -WhatIf
.NOTES
Requires: gh CLI authenticated with repo write access.
Input CSV format:
Id,Label
12345,Product-Advanced Paste
12346,Product-Settings
#>
[CmdletBinding()] param(
[Parameter(Mandatory=$false)][string]$InputCsv = 'prs_to_label.csv',
[Parameter(Mandatory=$false)][string]$Repo = 'microsoft/PowerToys',
[switch]$WhatIf
)
$ErrorActionPreference = 'Stop'
function Write-Info($m){ Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m){ Write-Host "[warn] $m" -ForegroundColor Yellow }
function Write-Err($m){ Write-Host "[error] $m" -ForegroundColor Red }
function Write-OK($m){ Write-Host "[ok] $m" -ForegroundColor Green }
if (-not (Get-Command gh -ErrorAction SilentlyContinue)) { Write-Err "GitHub CLI 'gh' not found in PATH"; exit 1 }
if (-not (Test-Path -LiteralPath $InputCsv)) { Write-Err "Input CSV not found: $InputCsv"; exit 1 }
$rows = Import-Csv -LiteralPath $InputCsv
if (-not $rows) { Write-Info "No rows in CSV."; exit 0 }
$firstCols = $rows[0].PSObject.Properties.Name
if (-not ($firstCols -contains 'Id' -and $firstCols -contains 'Label')) {
Write-Err "CSV must contain 'Id' and 'Label' columns"; exit 1
}
Write-Info "Processing $($rows.Count) label assignments..."
if ($WhatIf) { Write-Warn "DRY RUN - no changes will be made" }
$applied = 0
$skipped = 0
$failed = 0
foreach ($row in $rows) {
$id = $row.Id
$label = $row.Label
if ([string]::IsNullOrWhiteSpace($id) -or [string]::IsNullOrWhiteSpace($label)) {
Write-Warn "Skipping row with empty Id or Label"
$skipped++
continue
}
if ($WhatIf) {
Write-Info "Would apply label '$label' to PR #$id"
$applied++
continue
}
try {
gh pr edit $id --repo $Repo --add-label $label 2>&1 | Out-Null
Write-OK "Applied '$label' to PR #$id"
$applied++
} catch {
Write-Warn "Failed to apply label to PR #${id}: $_"
$failed++
}
}
Write-Info ""
Write-Info "Summary: Applied=$applied Skipped=$skipped Failed=$failed"

View File

@@ -1,172 +0,0 @@
<#
.SYNOPSIS
Collect existing PR milestones or (optionally) assign/apply a milestone to missing PRs in one script.
.DESCRIPTION
This unified script merges the behaviors of the previous add-milestone-column (collector) and
set-milestones-missing (remote updater) scripts.
Modes (controlled by switches):
1. Collect (default): For each PR Id in the input CSV, queries GitHub for the current milestone and
outputs a two-column CSV (Id,Milestone), leaving blanks where none are set.
2. LocalAssign: Same as Collect, but rows that end up blank are assigned the value of -DefaultMilestone
in memory (does NOT touch GitHub). Useful for quickly preparing a fully populated CSV.
3. ApplyMissing: After determining which PRs have no milestone, calls the GitHub API to set their milestone
to -DefaultMilestone. Requires the milestone to already exist (open). Network + write.
You can combine LocalAssign and ApplyMissing: the remote update uses the existing live state; LocalAssign only
affects the output CSV/pipeline objects.
.PARAMETER InputCsv
Source CSV with at least an Id column. Default: sorted_prs.csv
.PARAMETER OutputCsv
Destination CSV for collected (and optionally locally assigned) milestones. Default: prs_with_milestone.csv
.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys
.PARAMETER DefaultMilestone
Milestone title used when -LocalAssign or -ApplyMissing is specified. Default: 'PowerToys 0.97'
.PARAMETER Offline
Skip ALL GitHub lookups / updates. Implies Collect-only with all Milestone cells blank (unless LocalAssign).
.PARAMETER LocalAssign
Populate empty Milestone cells in the output with -DefaultMilestone (does not modify GitHub).
.PARAMETER ApplyMissing
For PRs which currently have no milestone (live on GitHub), set them to -DefaultMilestone via Issues API.
.PARAMETER WhatIf
Dry run for ApplyMissing: show intended remote changes without performing PATCH requests.
.EXAMPLE
# Collect only
pwsh ./collect-or-apply-milestones.ps1
.EXAMPLE
# Collect and fill blanks locally in the output only
pwsh ./collect-or-apply-milestones.ps1 -LocalAssign
.EXAMPLE
# Collect and remotely apply milestone to missing PRs
pwsh ./collect-or-apply-milestones.ps1 -ApplyMissing
.EXAMPLE
# Dry run remote application
pwsh ./collect-or-apply-milestones.ps1 -ApplyMissing -WhatIf
.EXAMPLE
# Offline local assignment
pwsh ./collect-or-apply-milestones.ps1 -Offline -LocalAssign -DefaultMilestone 'PowerToys 0.96'
.NOTES
Requires the gh CLI unless -Offline is specified; -ApplyMissing cannot be combined with -Offline.
The remote apply path queries the repository's milestones to resolve the numeric milestone ID.
#>
[CmdletBinding()] param(
[Parameter(Mandatory=$false)][string]$InputCsv = 'sorted_prs.csv',
[Parameter(Mandatory=$false)][string]$OutputCsv = 'prs_with_milestone.csv',
[Parameter(Mandatory=$false)][string]$Repo = 'microsoft/PowerToys',
[Parameter(Mandatory=$false)][string]$DefaultMilestone = 'PowerToys 0.97',
[switch]$Offline,
[switch]$LocalAssign,
[switch]$ApplyMissing,
[switch]$WhatIf
)
$ErrorActionPreference = 'Stop'
function Write-Info($m){ Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m){ Write-Host "[warn] $m" -ForegroundColor Yellow }
function Write-Err($m){ Write-Host "[error] $m" -ForegroundColor Red }
if (-not (Test-Path -LiteralPath $InputCsv)) { Write-Err "Input CSV not found: $InputCsv"; exit 1 }
$rows = Import-Csv -LiteralPath $InputCsv
if (-not $rows) { Write-Warn "Input CSV has no rows."; @() | Export-Csv -NoTypeInformation -LiteralPath $OutputCsv; exit 0 }
if (-not ($rows[0].PSObject.Properties.Name -contains 'Id')) { Write-Err "Input CSV missing 'Id' column."; exit 1 }
$needGh = -not $Offline
if ($needGh -and -not (Get-Command gh -ErrorAction SilentlyContinue)) { Write-Err "GitHub CLI 'gh' not found. Use -Offline or install gh."; exit 1 }
# Step 1: Collect current milestone titles
$milestoneCache = @{}
$collected = New-Object System.Collections.Generic.List[object]
$idx = 0
foreach ($row in $rows) {
$idx++
$id = $row.Id
if (-not $id) { Write-Warn "Row $idx missing Id; skipping"; continue }
$ms = ''
if (-not $Offline) {
if ($milestoneCache.ContainsKey($id)) { $ms = $milestoneCache[$id] }
else {
try {
$json = gh pr view $id --repo $Repo --json milestone 2>$null | ConvertFrom-Json
if ($json -and $json.milestone -and $json.milestone.title) { $ms = $json.milestone.title }
} catch {
Write-Warn "Failed to fetch PR #$id milestone: $_"
}
$milestoneCache[$id] = $ms
}
}
$collected.Add([PSCustomObject]@{ Id = $id; Milestone = $ms }) | Out-Null
}
# Step 2: Remote apply (if requested)
$applySummary = @()
if ($ApplyMissing) {
if ($Offline) { Write-Err "Cannot use -ApplyMissing with -Offline."; exit 1 }
Write-Info "Resolving milestone id for '$DefaultMilestone' ..."
$milestonesRaw = gh api repos/$Repo/milestones --paginate --jq '.[] | {number,title,state}'
$msObj = $milestonesRaw | ConvertFrom-Json | Where-Object { $_.title -eq $DefaultMilestone -and $_.state -eq 'open' } | Select-Object -First 1
if (-not $msObj) { Write-Err "Milestone '$DefaultMilestone' not found/open."; exit 1 }
$msNumber = $msObj.number
$targets = $collected | Where-Object { [string]::IsNullOrWhiteSpace($_.Milestone) }
Write-Info ("ApplyMissing: {0} PR(s) without milestone." -f $targets.Count)
foreach ($t in $targets) {
$id = $t.Id
try {
# Verify still missing live
$current = gh pr view $id --repo $Repo --json milestone --jq '.milestone.title // ""'
if ($current) {
$applySummary += [PSCustomObject]@{ Id=$id; Action='Skip (already has)'; Milestone=$current; Status='OK' }
continue
}
if ($WhatIf) {
$applySummary += [PSCustomObject]@{ Id=$id; Action='Would set'; Milestone=$DefaultMilestone; Status='DRY RUN' }
continue
}
gh api -X PATCH -H 'Accept: application/vnd.github+json' repos/$Repo/issues/$id -f milestone=$msNumber | Out-Null
$applySummary += [PSCustomObject]@{ Id=$id; Action='Set'; Milestone=$DefaultMilestone; Status='OK' }
# Reflect the change in the collected object so the CSV output matches (in case -LocalAssign is not also filling it in)
$t.Milestone = $DefaultMilestone
} catch {
$errText = $_ | Out-String
$applySummary += [PSCustomObject]@{ Id=$id; Action='Failed'; Milestone=$DefaultMilestone; Status=$errText.Trim() }
Write-Warn ("Failed to set milestone for PR #{0}: {1}" -f $id, ($errText.Trim()))
}
}
}
# Step 3: Local assignment (purely for output), run AFTER the remote step so the actual remote result is not accidentally overwritten
if ($LocalAssign) {
foreach ($item in $collected) {
if ([string]::IsNullOrWhiteSpace($item.Milestone)) { $item.Milestone = $DefaultMilestone }
}
}
# Step 4: Export CSV
$collected | Export-Csv -LiteralPath $OutputCsv -NoTypeInformation -Encoding UTF8
Write-Info ("Wrote collected CSV -> {0}" -f (Resolve-Path -LiteralPath $OutputCsv))
# Step 5: Summaries
if ($ApplyMissing) {
$updated = ($applySummary | Where-Object { $_.Action -eq 'Set' }).Count
$skipped = ($applySummary | Where-Object { $_.Action -like 'Skip*' }).Count
$failed = ($applySummary | Where-Object { $_.Action -eq 'Failed' }).Count
Write-Info ("ApplyMissing summary: Updated={0} Skipped={1} Failed={2}" -f $updated, $skipped, $failed)
}
# Emit objects (final collected set)
return $collected

View File

@@ -1,100 +0,0 @@
<#
.SYNOPSIS
Produce an incremental PR CSV containing rows present in a newer full export but absent from a baseline export.
.DESCRIPTION
Compares two previously generated sorted PR CSV files (same schema). Any row whose key column value
(defaults to 'Number') does not exist in the baseline file is emitted to a new incremental CSV, preserving
the original column order. If no new rows are found, an empty CSV (with headers when determinable) is written.
.PARAMETER BaseCsv
Path to the baseline (earlier) PR CSV.
.PARAMETER AllCsv
Path to the newer full PR CSV containing a superset (or equal set) of the baseline's rows.
.PARAMETER OutCsv
Path to write the incremental CSV containing only new rows.
.PARAMETER Key
Column name used as unique identifier (defaults to 'Number'). Must exist in both CSVs.
.EXAMPLE
pwsh ./diff_prs.ps1 -BaseCsv sorted_prs_prev.csv -AllCsv sorted_prs.csv -OutCsv sorted_prs_incremental.csv
.NOTES
Requires: PowerShell 7+, both CSVs with identical column schemas.
Exit code 0 on success (even if zero incremental rows). Throws on missing files.
#>
[CmdletBinding()] param(
[Parameter(Mandatory=$false)][string]$BaseCsv = "./sorted_prs_93_round1.csv",
[Parameter(Mandatory=$false)][string]$AllCsv = "./sorted_prs.csv",
[Parameter(Mandatory=$false)][string]$OutCsv = "./sorted_prs_93_incremental.csv",
[Parameter(Mandatory=$false)][string]$Key = "Number"
)
Set-StrictMode -Version Latest
$ErrorActionPreference = 'Stop'
function Write-Info($m) { Write-Host "[info] $m" -ForegroundColor Cyan }
function Write-Warn($m) { Write-Host "[warn] $m" -ForegroundColor Yellow }
if (-not (Test-Path -LiteralPath $BaseCsv)) { throw "Base CSV not found: $BaseCsv" }
if (-not (Test-Path -LiteralPath $AllCsv)) { throw "All CSV not found: $AllCsv" }
# Load CSVs
$baseRows = Import-Csv -LiteralPath $BaseCsv
$allRows = Import-Csv -LiteralPath $AllCsv
if (-not $baseRows) { Write-Warn "Base CSV has no rows." }
if (-not $allRows) { Write-Warn "All CSV has no rows." }
# Validate key presence
if ($baseRows -and -not ($baseRows[0].PSObject.Properties.Name -contains $Key)) { throw "Key column '$Key' not found in base CSV." }
if ($allRows -and -not ($allRows[0].PSObject.Properties.Name -contains $Key)) { throw "Key column '$Key' not found in all CSV." }
# Build a set of existing keys from base
$set = New-Object 'System.Collections.Generic.HashSet[string]'
foreach ($row in $baseRows) {
$val = [string]($row.$Key)
if ($null -ne $val) { [void]$set.Add($val) }
}
# Filter rows in AllCsv whose key is not in base (these are the new / incremental rows)
$incremental = @()
foreach ($row in $allRows) {
$val = [string]($row.$Key)
if (-not $set.Contains($val)) { $incremental += $row }
}
# Preserve column order from the All CSV
$columns = @()
if ($allRows.Count -gt 0) {
$columns = $allRows[0].PSObject.Properties.Name
}
try {
if ($incremental.Count -gt 0) {
if ($columns.Count -gt 0) {
$incremental | Select-Object -Property $columns | Export-Csv -LiteralPath $OutCsv -NoTypeInformation -Encoding UTF8
} else {
$incremental | Export-Csv -LiteralPath $OutCsv -NoTypeInformation -Encoding UTF8
}
} else {
# Write an empty CSV with headers if we know them (facilitates downstream tooling expecting header row)
if ($columns.Count -gt 0) {
$obj = [PSCustomObject]@{}
foreach ($c in $columns) { $obj | Add-Member -NotePropertyName $c -NotePropertyValue $null }
$obj | Select-Object -Property $columns | Export-Csv -LiteralPath $OutCsv -NoTypeInformation -Encoding UTF8
} else {
'' | Out-File -LiteralPath $OutCsv -Encoding UTF8
}
}
Write-Info ("Incremental rows: {0}" -f $incremental.Count)
Write-Info ("Output: {0}" -f (Resolve-Path -LiteralPath $OutCsv))
}
catch {
Write-Host "[error] Failed writing output CSV: $_" -ForegroundColor Red
exit 1
}

View File

@@ -1,344 +0,0 @@
<#
.SYNOPSIS
Export merged PR metadata between two commits (exclusive start, inclusive end) to JSON and CSV.
.DESCRIPTION
Identifies merge/squash commits reachable from EndCommit but not StartCommit, extracts PR numbers,
queries GitHub for metadata plus (optionally) Copilot review/comment summaries, filters labels, then
emits a JSON artifact and a sorted CSV (first label alphabetical).
.PARAMETER StartCommit
Exclusive starting commit (SHA, tag, or ref). Commits AFTER this one are considered.
.PARAMETER EndCommit
Inclusive ending commit (SHA, tag, or ref). If not provided, uses origin/<Branch> when Branch is set; otherwise uses HEAD.
.PARAMETER Repo
GitHub repository (owner/name). Default: microsoft/PowerToys.
.PARAMETER OutputCsv
Destination CSV path. Default: sorted_prs.csv.
.PARAMETER OutputJson
Destination JSON path containing raw PR objects. Default: milestone_prs.json.
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -Branch stable
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -EndCommit 89ef7654 -OutputCsv delta.csv
.NOTES
Requires: git, gh (authenticated). No Set-StrictMode to keep parity with existing release scripts.
#>
[CmdletBinding()]
param(
[Parameter(Mandatory = $true)][string]$StartCommit, # exclusive start (commits AFTER this one)
[string]$EndCommit,
[string]$Branch,
[string]$Repo = "microsoft/PowerToys",
[string]$OutputDir,
[string]$OutputCsv = "sorted_prs.csv",
[string]$OutputJson = "milestone_prs.json"
)
<#
.SYNOPSIS
Dump merged PR information whose merge commits are reachable from EndCommit but not from StartCommit.
.DESCRIPTION
Uses git rev-list to compute commits in the (StartCommit, EndCommit] range, extracts PR numbers from merge commit messages,
queries GitHub (gh CLI) for details, then outputs a CSV.
PR merge commit messages in PowerToys generally contain patterns like:
Merge pull request #12345 from ...
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -Branch stable
.EXAMPLE
pwsh ./dump-prs-since-commit.ps1 -StartCommit 0123abcd -EndCommit 89ef7654 -OutputCsv changes.csv
.NOTES
Requires: gh CLI authenticated; git available in working directory (must be inside PowerToys repo clone).
CopilotSummary behavior:
- Attempts to locate the latest GitHub Copilot authored review (preferred).
- If no review is found, lazily fetches PR comments to look for a Copilot-authored comment.
- Normalizes whitespace and strips newlines. Empty when no Copilot activity detected.
- Run with -Verbose to see whether the summary came from a 'review' or 'comment' source.
#>
function Write-Info($msg) { Write-Host $msg -ForegroundColor Cyan }
function Write-Warn($msg) { Write-Host $msg -ForegroundColor Yellow }
function Write-Err($msg) { Write-Host $msg -ForegroundColor Red }
function Write-DebugMsg($msg) { if ($PSBoundParameters.ContainsKey('Verbose') -or $VerbosePreference -eq 'Continue') { Write-Host "[VERBOSE] $msg" -ForegroundColor DarkGray } }
# Load member list from Generated Files/ReleaseNotes/MemberList.md (internal team - no thanks needed)
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$repoRoot = Resolve-Path (Join-Path $scriptDir "..\..\..\..")
$defaultMemberListPath = Join-Path $repoRoot "Generated Files\ReleaseNotes\MemberList.md"
$memberListPath = $defaultMemberListPath
if ($OutputDir) {
$memberListFromOutputDir = Join-Path $OutputDir "MemberList.md"
if (Test-Path $memberListFromOutputDir) {
$memberListPath = $memberListFromOutputDir
}
}
$memberList = @()
if (Test-Path $memberListPath) {
$memberListContent = Get-Content $memberListPath -Raw
# Extract usernames - skip markdown code fence lines, get all non-empty lines
$memberList = ($memberListContent -split "`n") | Where-Object { $_ -notmatch '^\s*```' -and $_.Trim() -ne '' } | ForEach-Object { $_.Trim() }
if (-not $memberList -or $memberList.Count -eq 0) {
Write-Err "MemberList.md is empty at $memberListPath"
exit 1
}
Write-DebugMsg "Loaded $($memberList.Count) members from MemberList.md"
} else {
Write-Err "MemberList.md not found at $memberListPath"
exit 1
}
# Validate we are in a git repo
#if (-not (Test-Path .git)) {
# Write-Err "Current directory does not appear to be the root of a git repository."
# exit 1
#}
# Resolve output directory (if specified)
if ($OutputDir) {
if (-not (Test-Path $OutputDir)) {
New-Item -ItemType Directory -Path $OutputDir -Force | Out-Null
}
if (-not [System.IO.Path]::IsPathRooted($OutputCsv)) {
$OutputCsv = Join-Path $OutputDir $OutputCsv
}
if (-not [System.IO.Path]::IsPathRooted($OutputJson)) {
$OutputJson = Join-Path $OutputDir $OutputJson
}
}
# Resolve commits
try {
if ($Branch) {
Write-Info "Fetching latest '$Branch' from origin (with tags)..."
git fetch origin $Branch --tags | Out-Null
if ($LASTEXITCODE -ne 0) { throw "git fetch origin $Branch --tags failed" }
}
$startSha = (git rev-parse --verify $StartCommit) 2>$null
if (-not $startSha) { throw "StartCommit '$StartCommit' not found" }
if ($Branch) {
$branchRef = $Branch
$branchSha = (git rev-parse --verify $branchRef) 2>$null
if (-not $branchSha) {
$branchRef = "origin/$Branch"
$branchSha = (git rev-parse --verify $branchRef) 2>$null
}
if (-not $branchSha) { throw "Branch '$Branch' not found" }
if (-not $PSBoundParameters.ContainsKey('EndCommit') -or [string]::IsNullOrWhiteSpace($EndCommit)) {
$EndCommit = $branchRef
}
}
if (-not $PSBoundParameters.ContainsKey('EndCommit') -or [string]::IsNullOrWhiteSpace($EndCommit)) {
$EndCommit = "HEAD"
}
$endSha = (git rev-parse --verify $EndCommit) 2>$null
if (-not $endSha) { throw "EndCommit '$EndCommit' not found" }
}
catch {
Write-Err $_
exit 1
}
Write-Info "Collecting commits between $startSha..$endSha (excluding start, including end)."
# Get list of commits reachable from end but not from start.
# IMPORTANT: In PowerShell, the .. operator creates a numeric/char range. If $startSha and $endSha look like hex strings,
# `$startSha..$endSha` must be passed as a single string argument.
$rangeArg = "$startSha..$endSha"
$commitList = git rev-list $rangeArg
# Normalize list (filter out empty strings)
$normalizedCommits = $commitList | Where-Object { $_ -and $_.Trim() -ne '' }
$commitCount = ($normalizedCommits | Measure-Object).Count
Write-DebugMsg ("Raw commitList length (including blanks): {0}" -f (($commitList | Measure-Object).Count))
Write-DebugMsg ("Normalized commit count: {0}" -f $commitCount)
if ($commitCount -eq 0) {
Write-Warn "No commits found in specified range ($startSha..$endSha)."; exit 0
}
Write-DebugMsg ("First 5 commits: {0}" -f (($normalizedCommits | Select-Object -First 5) -join ', '))
<#
Extract PR numbers from commits.
Patterns handled:
1. Merge commits: 'Merge pull request #12345 from ...'
2. Squash commits: 'Some feature change (#12345)' (GitHub default squash format)
We collect both. If a commit matches both (unlikely), it's deduped later.
#>
# Extract PR numbers from merge or squash commits
$mergeCommits = @()
foreach ($c in $normalizedCommits) {
$subject = git show -s --format=%s $c
$matched = $false
# Pattern 1: Traditional merge commit
if ($subject -match 'Merge pull request #([0-9]+) ') {
$prNumber = [int]$matches[1]
$mergeCommits += [PSCustomObject]@{ Sha = $c; Pr = $prNumber; Subject = $subject; Pattern = 'merge' }
Write-DebugMsg "Matched merge PR #$prNumber in commit $c"
$matched = $true
}
# Pattern 2: Squash merge subject line with ' (#12345)' at end (allow possible whitespace before paren)
if ($subject -match '\(#([0-9]+)\)$') {
$prNumber2 = [int]$matches[1]
# Avoid duplicate object if pattern 1 already captured same number for same commit
if (-not ($mergeCommits | Where-Object { $_.Sha -eq $c -and $_.Pr -eq $prNumber2 })) {
$mergeCommits += [PSCustomObject]@{ Sha = $c; Pr = $prNumber2; Subject = $subject; Pattern = 'squash' }
Write-DebugMsg "Matched squash PR #$prNumber2 in commit $c"
}
$matched = $true
}
if (-not $matched) {
Write-DebugMsg "No PR pattern in commit $c : $subject"
}
}
if (-not $mergeCommits -or $mergeCommits.Count -eq 0) {
Write-Warn "No merge commits with PR numbers found in range."; exit 0
}
# Deduplicate PR numbers (in case of revert or merges across branches)
$prNumbers = $mergeCommits | Select-Object -ExpandProperty Pr -Unique | Sort-Object
Write-Info ("Found {0} unique PRs: {1}" -f $prNumbers.Count, ($prNumbers -join ', '))
Write-DebugMsg ("Total merge commits examined: {0}" -f $mergeCommits.Count)
# Query GitHub for each PR
$prDetails = @()
function Get-CopilotSummaryFromPrJson {
param(
[Parameter(Mandatory=$true)]$PrJson,
[switch]$VerboseMode
)
# Returns a hashtable with Summary and Source keys.
$result = @{ Summary = ""; Source = "" }
if (-not $PrJson) { return $result }
$candidateAuthors = @(
'github-copilot[bot]', 'github-copilot', 'copilot'
)
# 1. Reviews (preferred): pick the LONGEST valid Copilot body, not the most recent
$reviews = $PrJson.reviews
if ($reviews) {
$copilotReviews = $reviews | Where-Object {
($candidateAuthors -contains $_.author.login -or $_.author.login -like '*copilot*') -and $_.body -and $_.body.Trim() -ne ''
}
if ($copilotReviews) {
$longest = $copilotReviews | Sort-Object { $_.body.Length } -Descending | Select-Object -First 1
if ($longest) {
$body = $longest.body
$norm = ($body -replace "`r", '') -replace "`n", ' '
$norm = $norm -replace '\s+', ' '
$result.Summary = $norm
$result.Source = 'review'
if ($VerboseMode) { Write-DebugMsg "Selected Copilot review length=$($body.Length) (longest)." }
return $result
}
}
}
# 2. Comments fallback (some repos surface Copilot summaries as PR comments rather than review objects)
if ($null -eq $PrJson.comments) {
try {
# Lazy fetch comments only if needed
$commentsJson = gh pr view $PrJson.number --repo $Repo --json comments 2>$null | ConvertFrom-Json
if ($commentsJson -and $commentsJson.comments) {
$PrJson | Add-Member -NotePropertyName comments -NotePropertyValue $commentsJson.comments -Force
}
} catch {
if ($VerboseMode) { Write-DebugMsg "Failed to fetch comments for PR #$($PrJson.number): $_" }
}
}
if ($PrJson.comments) {
$copilotComments = $PrJson.comments | Where-Object {
($candidateAuthors -contains $_.author.login -or $_.author.login -like '*copilot*') -and $_.body -and $_.body.Trim() -ne ''
}
if ($copilotComments) {
$longestC = $copilotComments | Sort-Object { $_.body.Length } -Descending | Select-Object -First 1
if ($longestC) {
$body = $longestC.body
$norm = ($body -replace "`r", '') -replace "`n", ' '
$norm = $norm -replace '\s+', ' '
$result.Summary = $norm
$result.Source = 'comment'
if ($VerboseMode) { Write-DebugMsg "Selected Copilot comment length=$($body.Length) (longest)." }
return $result
}
}
}
return $result
}
foreach ($pr in $prNumbers) {
Write-Info "Fetching PR #$pr ..."
try {
# Include comments only if Verbose asked; if not, we lazily pull when reviews are missing
$fields = 'number,title,labels,author,url,body,reviews'
if ($PSBoundParameters.ContainsKey('Verbose')) { $fields += ',comments' }
$json = gh pr view $pr --repo $Repo --json $fields 2>$null | ConvertFrom-Json
if ($null -eq $json) { throw "Empty response" }
$copilot = Get-CopilotSummaryFromPrJson -PrJson $json -VerboseMode:($PSBoundParameters.ContainsKey('Verbose'))
if ($copilot.Summary -and $copilot.Source -and $PSBoundParameters.ContainsKey('Verbose')) {
Write-DebugMsg "Copilot summary source=$($copilot.Source) chars=$($copilot.Summary.Length)"
} elseif (-not $copilot.Summary) {
Write-DebugMsg "No Copilot summary found for PR #$pr"
}
# Filter labels
$filteredLabels = $json.labels | Where-Object {
($_.name -like "Product-*") -or
($_.name -like "Area-*") -or
($_.name -like "GitHub*") -or
($_.name -like "*Plugin") -or
($_.name -like "Issue-*")
}
$labelNames = ($filteredLabels | ForEach-Object { $_.name }) -join ", "
$bodyValue = if ($json.body) { ($json.body -replace "`r", '') -replace "`n", ' ' } else { '' }
$bodyValue = $bodyValue -replace '\s+', ' '
# Determine if author needs thanks (not in member list)
$authorLogin = $json.author.login
$needThanks = $true
if ($memberList.Count -gt 0 -and $authorLogin) {
$needThanks = -not ($memberList -contains $authorLogin)
}
$prDetails += [PSCustomObject]@{
Id = $json.number
Title = $json.title
Labels = $labelNames
Author = $authorLogin
Url = $json.url
Body = $bodyValue
CopilotSummary = $copilot.Summary
NeedThanks = $needThanks
}
}
catch {
$err = $_
Write-Warn ("Failed to fetch PR #{0}: {1}" -f $pr, $err)
}
}
if (-not $prDetails) { Write-Warn "No PR details fetched."; exit 0 }
# Sort by Labels like original script (first label alphabetical)
$sorted = $prDetails | Sort-Object { ($_.Labels -split ',')[0] }
# Output JSON raw (optional)
$sorted | ConvertTo-Json -Depth 6 | Out-File -Encoding UTF8 $OutputJson
Write-Info "Saving CSV to $OutputCsv ..."
$sorted | Export-Csv $OutputCsv -NoTypeInformation
Write-Host "✅ Done. Generated $($prDetails.Count) PR rows." -ForegroundColor Green

View File

@@ -1,80 +0,0 @@
<#
.SYNOPSIS
Find a commit on a branch that has the same subject line as a reference commit.
.DESCRIPTION
Given a commit SHA (often from a release tag) and a branch name, this script
resolves the reference commit's subject, then searches the branch history for
commits with the exact same subject line. Useful when the release tag commit
is not reachable from your current branch history.
.PARAMETER Commit
The reference commit SHA or ref (e.g., v0.96.1 or a full SHA).
.PARAMETER Branch
The branch to search (e.g., stable or main). Defaults to stable.
.PARAMETER RepoPath
Path to the local repo. Defaults to current directory.
.EXAMPLE
pwsh ./find-commit-by-title.ps1 -Commit b62f6421845f7e5c92b8186868d98f46720db442 -Branch stable
#>
[CmdletBinding()]
param(
[Parameter(Mandatory = $true)][string]$Commit,
[string]$Branch = "stable",
[string]$RepoPath = "."
)
function Write-Info($msg) { Write-Host $msg -ForegroundColor Cyan }
function Write-Err($msg) { Write-Host $msg -ForegroundColor Red }
Push-Location $RepoPath
try {
Write-Info "Fetching latest '$Branch' from origin (with tags)..."
git fetch origin $Branch --tags | Out-Null
if ($LASTEXITCODE -ne 0) { throw "git fetch origin $Branch --tags failed" }
$commitSha = (git rev-parse --verify $Commit) 2>$null
if (-not $commitSha) { throw "Commit '$Commit' not found" }
$subject = (git show -s --format=%s $commitSha) 2>$null
if (-not $subject) { throw "Unable to read subject for '$commitSha'" }
$branchRef = $Branch
$branchSha = (git rev-parse --verify $branchRef) 2>$null
if (-not $branchSha) {
$branchRef = "origin/$Branch"
$branchSha = (git rev-parse --verify $branchRef) 2>$null
}
if (-not $branchSha) { throw "Branch '$Branch' not found" }
Write-Info "Reference commit: $commitSha"
Write-Info "Reference title: $subject"
Write-Info "Searching branch: $branchRef"
$logLines = git log $branchRef --format="%H|%s" | Where-Object { $_ -match '\|' }
$results = @()
foreach ($line in $logLines) {
$parts = $line -split '\|', 2
if ($parts.Count -eq 2 -and $parts[1] -eq $subject) {
$results += [PSCustomObject]@{ Sha = $parts[0]; Title = $parts[1] }
}
}
if (-not $results -or $results.Count -eq 0) {
Write-Info "No matching commit found on $branchRef for the given title."
exit 0
}
Write-Info ("Found {0} matching commit(s):" -f $results.Count)
$results | ForEach-Object { Write-Host ("{0} {1}" -f $_.Sha, $_.Title) }
}
catch {
Write-Err $_
exit 1
}
finally {
Pop-Location
}

View File

@@ -1,85 +0,0 @@
<#
.SYNOPSIS
Group PR rows by their Labels column and emit per-label CSV files.
.DESCRIPTION
Reads a milestone PR CSV (usually produced by dump-prs-information / dump-prs-since-commit scripts),
splits rows by label list, normalizes/sorts individual labels, and writes one CSV per unique label combination.
Each output preserves the original row ordering within that subset and column order from the source.
.PARAMETER CsvPath
Input CSV containing PR rows with a 'Labels' column (comma-separated list).
.PARAMETER OutDir
Output directory to place grouped CSVs (created if missing). Default: 'grouped_csv'.
.NOTES
Label combinations are joined using ' | ' when multiple labels are present. Filenames are sanitized (invalid characters
replaced, whitespace collapsed) and truncated to <= 120 characters.
#>
param(
[string]$CsvPath = "sorted_prs.csv",
[string]$OutDir = "grouped_csv"
)
$ErrorActionPreference = 'Stop'
function Write-Info($msg) { Write-Host "[info] $msg" -ForegroundColor Cyan }
function Write-Warn($msg) { Write-Host "[warn] $msg" -ForegroundColor Yellow }
if (-not (Test-Path -LiteralPath $CsvPath)) { throw "CSV not found: $CsvPath" }
Write-Info "Reading CSV: $CsvPath"
$rows = Import-Csv -LiteralPath $CsvPath
Write-Info ("Loaded {0} rows" -f $rows.Count)
function ConvertTo-SafeFileName {
[CmdletBinding()]
param(
[Parameter(Mandatory=$true)][string]$Name
)
if ([string]::IsNullOrWhiteSpace($Name)) { return 'Unnamed' }
$s = $Name -replace '[<>:"/\\|?*]', '-' # invalid path chars
$s = $s -replace '\s+', '-' # spaces to dashes
$s = $s -replace '-{2,}', '-' # collapse dashes
$s = $s.Trim('-')
if ($s.Length -gt 120) { $s = $s.Substring(0,120).Trim('-') }
if ([string]::IsNullOrWhiteSpace($s)) { return 'Unnamed' }
return $s
}
# Build groups keyed by normalized, sorted label combinations. Preserve original CSV row order.
$groups = @{}
foreach ($row in $rows) {
$labelsRaw = $row.Labels
if ([string]::IsNullOrWhiteSpace($labelsRaw)) {
$labelParts = @('Unlabeled')
} else {
$parts = $labelsRaw -split ',' | ForEach-Object { $_.Trim() } | Where-Object { $_ }
if (-not $parts -or $parts.Count -eq 0) { $labelParts = @('Unlabeled') }
else { $labelParts = $parts | Sort-Object }
}
$key = ($labelParts -join ' | ')
if (-not $groups.ContainsKey($key)) { $groups[$key] = New-Object System.Collections.ArrayList }
[void]$groups[$key].Add($row)
}
if (-not (Test-Path -LiteralPath $OutDir)) {
Write-Info "Creating output directory: $OutDir"
New-Item -ItemType Directory -Path $OutDir | Out-Null
}
Write-Info ("Generating {0} grouped CSV file(s) into: {1}" -f $groups.Count, $OutDir)
foreach ($key in $groups.Keys) {
$labelParts = if ($key -eq 'Unlabeled') { @('Unlabeled') } else { $key -split '\s\|\s' }
$safeName = ($labelParts | ForEach-Object { ConvertTo-SafeFileName -Name $_ }) -join '-'
$filePath = Join-Path $OutDir ("$safeName.csv")
# Keep same columns and order
$groups[$key] | Export-Csv -LiteralPath $filePath -NoTypeInformation -Encoding UTF8
}
Write-Info "Done. Sample output files:"
Get-ChildItem -LiteralPath $OutDir | Select-Object -First 10 Name | Format-Table -HideTableHeaders

1
.gitignore vendored
View File

@@ -359,4 +359,3 @@ src/common/Telemetry/*.etl
# PowerToysInstaller Build Temp Files
installer/*/*.wxs.bk
/src/modules/awake/.claude

View File

@@ -117,7 +117,6 @@
"WinUI3Apps\\PowerToys.FileLocksmithUI.dll",
"WinUI3Apps\\PowerToys.FileLocksmithContextMenu.dll",
"FileLocksmithContextMenuPackage.msix",
"FileLocksmithCLI.exe",
"WinUI3Apps\\Peek.Common.dll",
"WinUI3Apps\\Peek.FilePreviewer.dll",
@@ -125,10 +124,6 @@
"WinUI3Apps\\Powertoys.Peek.UI.exe",
"WinUI3Apps\\Powertoys.Peek.dll",
"WinUI3Apps\\PowerToys.QuickAccess.dll",
"WinUI3Apps\\PowerToys.QuickAccess.exe",
"WinUI3Apps\\PowerToys.Settings.UI.Controls.dll",
"WinUI3Apps\\PowerToys.EnvironmentVariablesModuleInterface.dll",
"WinUI3Apps\\PowerToys.EnvironmentVariablesUILib.dll",
"WinUI3Apps\\PowerToys.EnvironmentVariables.dll",
@@ -136,8 +131,6 @@
"PowerToys.ImageResizer.exe",
"PowerToys.ImageResizer.dll",
"WinUI3Apps\\PowerToys.ImageResizerCLI.exe",
"WinUI3Apps\\PowerToys.ImageResizerCLI.dll",
"PowerToys.ImageResizerExt.dll",
"PowerToys.ImageResizerContextMenu.dll",
"ImageResizerContextMenuPackage.msix",

View File

@@ -504,14 +504,6 @@ jobs:
Remove-Item -Force -Recurse "$(JobOutputDirectory)/_appx" -ErrorAction:Ignore
displayName: Re-pack the new CmdPal package after signing
- pwsh: |
$testsPath = "$(Build.SourcesDirectory)/$(BuildPlatform)/$(BuildConfiguration)/tests"
if (Test-Path $testsPath) {
Remove-Item -Path $testsPath -Recurse -Force
Write-Host "Removed tests folder to reduce signing workload: $testsPath"
}
displayName: Remove tests folder before signing
- template: steps-esrp-signing.yml
parameters:
displayName: Sign Core PowerToys

View File

@@ -68,13 +68,14 @@ jobs:
- template: .\steps-restore-nuget.yml
- task: MSBuild@1
- task: NuGetCommand@2
displayName: Restore solution-level NuGet packages
inputs:
solution: PowerToys.slnx
msbuildArguments: '/t:restore /p:RestorePackagesConfig=true'
platform: $(BuildPlatform)
configuration: $(BuildConfiguration)
command: restore
feedsToUse: config
configPath: nuget.config
restoreSolution: PowerToys.slnx
restoreDirectory: '$(Build.SourcesDirectory)\packages'
# Build all UI test projects if no specific modules are specified
- ${{ if eq(length(parameters.uiTestModules), 0) }}:

View File

@@ -10,7 +10,7 @@ parameters:
default: {}
steps:
- task: EsrpCodeSigning@6
- task: EsrpCodeSigning@5
displayName: 🔏 ${{ parameters.displayName }}
inputs:
ConnectedServiceName: ${{ parameters.signingIdentity.serviceName }}

13
.vscode/mcp.json vendored
View File

@@ -1,13 +0,0 @@
{
"servers": {
"github-artifacts": {
"command": "node",
"args": [
"tools/mcp/github-artifacts/launch.js"
],
"env": {
"GITHUB_TOKEN": "${env:GITHUB_TOKEN}"
}
}
}
}

17
.vscode/settings.json vendored
View File

@@ -1,17 +0,0 @@
{
"github.copilot.chat.reviewSelection.instructions": [
{
"file": ".github/prompts/review-pr.prompt.md"
}
],
"github.copilot.chat.commitMessageGeneration.instructions": [
{
"file": ".github/prompts/create-commit-title.prompt.md"
}
],
"github.copilot.chat.pullRequestDescriptionGeneration.instructions": [
{
"file": ".github/prompts/create-pr-summary.prompt.md"
}
]
}

106
.vscode/tasks.json vendored
View File

@@ -1,106 +0,0 @@
{
"version": "2.0.0",
"windows": {
"options": {
"shell": {
"executable": "cmd.exe",
"args": ["/d", "/c"]
}
}
},
"inputs": [
{
"id": "config",
"type": "pickString",
"description": "Configuration",
"options": ["Debug", "Release"],
"default": "Debug"
},
{
"id": "platform",
"type": "pickString",
"description": "Platform (leave empty to auto-detect host platform)",
"options": ["", "X64", "ARM64"],
"default": "X64"
},
{
"id": "msbuildExtra",
"type": "promptString",
"description": "Extra MSBuild args (optional). Example: /p:CIBuild=true /m",
"default": ""
}
],
"tasks": [
{
"label": "PT: Build (quick)",
"type": "shell",
"command": "\"${workspaceFolder}\\tools\\build\\build.cmd\"",
"args": [
"-Path",
"${fileDirname}"
],
"group": { "kind": "build", "isDefault": true },
"presentation": {
"reveal": "always",
"panel": "dedicated",
"clear": true
},
"problemMatcher": "$msCompile"
},
{
"label": "PT: Build (with options)",
"type": "shell",
"command": "\"${workspaceFolder}\\tools\\build\\build.cmd\"",
"args": [
"-Path",
"${fileDirname}",
"-Platform",
"${input:platform}",
"-Configuration",
"${input:config}",
"${input:msbuildExtra}"
],
"presentation": {
"reveal": "always",
"panel": "dedicated",
"clear": true
},
"problemMatcher": "$msCompile"
},
{
"label": "PT: Build Essentials (quick)",
"type": "shell",
"command": "\"${workspaceFolder}\\tools\\build\\build-essentials.cmd\"",
"args": [],
"presentation": {
"reveal": "always",
"panel": "dedicated",
"clear": true
},
"problemMatcher": "$msCompile"
},
{
"label": "PT: Build Essentials (with options)",
"type": "shell",
"command": "\"${workspaceFolder}\\tools\\build\\build-essentials.cmd\"",
"args": [
"-Platform",
"${input:platform}",
"-Configuration",
"${input:config}",
"${input:msbuildExtra}"
],
"presentation": {
"reveal": "always",
"panel": "dedicated",
"clear": true
},
"problemMatcher": "$msCompile"
}
]
}

165
AGENTS.md
View File

@@ -1,165 +0,0 @@
---
description: 'Top-level AI contributor guidance for developing PowerToys - a collection of Windows productivity utilities'
applyTo: '**'
---
# PowerToys AI Contributor Guide
This is the top-level guidance for AI contributions to PowerToys. Keep changes atomic, follow existing patterns, and cite exact paths in PRs.
## Overview
PowerToys is a set of utilities for power users to tune and streamline their Windows experience.
| Area | Location | Description |
|------|----------|-------------|
| Runner | `src/runner/` | Main executable, tray icon, module loader, hotkey management |
| Settings UI | `src/settings-ui/` | WinUI/WPF configuration app communicating via named pipes |
| Modules | `src/modules/` | Individual PowerToys utilities (each in its own subfolder) |
| Common Libraries | `src/common/` | Shared code: logging, IPC, settings, DPI, telemetry, utilities |
| Build Tools | `tools/build/` | Build scripts and automation |
| Documentation | `doc/devdocs/` | Developer documentation |
| Installer | `installer/` | WiX-based installer projects |
For architecture details and module types, see [Architecture Overview](doc/devdocs/core/architecture.md).
## Conventions
For detailed coding conventions, see:
- [Coding Guidelines](doc/devdocs/development/guidelines.md) Dependencies, testing, PR management
- [Coding Style](doc/devdocs/development/style.md) Formatting, C++/C#/XAML style rules
- [Logging](doc/devdocs/development/logging.md) C++ spdlog and C# Logger usage
### Component-Specific Instructions
These instruction files are automatically applied when working in their respective areas:
- [Runner & Settings UI](.github/instructions/runner-settings-ui.instructions.md) IPC contracts, schema migrations
- [Common Libraries](.github/instructions/common-libraries.instructions.md) ABI stability, shared code guidelines
## Build
### Prerequisites
- Visual Studio 2022 17.4+
- Windows 10 1803+ (April 2018 Update or newer)
- Initialize submodules once: `git submodule update --init --recursive`
### Build Commands
| Task | Command |
|------|---------|
| First build / NuGet restore | `tools\build\build-essentials.cmd` |
| Build current folder | `tools\build\build.cmd` |
| Build with options | `build.ps1 -Platform x64 -Configuration Release` |
### Build Discipline
1. One terminal per operation (build → test). Do not switch or open new ones mid-flow
2. After making changes, `cd` to the project folder that changed (`.csproj`/`.vcxproj`)
3. Use scripts to build: `tools/build/build.ps1` or `tools/build/build.cmd`
4. For first build or missing NuGet packages, run `build-essentials.cmd` first
5. **Exit code 0 = success; non-zero = failure** treat this as absolute
6. On failure, read the errors log: `build.<config>.<platform>.errors.log`
7. Do not start tests or launch Runner until the build succeeds
### Build Logs
Located next to the solution/project being built:
- `build.<configuration>.<platform>.errors.log` errors only (check this first)
- `build.<configuration>.<platform>.all.log` full log
- `build.<configuration>.<platform>.trace.binlog` for MSBuild Structured Log Viewer
For complete details, see [Build Guidelines](tools/build/BUILD-GUIDELINES.md).
## Tests
### Test Discovery
- Find test projects by product code prefix (e.g., `FancyZones`, `AdvancedPaste`)
- Look for sibling folders or 1-2 levels up named `<Product>*UnitTests` or `<Product>*UITests`
### Running Tests
1. **Build the test project first**, wait for exit code 0
2. Run via VS Test Explorer (`Ctrl+E, T`) or `vstest.console.exe` with filters
3. **Avoid `dotnet test`** in this repo use VS Test Explorer or vstest.console.exe
### Test Types
| Type | Requirements | Setup |
|------|--------------|-------|
| Unit Tests | Standard dev environment | None |
| UI Tests | WinAppDriver v1.2.1, Developer Mode | Install from [WinAppDriver releases](https://github.com/microsoft/WinAppDriver/releases/tag/v1.2.1) |
| Fuzz Tests | OneFuzz, .NET 8 | See [Fuzzing Tests](doc/devdocs/tools/fuzzingtesting.md) |
### Test Discipline
1. Add or adjust tests when changing behavior
2. If tests skipped, state why (e.g., comment-only change, string rename)
3. New modules handling file I/O or user input **must** implement fuzzing tests
### Special Requirements
- **Mouse Without Borders**: Requires 2+ physical computers (not VMs)
- **Multi-monitor utilities**: Test with 2+ monitors, different DPI settings
For UI test setup details, see [UI Tests](doc/devdocs/development/ui-tests.md).
## Boundaries
### Ask for Clarification When
- Ambiguous spec after scanning relevant docs
- Cross-module impact (shared enum/struct) is unclear
- Security, elevation, or installer changes involved
- GPO or policy handling modifications needed
### Areas Requiring Extra Care
| Area | Concern | Reference |
|------|---------|-----------|
| `src/common/` | ABI breaks | [Common Libraries Instructions](.github/instructions/common-libraries.instructions.md) |
| `src/runner/`, `src/settings-ui/` | IPC contracts, schema | [Runner & Settings UI Instructions](.github/instructions/runner-settings-ui.instructions.md) |
| Installer files | Release impact | Careful review required |
| Elevation/GPO logic | Security | Confirm no regression in policy handling |
### What NOT to Do
- Don't merge incomplete features into main (use feature branches)
- Don't break IPC/JSON contracts without updating both runner and settings-ui
- Don't add noisy logs in hot paths
- Don't introduce third-party deps without PM approval and `NOTICE.md` update
## Validation Checklist
Before finishing, verify:
- [ ] Build clean with exit code 0
- [ ] Tests updated and passing locally
- [ ] No unintended ABI breaks or schema changes
- [ ] IPC contracts consistent between runner and settings-ui
- [ ] New dependencies added to `NOTICE.md`
- [ ] PR is atomic (one logical change), with issue linked
## Documentation Index
### Core Architecture
- [Architecture Overview](doc/devdocs/core/architecture.md)
- [Runner](doc/devdocs/core/runner.md)
- [Settings System](doc/devdocs/core/settings/readme.md)
- [Module Interface](doc/devdocs/modules/interface.md)
### Development
- [Coding Guidelines](doc/devdocs/development/guidelines.md)
- [Coding Style](doc/devdocs/development/style.md)
- [Logging](doc/devdocs/development/logging.md)
- [UI Tests](doc/devdocs/development/ui-tests.md)
- [Fuzzing Tests](doc/devdocs/tools/fuzzingtesting.md)
### Build & Tools
- [Build Guidelines](tools/build/BUILD-GUIDELINES.md)
- [Tools Overview](doc/devdocs/tools/readme.md)
### Instructions (Auto-Applied)
- [Runner & Settings UI](.github/instructions/runner-settings-ui.instructions.md)
- [Common Libraries](.github/instructions/common-libraries.instructions.md)

View File

@@ -6,6 +6,9 @@ Names are in alphabetical order based on first name.
## High impact community members
### [@Noraa-Junker](https://github.com/Noraa-Junker) - [Noraa Junker](https://noraajunker.ch)
Noraa has helped triaging, discussing, and creating a substantial number of issues and contributed features/fixes. Noraa was the primary person for helping build the File Explorer preview pane handler for developer files.
### [@cgaarden](https://github.com/cgaarden) - [Christian Gaarden Gaardmark](https://www.onegreatworld.com)
Christian contributed the New+ utility.
@@ -39,12 +42,6 @@ Jay has helped triaging, discussing, creating a substantial number of issues and
### [@jefflord](https://github.com/jefflord) - Jeff Lord
Jeff added in multiple new features into Keyboard manager, such as key chord support and launching apps. He also contributed multiple features/fixes to PowerToys.
### [@snickler](https://github.com/snickler) - [Jeremy Sinclair](http://sinclairinat0r.com)
Jeremy has helped drive large sums of the ARM64 support inside PowerToys
### [@jiripolasek](https://github.com/jiripolasek) - [Jiří Polášek](https://github.com/jiripolasek)
Jiří has contributed a massive number of features and improvements to Command Palette, including drag & drop support, custom themes, Web Search enhancements, Remote Desktop extension fixes, and many UX improvements.
### [@TheJoeFin](https://github.com/TheJoeFin) - [Joe Finney](https://joefinapps.com)
Joe has helped triaging, discussing, issues as well as fixing bugs and building features for Text Extractor.
@@ -60,9 +57,6 @@ Color Picker is from Martin.
### [@mikeclayton](https://github.com/mikeclayton) - [Michael Clayton](https://michael-clayton.com)
Michael contributed the [initial version](https://github.com/microsoft/PowerToys/issues/23216) of the Mouse Jump tool and [a number of updates](https://github.com/microsoft/PowerToys/pulls?q=is%3Apr+author%3Amikeclayton) based on his FancyMouse utility.
### [@Noraa-Junker](https://github.com/Noraa-Junker) - [Noraa Junker](https://noraajunker.ch)
Noraa has helped triaging, discussing, and creating a substantial number of issues and contributed features/fixes. Noraa was the primary person for helping build the File Explorer preview pane handler for developer files.
### [@pedrolamas](https://github.com/pedrolamas/) - Pedro Lamas
Pedro helped create the thumbnail and File Explorer previewers for 3D files like STL and GCode. If you like 3D printing, these are very helpful.
@@ -75,12 +69,15 @@ Rafael has helped do the [upgrade from CppWinRT 1.x to 2.0](https://github.com/m
### [@royvou](https://github.com/royvou)
Roy has helped out contributing multiple features to PowerToys Run
### [@ThiefZero](https://github.com/ThiefZero)
ThiefZero has helped out contributing features to PowerToys Run such as the unit converter plugin
### [@snickler](https://github.com/snickler) - [Jeremy Sinclair](http://sinclairinat0r.com)
Jeremy has helped drive large sums of the ARM64 support inside PowerToys
### [@TobiasSekan](https://github.com/TobiasSekan) - Tobias Sekan
Tobias Sekan has helped out contributing features to PowerToys Run such as Settings plugin, Registry plugin
### [@ThiefZero](https://github.com/ThiefZero)
ThiefZero has helped out contributing features to PowerToys Run such as the unit converter plugin
## Open source projects
As PowerToys creates new utilities, some will be based off existing technology. We'll continue to do our best to contribute back to these projects but their efforts were the base of some of our projects. We want to be sure their work is directly recognized.
@@ -190,10 +187,18 @@ ZoomIt source code was originally implemented by [Sysinternals](https://sysinter
- [@niels9001](https://github.com/niels9001/) - Niels Laute - Product Manager
- [@dhowett](https://github.com/dhowett) - Dustin Howett - Dev Lead
- [@yeelam-gordon](https://github.com/yeelam-gordon) - Gordon Lam - Dev Lead
- [@jamrobot](https://github.com/jamrobot) - Jerry Xu - Dev Lead
- [@lei9444](https://github.com/lei9444) - Leilei Zhang - Dev
- [@shuaiyuanxx](https://github.com/shuaiyuanxx) - Shawn Yuan - Dev
- [@moooyo](https://github.com/moooyo) - Yu Leng - Dev
- [@haoliuu](https://github.com/haoliuu) - Hao Liu - Dev
- [@chenmy77](https://github.com/chenmy77) - Mengyuan Chen - Dev
- [@chemwolf6922](https://github.com/chemwolf6922) - Feng Wang - Dev
- [@yaqingmi](https://github.com/yaqingmi) - Yaqing Mi - Dev
- [@zhaoqpcn](https://github.com/zhaoqpcn) - Qingpeng Zhao - Dev
- [@urnotdfs](https://github.com/urnotdfs) - Xiaofeng Wang - Dev
- [@zhaopy536](https://github.com/zhaopy536) - Peiyao Zhao - Dev
- [@wang563681252](https://github.com/wang563681252) - Zhaopeng Wang - Dev
- [@vanzue](https://github.com/vanzue) - Kai Tao - Dev
- [@zadjii-msft](https://github.com/zadjii-msft) - Mike Griese - Dev
- [@khmyznikov](https://github.com/khmyznikov) - Gleb Khmyznikov - Dev
@@ -224,12 +229,3 @@ ZoomIt source code was originally implemented by [Sysinternals](https://sysinter
- [@SeraphimaZykova](https://github.com/SeraphimaZykova) - Seraphima Zykova - Dev
- [@stefansjfw](https://github.com/stefansjfw) - Stefan Markovic - Dev
- [@jaimecbernardo](https://github.com/jaimecbernardo) - Jaime Bernardo - Dev Lead
- [@haoliuu](https://github.com/haoliuu) - Hao Liu - Dev
- [@chenmy77](https://github.com/chenmy77) - Mengyuan Chen - Dev
- [@chemwolf6922](https://github.com/chemwolf6922) - Feng Wang - Dev
- [@yaqingmi](https://github.com/yaqingmi) - Yaqing Mi - Dev
- [@zhaoqpcn](https://github.com/zhaoqpcn) - Qingpeng Zhao - Dev
- [@urnotdfs](https://github.com/urnotdfs) - Xiaofeng Wang - Dev
- [@zhaopy536](https://github.com/zhaopy536) - Peiyao Zhao - Dev
- [@wang563681252](https://github.com/wang563681252) - Zhaopeng Wang - Dev
- [@jamrobot](https://github.com/jamrobot) - Jerry Xu - Dev Lead

View File

@@ -71,14 +71,6 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.Uninstall_Success</td>
<td>Logs when PowerToys is successfully uninstalled (who would do such a thing!).</td>
</tr>
<tr>
<td>Microsoft.PowerToys.UpdateCheck_Completed</td>
<td>Logs when an auto-update check completes, including success status, whether an update is available, and version information.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.UpdateDownload_Completed</td>
<td>Logs when an update download completes, including success status and version.</td>
</tr>
</table>
### OOBE (Out-of-box experience)
@@ -452,10 +444,6 @@ _If you want to find diagnostic data events in the source code, these two links
<td>Microsoft.PowerToys.FancyZones_ZoneWindowKeyUp</td>
<td>Occurs when a key is released while interacting with zones.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.FancyZones_CLICommand</td>
<td>Triggered when a FancyZones CLI command is executed, logging the command name and success status.</td>
</tr>
</table>
### FileExplorerAddOns
@@ -578,7 +566,7 @@ _If you want to find diagnostic data events in the source code, these two links
</tr>
<tr>
<td>Microsoft.PowerToys.FindMyMouse_MousePointerFocused</td>
<td>Occurs when the mouse pointer is focused using Find My Mouse, including the activation method (double-tap left/right Ctrl, shake mouse, or shortcut).</td>
<td>Occurs when the mouse pointer is focused using Find My Mouse.</td>
</tr>
</table>
@@ -702,30 +690,6 @@ _If you want to find diagnostic data events in the source code, these two links
</tr>
</table>
### Light Switch
<table style="width:100%">
<tr>
<th>Event Name</th>
<th>Description</th>
</tr>
<tr>
<td>Microsoft.PowerToys.LightSwitch_EnableLightSwitch</td>
<td>Triggered when Light Switch is enabled or disabled.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.LightSwitch_ShortcutInvoked</td>
<td>Occurs when the shortcut for Light Switch is invoked.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.LightSwitch_ScheduleModeToggled</td>
<td>Occurs when a new schedule mode is selected for Light Switch.</td>
</tr>
<tr>
<td>Microsoft.PowerToys.LightSwitch_ThemeTargetChanged</td>
<td>Occurs when the options for targeting the system or apps is updated.</td>
</tr>
</table>
### Mouse Highlighter
<table style="width:100%">
<tr>

View File

@@ -13,8 +13,6 @@
<PackageVersion Include="Microsoft.Bot.AdaptiveExpressions.Core" Version="4.23.0" />
<PackageVersion Include="Appium.WebDriver" Version="4.4.5" />
<PackageVersion Include="CoenM.ImageSharp.ImageHash" Version="1.3.6" />
<!-- Pin the SixLabors.ImageSharp version (a transitive dependency of CoenM.ImageSharp.ImageHash) to restore functionality and apply patches. -->
<PackageVersion Include="SixLabors.ImageSharp" Version="2.1.12" />
<PackageVersion Include="CommunityToolkit.Common" Version="8.4.0" />
<PackageVersion Include="CommunityToolkit.Mvvm" Version="8.4.0" />
<PackageVersion Include="CommunityToolkit.WinUI.Animations" Version="8.2.250402" />
@@ -26,7 +24,7 @@
<PackageVersion Include="CommunityToolkit.WinUI.Converters" Version="8.2.250402" />
<PackageVersion Include="CommunityToolkit.WinUI.Extensions" Version="8.2.250402" />
<PackageVersion Include="CommunityToolkit.WinUI.UI.Controls.DataGrid" Version="7.1.2" />
<PackageVersion Include="CommunityToolkit.Labs.WinUI.Controls.MarkdownTextBlock" Version="0.1.260116-build.2514" />
<PackageVersion Include="CommunityToolkit.Labs.WinUI.Controls.MarkdownTextBlock" Version="0.1.251002-build.2316" />
<PackageVersion Include="ControlzEx" Version="6.0.0" />
<PackageVersion Include="HelixToolkit" Version="2.24.0" />
<PackageVersion Include="HelixToolkit.Core.Wpf" Version="2.24.0" />

View File

@@ -420,7 +420,6 @@
</Project>
</Folder>
<Folder Name="/modules/FileLocksmith/">
<Project Path="src/modules/FileLocksmith/FileLocksmithCLI/FileLocksmithCLI.vcxproj" Id="49D456D3-F485-45AF-8875-45B44F193DDC" />
<Project Path="src/modules/FileLocksmith/FileLocksmithContextMenu/FileLocksmithContextMenu.vcxproj" Id="799a50d8-de89-4ed1-8ff8-ad5a9ed8c0ca" />
<Project Path="src/modules/FileLocksmith/FileLocksmithExt/FileLocksmithExt.vcxproj" Id="57175ec7-92a5-4c1e-8244-e3fbca2a81de" />
<Project Path="src/modules/FileLocksmith/FileLocksmithLib/FileLocksmithLib.vcxproj" Id="9d52fd25-ef90-4f9a-a015-91efc5daf54f" />
@@ -430,9 +429,6 @@
<Platform Solution="*|x64" Project="x64" />
</Project>
</Folder>
<Folder Name="/modules/FileLocksmith/Tests/">
<Project Path="src/modules/FileLocksmith/FileLocksmithCLI/tests/FileLocksmithCLIUnitTests.vcxproj" Id="A1B2C3D4-E5F6-7890-1234-567890ABCDEF" />
</Folder>
<Folder Name="/modules/Hosts/">
<Project Path="src/modules/Hosts/Hosts/Hosts.csproj">
<Platform Solution="*|ARM64" Project="ARM64" />
@@ -463,10 +459,6 @@
<Platform Solution="*|ARM64" Project="ARM64" />
<Platform Solution="*|x64" Project="x64" />
</Project>
<Project Path="src/modules/imageresizer/ImageResizerCLI/ImageResizerCLI.csproj">
<Platform Solution="*|ARM64" Project="ARM64" />
<Platform Solution="*|x64" Project="x64" />
</Project>
</Folder>
<Folder Name="/modules/imageresizer/Tests/">
<Project Path="src/modules/imageresizer/tests/ImageResizer.UnitTests.csproj">
@@ -665,7 +657,6 @@
</Project>
</Folder>
<Folder Name="/modules/LightSwitch/">
<Project Path="src/modules/LightSwitch/LightSwitchLib/LightSwitchLib.vcxproj" Id="79267138-2895-4346-9021-21408d65379f" />
<Project Path="src/modules/LightSwitch/LightSwitchModuleInterface/LightSwitchModuleInterface.vcxproj" Id="38177d56-6ad1-4adf-88c9-2843a7932166" />
<Project Path="src/modules/LightSwitch/LightSwitchService/LightSwitchService.vcxproj" Id="08e71c67-6a7e-4ca1-b04e-2fb336410bac" />
</Folder>
@@ -1006,14 +997,6 @@
<Project Path="src/modules/ZoomIt/ZoomItSettingsInterop/ZoomItSettingsInterop.vcxproj" Id="ca7d8106-30b9-4aec-9d05-b69b31b8c461" />
</Folder>
<Folder Name="/settings-ui/">
<Project Path="src/settings-ui/QuickAccess.UI/PowerToys.QuickAccess.csproj">
<Platform Solution="*|ARM64" Project="ARM64" />
<Platform Solution="*|x64" Project="x64" />
</Project>
<Project Path="src/settings-ui/Settings.UI.Controls/Settings.UI.Controls.csproj">
<Platform Solution="*|ARM64" Project="ARM64" />
<Platform Solution="*|x64" Project="x64" />
</Project>
<Project Path="src/settings-ui/Settings.UI.Library/Settings.UI.Library.csproj">
<Platform Solution="*|ARM64" Project="ARM64" />
<Platform Solution="*|x64" Project="x64" />

README.md
View File

@@ -48,22 +48,22 @@ But to get started quickly, choose one of the installation methods below:
<details open>
<summary><strong>Download .exe from GitHub</strong></summary>
<br/>
Go to the <a href="https://aka.ms/installPowerToys">PowerToys GitHub releases</a>, click Assets to reveal the downloads, and choose the installer that matches your architecture and install scope. For most devices, that's the x64 per-user installer.
Go to the [PowerToys GitHub releases][github-release-link], click Assets to reveal the downloads, and choose the installer that matches your architecture and install scope. For most devices, that's the x64 per-user installer.
<!-- items that need to be updated release to release -->
[github-next-release-work]: https://github.com/microsoft/PowerToys/issues?q=is%3Aissue+milestone%3A%22PowerToys+0.97%22
[github-current-release-work]: https://github.com/microsoft/PowerToys/issues?q=is%3Aissue+milestone%3A%22PowerToys+0.96%22
[ptUserX64]: https://github.com/microsoft/PowerToys/releases/download/v0.97.0/PowerToysUserSetup-0.97.0-x64.exe
[ptUserArm64]: https://github.com/microsoft/PowerToys/releases/download/v0.97.0/PowerToysUserSetup-0.97.0-arm64.exe
[ptMachineX64]: https://github.com/microsoft/PowerToys/releases/download/v0.97.0/PowerToysSetup-0.97.0-x64.exe
[ptMachineArm64]: https://github.com/microsoft/PowerToys/releases/download/v0.97.0/PowerToysSetup-0.97.0-arm64.exe
[ptUserX64]: https://github.com/microsoft/PowerToys/releases/download/v0.96.1/PowerToysUserSetup-0.96.1-x64.exe
[ptUserArm64]: https://github.com/microsoft/PowerToys/releases/download/v0.96.1/PowerToysUserSetup-0.96.1-arm64.exe
[ptMachineX64]: https://github.com/microsoft/PowerToys/releases/download/v0.96.1/PowerToysSetup-0.96.1-x64.exe
[ptMachineArm64]: https://github.com/microsoft/PowerToys/releases/download/v0.96.1/PowerToysSetup-0.96.1-arm64.exe
| Description | Filename |
|----------------|----------|
| Per user - x64 | [PowerToysUserSetup-0.97.0-x64.exe][ptUserX64] |
| Per user - ARM64 | [PowerToysUserSetup-0.97.0-arm64.exe][ptUserArm64] |
| Machine wide - x64 | [PowerToysSetup-0.97.0-x64.exe][ptMachineX64] |
| Machine wide - ARM64 | [PowerToysSetup-0.97.0-arm64.exe][ptMachineArm64] |
| Per user - x64 | [PowerToysUserSetup-0.96.1-x64.exe][ptUserX64] |
| Per user - ARM64 | [PowerToysUserSetup-0.96.1-arm64.exe][ptUserArm64] |
| Machine wide - x64 | [PowerToysSetup-0.96.1-x64.exe][ptMachineX64] |
| Machine wide - ARM64 | [PowerToysSetup-0.96.1-arm64.exe][ptMachineArm64] |
</details>
@@ -83,7 +83,7 @@ You can easily install PowerToys from the Microsoft Store:
<details>
<summary><strong>WinGet</strong></summary>
<br/>
Download PowerToys from <a href="https://github.com/microsoft/winget-cli#installing-the-client">WinGet</a>. Updating PowerToys via winget will respect the current PowerToys installation scope. To install PowerToys, run the following command from the command line / PowerShell:
Download PowerToys from [WinGet][winget-link]. Updating PowerToys via winget will respect the current PowerToys installation scope. To install PowerToys, run the following command from the command line / PowerShell:
*User scope installer [default]*
```powershell
@@ -99,197 +99,138 @@ winget install --scope machine Microsoft.PowerToys -s winget
<details>
<summary><strong>Other methods</strong></summary>
<br/>
There are <a href="https://learn.microsoft.com/windows/powertoys/install#community-driven-install-tools">community driven install methods</a> such as Chocolatey and Scoop. If these are your preferred install solutions, you can find the install instructions there.
There are [community driven install methods](./doc/unofficialInstallMethods.md) such as Chocolatey and Scoop. If these are your preferred install solutions, you can find the install instructions there.
</details>
## ✨ What's new
**Version 0.97 (January 2026)**
**Version 0.96 (November 2025)**
For an in-depth look at the latest changes, visit the [Windows Command Line blog](https://aka.ms/powertoys-releaseblog).
**✨ Highlights**
- **Command Palette**: Major expansion with PowerToys extension (Windows 11 only), Remote Desktop built-in extension, theme customization, drag-and-drop support, fallback ranking controls, sections/separators for pages, pinyin Chinese matching, and many UX refinements.
- **Settings**: Quick Access flyout is now a standalone process for significantly faster startup, theme-adaptive tray icon, AOT serialization, and multiple UI/accessibility fixes
- **CursorWrap (New!)**: New mouse utility that lets your cursor wrap around screen edges, making multi-monitor navigation faster and more seamless.
- **Advanced Paste**: Image input for AI, color detection in clipboard history, Foundry Local improvements, Azure AI icons, and multiple bug fixes
- **CLI Support Expanded**: FancyZones, Image Resizer, and File Locksmith can now be controlled from the command line for layout management, batch image resizing, and file lock inspection.
- **LightSwitch**: Added support for automatically following Windows Night Light mode.
- **Release Experience & Quality**: Refreshed "What's new" dialog, plus many performance improvements, stability fixes, and refinements across PowerToys.
- Advanced Paste now supports multiple online and on-device AI model providers: Azure OpenAI, OpenAI, Google Gemini, Mistral, Foundry Local and Ollama.
- Command Palette received extensive improvements including file search filters, better clipboard history metadata, context-menu styling, and dozens of bug fixes and enhancements.
- PowerRename can now extract and use photo metadata (EXIF, XMP) in renaming patterns like `%Camera`, `%Lens`, and `%ExposureTime`.
## Advanced Paste
### Advanced Paste
- Advanced Paste now lets you connect to multiple AI providers instead of being limited to a single OpenAI provider. See [Advanced Paste documentation](https://learn.microsoft.com/windows/powertoys/advanced-paste) for usage.
- Added hex color previews in clipboard history. Thanks [@crramirez](https://github.com/crramirez)!
- Added automatic placeholder endpoints when required fields are left empty.
- Fixed a grammar issue in the AI settings description. Thanks [@erik-anderson](https://github.com/erik-anderson)!
- Fixed loading order so custom action hotkeys are read correctly.
- Updated Advanced Paste descriptions to reflect support for online and local models.
- Fixed clipboard history item selection so it doesn't duplicate entries.
- Prevented placeholder endpoints from being saved for providers that don't need them.
- Added image input support for AI transforms and improved clipboard change tracking.
### Awake
- The Awake countdown timer now stays accurate over long periods. Thanks [@daverayment](https://github.com/daverayment)!
- Fixed Awake context menu positioning. The fix removed the conversion of the mouse cursor from screen to client-window coordinates, instead using the raw screen coordinates returned by GetCursorPos; the context menu now appears at the correct screen position. Thanks [@lzandman](https://github.com/lzandman)!
## Awake
### Command Palette
- The search field in context menus now matches the look of the Command Palette, with a smoke backdrop and improved padding.
- Fallback items such as math calculations or the Run command now appear in results more quickly. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Ensured the command bar updates correctly after navigating to another page and commands are displayed correctly. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- The Command Palette settings page has been reorganized. Activation-key options are grouped under an expander and extension settings are framed for improved readability.
- When you modify a command, its alias, hotkey, and tags now update in the top-level list, keeping the displayed information in sync. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Press `Ctrl + ,` to open Command Palette settings from anywhere. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- You can use `Page Up` and `Page Down` to navigate the list while focus is in the search box. Thanks [@samrueby](https://github.com/samrueby)!
- Fixed an issue where the search box could disappear when navigating pages. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Ensured search text is selected when *Go home when activated* and *Highlight search on activate* are both enabled. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Fixed an issue where Command Palette window occasionally appeared on the taskbar under certain Windows settings. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Ensured that labels and icons of list items and menu items update when they change. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Fixed visibility of list filters when navigating to a content page. Thanks [@DevLGuilherme](https://github.com/DevLGuilherme)!
- Added search to the extension list and a link to extensions on the Microsoft Store. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added options to open the Command Palette window at its last position or re-center it.
- The Command Palette now remembers its window size after restarting.
- Added a global error handler that logs fatal errors and provides feedback when unexpected failures force Command Palette to close. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Fixed forms and extension settings not showing on some machines due to a missing VC++ runtime.
- Restored ranking of fallback commands for built-in extensions (Sleep, Shutdown, Windows settings, Web search, etc.). Thanks [@jiripolasek](https://github.com/jiripolasek).
- Improved and unified labels and texts across the application!
- Maintenance: Resolved numerous build warnings in Command Palette projects; no user-visible impact. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Maintenance: Fixed a logging issue so exception messages are properly recorded instead of placeholder text, improving troubleshooting. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Fixed Awake CLI so help, errors, and logs appear correctly in the console. Thanks [@daverayment](https://github.com/daverayment)!
### Command Palette Extensions
- Bookmarks: Added hints about bookmark placeholders to the Add/Edit Bookmark form. — Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Bookmarks: Improved migration of bookmarks from older versions and fixed an issue where aliases or keyboard shortcuts could be lost after restart. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Clipboard history: Items shown in Command Palette's clipboard history now include helpful metadata. For example, image items show dimensions, text files show names and sizes, web links include page titles, and text entries display word counts. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- File search: Added filter buttons to show *all items*, *files only*, or *folders only*. Selecting a filter adds `kind:folders` or `kind:not folders` to narrow results.
- System commands: Replaced the `:red_circle:` placeholder with an actual red-circle emoji so the correct icon appears in the UI. Thanks [@samrueby](https://github.com/samrueby)!
- WinGet: Search performance feels more responsive because typed input is now processed via a task queue rather than complex cancellation tokens!
- Window Walker: UWP apps no longer show a "not responding" label when suspended. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Window Walker: Now displays the actual icon of each window rather than using the process icon, improving recognition of PWAs and Python GUIs. Thanks [@Lee-WonJun](https://github.com/Lee-WonJun)!
- Windows Terminal profiles: Fixed a rare crash in the Windows Terminal extension when the `LOCALAPPDATA` environment variable was missing. The path is now retrieved via a reliable API. Thanks [@jiripolasek](https://github.com/jiripolasek)!
## Command Palette
### Find My Mouse
- Activating Find My Mouse no longer makes the cursor change to the busy (hourglass) icon or steals focus from your active application.
- Fixed background image loading in BlurImageControl. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Fixed SDK packaging paths and added a CI SDK build stage.
- Aligned naming and spell-checking with .NET conventions. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added drag-and-drop support for Command Palette items. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added a PowerToys Command Palette extension to discover and launch PowerToys utilities.
- Fixed grid view bindings and layout issues. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Fixed a line-break issue in RDC extension toast messages. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Made the Settings button text localizable. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Hid the RDC fallback on the home page and fixed MSTSC working directory handling. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Optimized result list merging for better performance. Thanks [@daverayment](https://github.com/daverayment)!
- Added Small/Medium/Large detail sizes in the extensions API. Thanks [@DevLGuilherme](https://github.com/DevLGuilherme)!
- Hid fallback commands on the home page when no query is entered. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added back navigation support in the Settings window. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added a Command Palette solution filter. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Updated Extension SDK documentation links to Microsoft Learn. Thanks [@RubenFricke](https://github.com/RubenFricke)!
- Added a custom search engine URL setting for Web Search. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added pinyin matching for Chinese input. Thanks [@frg2089](https://github.com/frg2089)!
- Bumped Command Palette version to 0.8.
- Removed subtitles from built-in top-level commands. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Refined separator styling in the details pane. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added a built-in Remote Desktop extension.
- Added a Peek command to the Indexer extension.
- Improved default browser detection using the Windows Shell API. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added Escape key behavior options. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added theme and background customization options. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Improved WinGet package app matching. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added an auto-return-home delay setting. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added fallback ranking and global results settings.
- Removed the selection indicator in the context menu list. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added a developer ribbon with build and log info. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Updated the “Learn more” string for Command Palette. Thanks [@pratnala](https://github.com/pratnala)!
- Added arrow-key navigation for grid views. Thanks [@samrueby](https://github.com/samrueby)!
- Fixed version display when running unpackaged. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added a native debugging launch profile. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Reduced redundant property change notifications in the SDK. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Improved section readability and accessibility. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Made gallery spacing uniform. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Added sections and separators for list and grid pages. Thanks [@DevLGuilherme](https://github.com/DevLGuilherme)!
### Hosts File Editor
- Added customizable backup settings allowing users to configure backup frequency, location, and auto-deletion policies. Thanks [@davidegiacometti](https://github.com/davidegiacometti)!
## Crop & Lock
### Image Resizer
- Fixed settings consistency during batch resize operations by capturing settings once before processing. Thanks [@daverayment](https://github.com/daverayment)!
- Added a screenshot mode that freezes a cropped region into its own window. Thanks [@fm-sys](https://github.com/fm-sys)!
### Light Switch
- Introduced new UI to allow users to manually enter their latitude and longitude in Sunrise to Sunset mode.
- Refactored service with cleaner state management for stability.
- Removed per-tick logging, now logging only key events to greatly reduce log size.
## Cursor Wrap
### Mouse Pointer Crosshairs
- Enabled switching between Mouse Pointer Crosshairs and Gliding Cursor modes. Thanks [@mikehall-ms](https://github.com/mikehall-ms)!
- Improved Cursor Wrap behavior on multi-monitor setups by wrapping only at outer edges. Thanks [@mikehall-ms](https://github.com/mikehall-ms)!
### Mouse Without Borders
- Added horizontal scrolling support. Thanks [@MasonBergstrom](https://github.com/MasonBergstrom)!
## FancyZones
### Peek
- Fixed media files remaining locked after preview window closes. Thanks [@daverayment](https://github.com/daverayment)!
- Added a command-line interface for file previewing. See the [Peek documentation](https://learn.microsoft.com/windows/powertoys/peek) for usage. Thanks [@prochan2](https://github.com/prochan2)!
- Fixed editor overlay positioning on mixed-DPI multi-monitor setups. Thanks [@Memphizzz](https://github.com/Memphizzz)!
- Added a FancyZones CLI for command-line layout management.
### PowerRename
- PowerRename no longer crashes due to a missing resources file.
- Added photo metadata extraction support using EXIF and XMP for pattern-based renaming with camera info, GPS coordinates, and date taken. See [PowerRename Documentation](https://learn.microsoft.com/en-us/windows/powertoys/powerrename).
## File Locksmith
### PowerToys Run
- Added retry logic with exponential backoff to handle DWM composition errors during theme changes. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Updated OneNote icons to reflect new Microsoft 365 design. Thanks [@trevorNgo](https://github.com/trevorNgo)!
- Added a File Locksmith CLI for querying, waiting on, or killing file locks.
### Quick Accent
- Added diameter symbol (⌀) for Shift+O in Special Characters mode, thanks to [@anselumjuju](https://github.com/anselumjuju)!
## Find My Mouse
### ZoomIt
- Smoothed out zoom-animation in ZoomIt by coalescing mouse-move and timer events, thanks to [@foxmsft](https://github.com/foxmsft)!
- Enabled GIF support for ZoomIt, thanks to [@MarioHewardt](https://github.com/MarioHewardt)!
- Fixed spelling mistakes, and refactored some literal strings to string constants, thanks to [@lzandman](https://github.com/lzandman)!
- Fixed inaccurate "actual size" screenshots in ZoomIt and resolved a GDI handle leak, improving capture fidelity and long-session stability. Thanks to [@daverayment](https://github.com/daverayment)!
- Improved spotlight edge rendering for clearer Find My Mouse visuals.
- Added telemetry to track how Find My Mouse is triggered.
### Settings
- Fixed title bar overlapping issue at smaller window sizes.
- Refined shortcut control visual design with improved consistency and spacing.
- Added dashboard utilities sorting by name or status.
- Made update notification InfoBar in flyout clickable for direct navigation to update page.
- Expanded installation instructions by default in README.
- Improved accessibility for shortcut conflict button with static resource-based automation properties.
- Added ScrollViewer to Command Palette page in PowerToys Settings. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Fixed module list glitches and Sort Status checkmark issue. Thanks [@daverayment](https://github.com/daverayment)!
## Image Resizer
- Fixed Fill mode cropping when Shrink Only is enabled. Thanks [@daverayment](https://github.com/daverayment)!
- Added a dedicated Image Resizer CLI for scripted resizing.
## Light Switch
- Added telemetry events for Light Switch usage and settings changes.
- Added a Follow Night Light mode to sync theme changes with Night Light.
- Clarified LightSwitchService and LightSwitchStateManager roles in docs.
- Added a Quick Access dashboard button to toggle Light Switch quickly.
- Ensured Light Switch honors GPO policy states with clear status messaging.
## Mouse Without Borders
- Continued refactoring Mouse Without Borders by splitting the large Common class into focused components. Thanks [@mikeclayton](https://github.com/mikeclayton)!
- Completed the Common class refactor with Core and IPC helper extraction. Thanks [@mikeclayton](https://github.com/mikeclayton)!
## Peek
- Hardened Peek previews with strict resource filtering and safer external link warnings.
- Improved SVG preview compatibility by rendering via WebView2.
## PowerRename
- Added HEIF/AVIF EXIF metadata extraction and extension status guidance for related previews.
- Fixed undefined behavior in file time handling. Thanks [@safocl](https://github.com/safocl)!
- Optimized memory allocation for depth-based rename processing.
- Fixed Unicode normalization and nonbreaking space matching. Thanks [@daverayment](https://github.com/daverayment)!
- Fixed date token replacements followed by capital letters. Thanks [@daverayment](https://github.com/daverayment)!
## PowerToys Run Plugins
- Fixed a plugin name typo and added Project Launcher to the thirdparty list. Thanks [@artickc](https://github.com/artickc)!
- Added the Open With Antigravity plugin to the thirdparty list. Thanks [@artickc](https://github.com/artickc)!
## PowerToys Run
- Avoided unnecessary hotkey conflict checks when settings change.
- Added QuickAI to the third-party PowerToys Run plugin list. Thanks [@ruslanlap](https://github.com/ruslanlap)!
## Quick Accent
- Added localized quotation marks to Quick Accent. Thanks [@warquys](https://github.com/warquys)!
- Fixed duplicate and redundant characters in Quick Accent sets. Thanks [@noraa-junker](https://github.com/noraa-junker)!
- Fixed DPI positioning issues for Quick Accent on mixed-DPI setups. Thanks [@noraa-junker](https://github.com/noraa-junker)!
## Settings
- Added a new tray icon that adapts to theme changes. Thanks [@HO-COOH](https://github.com/HO-COOH)!
- Centralized module enable/disable logic for cleaner Settings UI updates.
- Simplified Settings utilities by removing ISettingsUtils/ISettingsPath interfaces. Thanks [@noraa-junker](https://github.com/noraa-junker)!
- Improved Settings UI consistency and disabled-state visuals.
- Added semantic headings to the Dashboard for better accessibility.
- Introduced Quick Access as a standalone host with updated Settings integration.
- Fixed Dashboard toggle flicker and sort menu checkmarks. Thanks [@daverayment](https://github.com/daverayment)!
- Added Native AOT-compatible settings serialization.
- Standardized mouse tool description text. Thanks [@daverayment](https://github.com/daverayment)!
- Added a global SettingsUtils singleton to reduce repeated initialization.
## Development
- Fixed broken devdocs links to the coding style guide. Thanks [@RubenFricke](https://github.com/RubenFricke)!
- Migrated main and installer solutions to .slnx for improved build tooling.
- Restored local installer builds after the WiX v5 upgrade with signing and versioning fixes.
- Added incremental review tooling and structured AI prompts for PR/issue reviews.
- Documented bot commands and cleaned up devdocs structure. Thanks [@noraa-junker](https://github.com/noraa-junker)!
- Updated WinAppSDK pipeline defaults to 1.8 and fixed restore handling.
- Updated the COMMUNITY list to reflect current roles.
- Maintained community member ordering and added a new entry.
- Re-enabled centralized PackageReference for native projects with VS auto-restore.
- Disabled MSBuild caching by default in CI to avoid build instability.
- Updated the latest WinAppSDK daily pipeline for split-dependency restores.
- Suppressed experimental build warnings and aligned WrapPanel stretch handling.
- Reordered the spell-check expect list for consistent automation.
- Migrated native projects to centralized PackageReference management.
- Cleaned spell-check dictionary entries and capitalization.
- Synced commit/PR prompts and wired VS Code to repo prompt files.
- Added VS Code build tasks and improved build script path handling.
- Updated Windows App SDK package versions in central package management.
- Migrated cmdpal extension native project to PackageReference and fixed outputs.
- Reverted PackageReference changes back to packages.config where needed.
- Bypassed a release version check for a failing DLL to keep pipelines green.
- Consolidated Copilot instructions and fixed prompt frontmatter.
- Added signing entries for new Quick Access binaries and CLI version metadata.
- Fixed install scope detection to avoid mixed per-user/per-machine installs.
- Added a Module Loader tool to quickly test PowerToys modules without full builds. Thanks [@mikehall-ms](https://github.com/mikehall-ms)!
- Added update telemetry to understand auto-update checks and downloads.
- Updated the telemetry package for new compliance requirements. Thanks [@carlos-zamora](https://github.com/carlos-zamora)!
- Documented missing telemetry events in DATA_AND_PRIVACY.
- Fixed UI test pipeline restores for .slnx solutions.
- Added UI automation coverage for Advanced Paste clipboard history flows.
- Stabilized FancyZones UI tests with more reliable selectors and screen recordings.
### Development
- Fixed accessibility by associating controls with labels for screen readers.
- Added accessible name to Shortcut Conflicts button for screen readers.
- Excluded TitleBars from tab navigation across multiple utilities. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Migrated build infrastructure from Windows Server 2019 to Server 2022 with improved failure logging and predictable NuGet package paths.
- Configured build agents to use larger P: drive for release builds to address disk space constraints.
- Enhanced DSC v3 support by organizing resource manifests in a dedicated subfolder with PATH configuration.
- Reduced installer bundle size by 6-7MB through centralized Hybrid CRT configuration across all C++ projects.
- Updated .NET packages to version 9.0.10 for security fixes. Thanks [@snickler](https://github.com/snickler)!
- Fixed spell check dictionary entries for consistency.
- Restored accidentally deleted NuGet configuration file for Command Palette extensions.
- Fixed package identity build by updating AppxManifest entry points to use PowerShell Core.
- Optimized CI pipeline by replacing file copy operations with hard links and moves, reducing build time and disk usage by 10-15GB.
- Updated Copilot guidance and PR prompt workflow.
- Included high-volume bugs in issue template header. Thanks [@daverayment](https://github.com/daverayment)!
- Fixed incorrect HRESULT logging for inner exceptions. Thanks [@jiripolasek](https://github.com/jiripolasek)!
- Introduced shared sparse package identity for PowerToys Win32 components to enable access to Windows platform APIs.
- Consolidated installer builds to produce both machine and user installers simultaneously, reducing build time and complexity.
- Migrated exclusively to WiX v5 installer infrastructure, removing legacy WiX v3 support.
- Temporarily removed PowerToys installer path from PATH environment variable to prevent application crashes.
- Added complete OCR UI test coverage with automated tests for activation, settings, language selection, and text extraction.
- Fixed test input for drive path normalization in bookmark resolver unit tests.
- Fixed Peek UI tests by restoring Ctrl+Space activation shortcut for test scenarios.
- Hid apps in the PowerToys.SpareApps package from the Start Menu. Thanks [@jiripolasek](https://github.com/jiripolasek)!
## 🛣️ Roadmap
We are planning some nice new features and improvements for the next releases: PowerDisplay, Command Palette improvements, and a brand-new Shortcut Guide experience! Stay tuned for [v0.97][github-next-release-work]!
We are planning some nice new features and improvements for the next releases: a revamped Keyboard Manager UI, custom endpoint and local model support for Advanced Paste, Command Palette improvements, and a brand-new Shortcut Guide experience! Stay tuned for [v0.96][github-next-release-work]!
## ❤️ PowerToys Community
The PowerToys team is extremely grateful to have the [support of an amazing active community][community-link]. The work you do is incredibly important. PowerToys wouldn't be nearly what it is today without your help filing bugs, updating documentation, guiding the design, or writing features. We want to say thank you and take time to recognize your work. Your contributions and feedback improve PowerToys month after month!

View File

@@ -1,93 +0,0 @@
# CLI Conventions
This document describes the conventions for implementing command-line interfaces (CLI) in PowerToys modules.
## Library
Use the **System.CommandLine** library for CLI argument parsing. This is already defined in `Directory.Packages.props`:
```xml
<PackageReference Include="System.CommandLine" Version="2.0.0-beta4.22272.1" />
```
Add the reference to your project:
```xml
<PackageReference Include="System.CommandLine" />
```
## Option Naming and Definition
- Use `--kebab-case` for long form (e.g., `--shrink-only`).
- Use single `-x` for short form (e.g., `-s`, `-w`).
- Define aliases as static readonly arrays: `["--silent", "-s"]`.
- Create options using `Option<T>` with descriptive help text.
- Add validators for options that require range or format checking.
## RootCommand Setup
- Create a `RootCommand` with a brief description.
- Add all options and arguments to the command.
## Parsing
- Use `Parser(rootCommand).Parse(args)` to parse CLI arguments.
- Extract option values using `parseResult.GetValueForOption()`.
- Note: Use `Parser` directly; `RootCommand.Parse()` may not be available with the pinned System.CommandLine version.
### Parse/Validation Errors
- On parse/validation errors, print error messages and usage, then exit with non-zero code.
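Putting the option, root command, and parsing conventions above together, a minimal sketch might look like the following. The option names (`--width`, `--silent`), help text, and validation rule are illustrative only, and the validator API should be checked against the pinned System.CommandLine beta.
```csharp
using System;
using System.CommandLine;
using System.CommandLine.Parsing;

internal static class Program
{
    // Aliases as static readonly arrays, per the convention above (names are illustrative).
    private static readonly string[] WidthAliases = ["--width", "-w"];
    private static readonly string[] SilentAliases = ["--silent", "-s"];

    private static int Main(string[] args)
    {
        var widthOption = new Option<int>(WidthAliases, "Target width in pixels.");
        widthOption.AddValidator(result =>
        {
            // Range check: reject non-positive widths with a clear error message.
            if (result.GetValueOrDefault<int>() <= 0)
            {
                result.ErrorMessage = "--width must be a positive integer.";
            }
        });

        var silentOption = new Option<bool>(SilentAliases, "Suppress console output.");

        var rootCommand = new RootCommand("Example PowerToys-style module CLI")
        {
            widthOption,
            silentOption,
        };

        // Parse with Parser directly; RootCommand.Parse() may not exist in the pinned version.
        ParseResult parseResult = new Parser(rootCommand).Parse(args);

        if (parseResult.Errors.Count > 0)
        {
            foreach (var error in parseResult.Errors)
            {
                Console.Error.WriteLine(error.Message);
            }

            return 1; // non-zero exit on parse/validation errors
        }

        int width = parseResult.GetValueForOption(widthOption);
        bool silent = parseResult.GetValueForOption(silentOption);

        if (!silent)
        {
            Console.WriteLine($"Resizing to width {width}...");
        }

        return 0;
    }
}
```
The exit codes used here follow the Error Handling section below.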
## Examples
Reference implementations:
- Awake: `src/modules/Awake/Awake/Program.cs`
- ImageResizer: `src/modules/imageresizer/ui/Cli/`
## Help Output
- Provide a `PrintUsage()` method for custom help formatting if needed.
## Best Practices
1. **Consistency**: Follow existing module patterns.
2. **Documentation**: Always provide help text for each option.
3. **Validation**: Validate input and provide clear error messages.
4. **Atomicity**: Make one logical change per PR; avoid drive-by refactors.
5. **Build/Test Discipline**: Build and test synchronously, one terminal per operation.
6. **Style**: Follow repo analyzers (`.editorconfig`, StyleCop) and formatting rules.
## Logging Requirements
- Use `ManagedCommon.Logger` for consistent logging.
- Initialize logging early in `Main()`.
- Use dual output (console + log file) for errors and warnings to ensure visibility.
- Reference: `src/modules/imageresizer/ui/Cli/CliLogger.cs`
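As a rough illustration of the dual-output requirement, a helper along these lines can wrap the console and `ManagedCommon.Logger` calls. The class name and log sub-path are hypothetical; check `CliLogger.cs` for the actual pattern and the exact `Logger` signatures.
```csharp
using System;
using ManagedCommon;

// Hypothetical dual-output helper; not the actual CliLogger implementation.
internal static class DualOutputLogger
{
    public static void Initialize() =>
        Logger.InitializeLogger("\\ExampleModule\\Logs"); // assumed log sub-path

    public static void Error(string message)
    {
        Console.Error.WriteLine(message); // visible to the user on stderr
        Logger.LogError(message);         // preserved in the log file
    }

    public static void Warning(string message)
    {
        Console.Error.WriteLine(message);
        Logger.LogWarning(message);
    }
}
```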
## Error Handling
### Exit Codes
- `0`: Success
- `1`: General error (parsing, validation, runtime)
- `2`: Invalid arguments (optional)
### Exception Handling
- Always wrap `Main()` in try-catch for unhandled exceptions.
- Log exceptions before exiting with non-zero code.
- Display user-friendly error messages to stderr.
- Preserve detailed stack traces in log files only.
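A minimal sketch of the exit-code and exception-handling conventions above, assuming a hypothetical `RunCli` entry point and log sub-path:
```csharp
using System;
using ManagedCommon;

internal static class Program
{
    private const int ExitSuccess = 0;
    private const int ExitGeneralError = 1;
    private const int ExitInvalidArguments = 2; // optional

    private static int Main(string[] args)
    {
        Logger.InitializeLogger("\\ExampleModule\\Logs"); // assumed log sub-path

        try
        {
            return RunCli(args); // hypothetical entry point returning an exit code
        }
        catch (ArgumentException ex)
        {
            Console.Error.WriteLine($"Invalid arguments: {ex.Message}");
            Logger.LogError(ex.ToString()); // full stack trace goes to the log file only
            return ExitInvalidArguments;
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine("An unexpected error occurred. See the log file for details.");
            Logger.LogError(ex.ToString());
            return ExitGeneralError;
        }
    }

    private static int RunCli(string[] args)
    {
        // Argument parsing and the module's actual work would go here.
        return ExitSuccess;
    }
}
```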
## Testing Requirements
- Include tests for argument parsing, validation, and edge cases.
- Place CLI tests in module-specific test projects (e.g., `src/modules/[module]/tests/*CliTests.cs`).
## Signing and Deployment
- CLI executables are signed automatically in CI/CD.
- **New CLI tools**: Add your executable and dll to `.pipelines/ESRPSigning_core.json` in the signing list.
- CLI executables are deployed alongside their parent module (e.g., `C:\Program Files\PowerToys\modules\[ModuleName]\`).
- Use self-contained deployment (import `Common.SelfContained.props`).

View File

@@ -96,40 +96,3 @@ The Shell Process Debugging Tool is a Visual Studio extension that helps debug m
- Logs are stored in the local app directory: `%LOCALAPPDATA%\Microsoft\PowerToys`
- Check Event Viewer for application crashes related to `PowerToys.Settings.exe`
- Crash dumps can be obtained from Event Viewer
## Troubleshooting Build Errors
### Missing Image Files or Corrupted Build State
If you encounter build errors about missing image files (e.g., `.png`, `.ico`, or other assets), this typically indicates a corrupted build state. To resolve:
1. **Clean the solution in Visual Studio**: Build > Clean Solution
Or from the command line (Developer Command Prompt for VS 2022):
```pwsh
msbuild PowerToys.slnx /t:Clean /p:Platform=x64 /p:Configuration=Debug
```
2. **Delete build output and package folders** from the repository root:
- `x64/`
- `ARM64/`
- `Debug/`
- `Release/`
- `packages/`
3. **Rebuild the solution**
#### Helper Script
A PowerShell script is available to automate this cleanup:
```pwsh
.\tools\build\clean-artifacts.ps1
```
This script will run MSBuild Clean and remove the build folders listed above. Use `-SkipMSBuildClean` if you only want to delete the folders without running MSBuild Clean.
After cleaning, rebuild with:
```pwsh
msbuild -restore -p:RestorePackagesConfig=true -p:Platform=x64 -m PowerToys.slnx
```

View File

@@ -58,8 +58,8 @@ string validUIDisplayString = Resources.ValidUIDisplayString;
## More On Coding Guidance
Please review these brief docs below relating to our coding standards, etc.
* [Coding Style](development/style.md)
* [Code Organization](readme.md)
* [Coding Style](./style.md)
* [Code Organization](./readme.md)
[VS Resource Editor]: https://learn.microsoft.com/cpp/windows/resource-editors?view=vs-2019

View File

@@ -20,9 +20,6 @@ Creates a window showing the selected area of the original window. Changes in th
### Reparent Mode
Creates a window that replaces the original window, showing only the selected area. The application is controlled through the cropped window.
### Screenshot Mode
Creates a window showing a frozen snapshot of the original window.
## Code Structure
### Project Layout
@@ -33,7 +30,6 @@ The Crop and Lock module is part of the PowerToys solution. All the logic-relate
- **OverlayWindow.cpp**: Thumbnail module type's window concrete implementation.
- **ReparentCropAndLockWindow.cpp**: Defines the UI for the reparent mode.
- **ChildWindow.cpp**: Reparent module type's window concrete implementation.
- **ScreenshotCropAndLockWindow.cpp**: Defines the UI for the screenshot mode.
## Known Issues

View File

@@ -57,7 +57,7 @@ Once you've discussed your proposed feature/fix/etc. with a team member, and an
## Rules
- **Follow the pattern of what you already see in the code.**
- [Coding style](development/style.md).
- [Coding style](style.md).
- Try to package new functionality/components into libraries that have nicely defined interfaces.
- Package new functionality into classes or refactor existing functionality into a class as you extend the code.
- When adding new classes/methods/changing existing code, add new unit tests or update the existing tests.
@@ -83,40 +83,14 @@ Once you've discussed your proposed feature/fix/etc. with a team member, and an
1. A local clone of the PowerToys repository
1. Enable long paths in Windows (see [Enable Long Paths](https://docs.microsoft.com/windows/win32/fileio/maximum-file-path-limitation#enabling-long-paths-in-windows-10-version-1607-and-later) for details)
### Automated Setup (Recommended)
Run the setup script to automatically configure your development environment:
```powershell
.\tools\build\setup-dev-environment.ps1
```
This script will:
- Enable Windows long path support (requires administrator privileges)
- Enable Windows Developer Mode (requires administrator privileges)
- Guide you through installing required Visual Studio components from `.vsconfig`
- Initialize git submodules
Run with `-Help` to see all available options:
```powershell
.\tools\build\setup-dev-environment.ps1 -Help
```
### Manual Setup
If you prefer to set up manually, follow these steps:
#### Install Visual Studio dependencies
### Install Visual Studio dependencies
1. Open the `PowerToys.slnx` file.
1. If you see a dialog that says `install extra components` in the solution explorer pane, click `install`
Alternatively, import the `.vsconfig` file from the repository root using Visual Studio Installer to install all required workloads.
### Get Submodules to compile
#### Get Submodules to compile
We have submodules that need to be initialized before you can compile most parts of PowerToys. This should be a one-time step.
1. Open a terminal
1. Navigate to the folder you cloned PowerToys to.
@@ -124,32 +98,12 @@ We have submodules that need to be initialized before you can compile most parts
### Compiling Source Code
#### Using Visual Studio
- Open `PowerToys.slnx` in Visual Studio.
- In the `Solutions Configuration` drop-down menu select `Release` or `Debug`.
- From the `Build` menu choose `Build Solution`, or press <kbd>Control</kbd>+<kbd>Shift</kbd>+<kbd>b</kbd> on your keyboard.
- The build process may take several minutes depending on your computer's performance. Once it completes, the PowerToys binaries will be in your repo under `x64\Release\`.
- You can run `x64\Release\PowerToys.exe` directly without installing PowerToys, but some modules (e.g. PowerRename, ImageResizer, the File Explorer extension) will not be available unless you also build the installer and install PowerToys.
#### Using Command Line
You can also build from the command line using the provided scripts in `tools\build\`:
```powershell
# Build the full solution (auto-detects platform)
.\tools\build\build.ps1
# Build with specific configuration
.\tools\build\build.ps1 -Platform x64 -Configuration Release
# Build only essential projects (runner + settings) for faster iteration
.\tools\build\build-essentials.ps1
# Build everything including the installer (Release only)
.\tools\build\build-installer.ps1
```
## Compile the installer
Our installer is two parts, an EXE and an MSI. The EXE (Bootstrapper) contains the MSI and handles more complex installation logic.

View File

@@ -1,5 +1,5 @@
---
last-update: 1-18-2026
last-update: 7-16-2024
---
# PowerToys Awake Changelog
@@ -12,7 +12,6 @@ The build ID moniker is made up of two components - a reference to a [Halo](http
| Build ID | Build Date |
|:-------------------------------------------------------------------|:------------------|
| [`DIDACT_01182026`](#DIDACT_01182026-january-18-2026) | January 18, 2026 |
| [`TILLSON_11272024`](#TILLSON_11272024-november-27-2024) | November 27, 2024 |
| [`PROMETHEAN_09082024`](#PROMETHEAN_09082024-september-8-2024) | September 8, 2024 |
| [`VISEGRADRELAY_08152024`](#VISEGRADRELAY_08152024-august-15-2024) | August 15, 2024 |
@@ -21,22 +20,6 @@ The build ID moniker is made up of two components - a reference to a [Halo](http
| [`LIBRARIAN_03202022`](#librarian_03202022-march-20-2022) | March 20, 2022 |
| `ARBITER_01312022` | January 31, 2022 |
### `DIDACT_01182026` (January 18, 2026)
>[!NOTE]
>See pull request: [Awake - `DIDACT_01182026`](https://github.com/microsoft/PowerToys/pull/44795)
- [#32544](https://github.com/microsoft/PowerToys/issues/32544) Fixed an issue where Awake settings became non-functional after the PC wakes from sleep. Added `WM_POWERBROADCAST` handling to detect system resume events (`PBT_APMRESUMEAUTOMATIC`, `PBT_APMRESUMESUSPEND`) and re-apply `SetThreadExecutionState` to restore the awake state.
- [#36150](https://github.com/microsoft/PowerToys/issues/36150) Fixed an issue where Awake would not prevent sleep when AC power is connected. Added `PBT_APMPOWERSTATUSCHANGE` handling to re-apply `SetThreadExecutionState` when the power source changes (AC/battery transitions).
- Fixed an issue where toggling "Keep screen on" during an active timed session would disrupt the countdown timer. The display setting now updates directly without restarting the timer, preserving the exact remaining time.
- [#41918](https://github.com/microsoft/PowerToys/issues/41918) Fixed `WM_COMMAND` message processing flaw in `TrayHelper.WndProc` that incorrectly compared enum values against enum count. Added proper bounds checking for custom tray time entries.
- Investigated [#44134](https://github.com/microsoft/PowerToys/issues/44134) - documented that `ES_DISPLAY_REQUIRED` (used when "Keep display on" is enabled) blocks Task Scheduler idle detection, preventing scheduled maintenance tasks like SSD TRIM. Workaround: disable "Keep display on" or manually run `Optimize-Volume -DriveLetter C -ReTrim`. Additional investigation needed for potential "idle window" feature.
- [#41738](https://github.com/microsoft/PowerToys/issues/41738) Fixed `--display-on` CLI flag default from `true` to `false` to align with documentation and PowerToys settings behavior. This is a breaking change for scripts relying on the undocumented default.
- [#41674](https://github.com/microsoft/PowerToys/issues/41674) Fixed silent failure when `SetThreadExecutionState` fails. The monitor thread now handles the return value, logs an error, and reverts to passive mode with updated tray icon.
- [#38770](https://github.com/microsoft/PowerToys/issues/38770) Fixed tray icon failing to appear after Windows updates. Increased retry attempts and delays for icon Add operations (10 attempts, up to ~15.5 seconds total) while keeping existing fast retry behavior for Update/Delete operations.
- [#40501](https://github.com/microsoft/PowerToys/issues/40501) Fixed tray icon not disappearing when Awake is disabled. The `SetShellIcon` function was incorrectly requiring an icon for Delete operations, causing the `NIM_DELETE` message to never be sent.
- [#40659](https://github.com/microsoft/PowerToys/issues/40659) Fixed potential stack overflow crash in EXPIRABLE mode. Added early return after SaveSettings when correcting past expiration times, matching the pattern used by other mode handlers to prevent reentrant execution.
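As a rough sketch of the resume handling described above (not Awake's actual code; the class, method, and parameter names are illustrative, and only the Win32 constants and the `SetThreadExecutionState` P/Invoke come from the Windows API):
```csharp
using System;
using System.Runtime.InteropServices;

// Minimal sketch: re-assert the keep-awake state when Windows signals a resume
// or a power-source change, since the OS may reset it across those transitions.
internal static class PowerBroadcastHandler
{
    private const int WM_POWERBROADCAST = 0x0218;
    private const int PBT_APMRESUMEAUTOMATIC = 0x0012;
    private const int PBT_APMRESUMESUSPEND = 0x0007;
    private const int PBT_APMPOWERSTATUSCHANGE = 0x000A;

    [Flags]
    private enum ExecutionState : uint
    {
        ES_SYSTEM_REQUIRED = 0x00000001,
        ES_DISPLAY_REQUIRED = 0x00000002,
        ES_CONTINUOUS = 0x80000000,
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern ExecutionState SetThreadExecutionState(ExecutionState esFlags);

    // Call from the window procedure that receives WM_POWERBROADCAST.
    public static void HandleMessage(uint msg, IntPtr wParam, bool keepDisplayOn)
    {
        if (msg != WM_POWERBROADCAST)
        {
            return;
        }

        int powerEvent = wParam.ToInt32();
        if (powerEvent is PBT_APMRESUMEAUTOMATIC or PBT_APMRESUMESUSPEND or PBT_APMPOWERSTATUSCHANGE)
        {
            var flags = ExecutionState.ES_CONTINUOUS | ExecutionState.ES_SYSTEM_REQUIRED;
            if (keepDisplayOn)
            {
                flags |= ExecutionState.ES_DISPLAY_REQUIRED;
            }

            // Returns 0 on failure; a real implementation would log the error and
            // fall back to passive mode, as described for issue #41674.
            if (SetThreadExecutionState(flags) == 0)
            {
                Console.Error.WriteLine("SetThreadExecutionState failed after a power event.");
            }
        }
    }
}
```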
### `TILLSON_11272024` (November 27, 2024)
>[!NOTE]

View File

@@ -50,8 +50,6 @@ Contact the developers of a plugin directly for assistance with a specific plugi
| [Hotkeys](https://github.com/ruslanlap/PowerToysRun-Hotkeys) | [ruslanlap](https://github.com/ruslanlap) | Create, manage, and trigger custom keyboard shortcuts directly from PowerToys Run. |
| [RandomGen](https://github.com/ruslanlap/PowerToysRun-RandomGen) | [ruslanlap](https://github.com/ruslanlap) | 🎲 Generate random data instantly with a single keystroke. Perfect for developers, testers, designers, and anyone who needs quick access to random data. Features include secure passwords, PINs, names, business data, dates, numbers, GUIDs, color codes, and more. Especially useful for designers who need random color codes and placeholder content. |
| [Open With Cursor](https://github.com/VictorNoxx/PowerToys-Run-Cursor/) | [VictorNoxx](https://github.com/VictorNoxx) | Open Visual Studio, VS Code recents with Cursor AI |
| [Open With Antigravity](https://github.com/artickc/PowerToys-Run-Antygravity) | [artickc](https://github.com/artickc) | Open Visual Studio, VS Code recents with Antigravity AI |
| [Project Launcher Plugin](https://github.com/artickc/ProjectLauncherPowerToysPlugin) | [artickc](https://github.com/artickc) | Access your projects using Project Launcher and PowerToys Run |
| [CheatSheets](https://github.com/ruslanlap/PowerToysRun-CheatSheets) | [ruslanlap](https://github.com/ruslanlap) | 📚 Find cheat sheets and command examples instantly from tldr pages, cheat.sh, and devhints.io. Features include favorites system, categories, offline mode, and smart caching. |
| [QuickAI](https://github.com/ruslanlap/PowerToysRun-QuickAi) | [ruslanlap](https://github.com/ruslanlap) | AI-powered assistance with instant, smart responses from multiple providers (Groq, Together, Fireworks, OpenRouter, Cohere) |

View File

@@ -1549,7 +1549,7 @@ UINT __stdcall TerminateProcessesCA(MSIHANDLE hInstall)
}
processes.resize(bytes / sizeof(processes[0]));
std::array<std::wstring_view, 44> processesToTerminate = {
std::array<std::wstring_view, 42> processesToTerminate = {
L"PowerToys.PowerLauncher.exe",
L"PowerToys.Settings.exe",
L"PowerToys.AdvancedPaste.exe",
@@ -1584,14 +1584,12 @@ UINT __stdcall TerminateProcessesCA(MSIHANDLE hInstall)
L"PowerToys.MouseWithoutBordersService.exe",
L"PowerToys.CropAndLock.exe",
L"PowerToys.EnvironmentVariables.exe",
L"PowerToys.QuickAccess.exe",
L"PowerToys.WorkspacesSnapshotTool.exe",
L"PowerToys.WorkspacesLauncher.exe",
L"PowerToys.WorkspacesLauncherUI.exe",
L"PowerToys.WorkspacesEditor.exe",
L"PowerToys.WorkspacesWindowArranger.exe",
L"Microsoft.CmdPal.UI.exe",
L"Microsoft.CmdPal.Ext.PowerToys.exe",
L"PowerToys.ZoomIt.exe",
L"PowerToys.exe",
};

View File

@@ -61,16 +61,6 @@
</RegistryKey>
<File Source="$(var.RepoDir)\Notice.md" Id="Notice.md" />
</Component>
<Directory Id="SvgsFolder" Name="svgs">
<Component Id="svgs_icons" Guid="A9B7C5D3-E1F2-4A6B-8C9D-0E1F2A3B4C5D" Bitness="always64">
<RegistryKey Root="$(var.RegistryScope)" Key="Software\Classes\powertoys\components">
<RegistryValue Type="string" Name="svgs_icons" Value="" KeyPath="yes" />
</RegistryKey>
<File Id="icon.ico" Source="$(var.BinDir)svgs\icon.ico" />
<File Id="PowerToysWhite.ico" Source="$(var.BinDir)svgs\PowerToysWhite.ico" />
<File Id="PowerToysDark.ico" Source="$(var.BinDir)svgs\PowerToysDark.ico" />
</Component>
</Directory>
</DirectoryRef>
<?if $(var.PerUser) = "true" ?>
@@ -122,7 +112,6 @@
<RemoveFolder Id="RemoveBaseApplicationsAssetsFolder" Directory="BaseApplicationsAssetsFolder" On="uninstall" />
<RemoveFolder Id="RemoveWinUI3AppsInstallFolder" Directory="WinUI3AppsInstallFolder" On="uninstall" />
<RemoveFolder Id="RemoveWinUI3AppsAssetsFolder" Directory="WinUI3AppsAssetsFolder" On="uninstall" />
<RemoveFolder Id="RemoveSvgsFolder" Directory="SvgsFolder" On="uninstall" />
<RemoveFolder Id="RemoveINSTALLFOLDER" Directory="INSTALLFOLDER" On="uninstall" />
</Component>
<ComponentRef Id="powertoys_exe" />
@@ -131,7 +120,6 @@
<ComponentRef Id="powertoys_toast_clsid" />
<ComponentRef Id="License_rtf" />
<ComponentRef Id="Notice_md" />
<ComponentRef Id="svgs_icons" />
<ComponentRef Id="DesktopShortcut" />
<?if $(var.PerUser) = "true" ?>
<ComponentRef Id="powertoys_env_path_user" />

View File

@@ -66,10 +66,5 @@ namespace PowerToys.GPOWrapperProjection
{
return (GpoRuleConfigured)PowerToys.GPOWrapper.GPOWrapper.GetConfiguredWorkspacesEnabledValue();
}
public static GpoRuleConfigured GetConfiguredLightSwitchEnabledValue()
{
return (GpoRuleConfigured)PowerToys.GPOWrapper.GPOWrapper.GetConfiguredLightSwitchEnabledValue();
}
}
}

View File

@@ -36,6 +36,5 @@ namespace ManagedCommon
PowerOCR,
Workspaces,
ZoomIt,
GeneralSettings,
}
}

View File

@@ -10,7 +10,7 @@ namespace ManagedCommon
{
public static bool IsWindows10()
{
return Environment.OSVersion.Version.Major >= 10 && Environment.OSVersion.Version.Build < 22000;
return Environment.OSVersion.Version.Major >= 10 && Environment.OSVersion.Version.Minor < 22000;
}
public static bool IsWindows11()

View File

@@ -119,16 +119,6 @@ namespace PowerToysSettings
class HotkeyObject
{
public:
HotkeyObject() :
m_json(json::JsonObject())
{
m_json.SetNamedValue(L"win", json::value(false));
m_json.SetNamedValue(L"ctrl", json::value(false));
m_json.SetNamedValue(L"alt", json::value(false));
m_json.SetNamedValue(L"shift", json::value(false));
m_json.SetNamedValue(L"code", json::value(0));
m_json.SetNamedValue(L"key", json::value(L""));
}
static HotkeyObject from_json(json::JsonObject json)
{
return HotkeyObject(std::move(json));

View File

@@ -0,0 +1,16 @@
---
applyTo: "**/*.cs,**/*.cpp,**/*.c,**/*.h,**/*.hpp"
---
# Common shared libraries guidance (concise)
Scope
- Logging, IPC, settings, DPI, telemetry, utilities consumed by multiple modules.
Guidelines
- Avoid breaking public headers/APIs; if changed, search & update all callers.
- Coordinate ABI-impacting struct/class layout changes; keep binary compatibility.
- Watch perf in hot paths (hooks, timers, serialization); avoid avoidable allocations.
- Ask before adding thirdparty deps or changing serialization formats.
Acceptance
- No unintended ABI breaks, no noisy logs, new non-obvious symbols briefly commented.


@@ -223,10 +223,6 @@ namespace winrt::PowerToys::Interop::implementation
{
return CommonSharedConstants::CROP_AND_LOCK_REPARENT_EVENT;
}
hstring Constants::CropAndLockScreenshotEvent()
{
return CommonSharedConstants::CROP_AND_LOCK_SCREENSHOT_EVENT;
}
hstring Constants::ShowEnvironmentVariablesSharedEvent()
{
return CommonSharedConstants::SHOW_ENVIRONMENT_VARIABLES_EVENT;


@@ -59,7 +59,6 @@ namespace winrt::PowerToys::Interop::implementation
static hstring TerminateHostsSharedEvent();
static hstring CropAndLockThumbnailEvent();
static hstring CropAndLockReparentEvent();
static hstring CropAndLockScreenshotEvent();
static hstring ShowEnvironmentVariablesSharedEvent();
static hstring ShowEnvironmentVariablesAdminSharedEvent();
static hstring WorkspacesLaunchEditorEvent();


@@ -56,7 +56,6 @@ namespace PowerToys
static String TerminateHostsSharedEvent();
static String CropAndLockThumbnailEvent();
static String CropAndLockReparentEvent();
static String CropAndLockScreenshotEvent();
static String ShowEnvironmentVariablesSharedEvent();
static String ShowEnvironmentVariablesAdminSharedEvent();
static String WorkspacesLaunchEditorEvent();


@@ -132,7 +132,6 @@ namespace CommonSharedConstants
// Path to the events used by CropAndLock
const wchar_t CROP_AND_LOCK_REPARENT_EVENT[] = L"Local\\PowerToysCropAndLockReparentEvent-6060860a-76a1-44e8-8d0e-6355785e9c36";
const wchar_t CROP_AND_LOCK_THUMBNAIL_EVENT[] = L"Local\\PowerToysCropAndLockThumbnailEvent-1637be50-da72-46b2-9220-b32b206b2434";
const wchar_t CROP_AND_LOCK_SCREENSHOT_EVENT[] = L"Local\\PowerToysCropAndLockScreenshotEvent-ff077ab2-8360-4bd1-864a-637389d35593";
const wchar_t CROP_AND_LOCK_EXIT_EVENT[] = L"Local\\PowerToysCropAndLockExitEvent-d995d409-7b70-482b-bad6-e7c8666f375a";
// Path to the events used by EnvironmentVariables


@@ -112,7 +112,6 @@
<ClCompile Include="ChildWindow.cpp" />
<ClCompile Include="main.cpp" />
<ClCompile Include="ReparentCropAndLockWindow.cpp" />
<ClCompile Include="ScreenshotCropAndLockWindow.cpp" />
<ClCompile Include="ThumbnailCropAndLockWindow.cpp" />
<ClCompile Include="OverlayWindow.cpp" />
<ClCompile Include="pch.cpp">
@@ -127,7 +126,6 @@
<ClInclude Include="DisplaysUtil.h" />
<ClInclude Include="ModuleConstants.h" />
<ClInclude Include="ReparentCropAndLockWindow.h" />
<ClInclude Include="ScreenshotCropAndLockWindow.h" />
<ClInclude Include="ThumbnailCropAndLockWindow.h" />
<ClInclude Include="SettingsWindow.h" />
<ClInclude Include="OverlayWindow.h" />


@@ -12,7 +12,6 @@
<ClCompile Include="ReparentCropAndLockWindow.cpp" />
<ClCompile Include="ChildWindow.cpp" />
<ClCompile Include="trace.cpp" />
<ClCompile Include="ScreenshotCropAndLockWindow.cpp" />
</ItemGroup>
<ItemGroup>
<ClInclude Include="pch.h" />
@@ -29,7 +28,6 @@
<ClInclude Include="trace.h" />
<ClInclude Include="ModuleConstants.h" />
<ClInclude Include="DispatcherQueue.desktop.interop.h" />
<ClInclude Include="ScreenshotCropAndLockWindow.h" />
</ItemGroup>
<ItemGroup>
<ResourceCompile Include="CropAndLock.rc" />


@@ -1,178 +0,0 @@
#include "pch.h"
#include "ScreenshotCropAndLockWindow.h"
const std::wstring ScreenshotCropAndLockWindow::ClassName = L"CropAndLock.ScreenshotCropAndLockWindow";
std::once_flag ScreenshotCropAndLockWindowClassRegistration;
void ScreenshotCropAndLockWindow::RegisterWindowClass()
{
auto instance = winrt::check_pointer(GetModuleHandleW(nullptr));
WNDCLASSEXW wcex = {};
wcex.cbSize = sizeof(wcex);
wcex.style = CS_HREDRAW | CS_VREDRAW;
wcex.lpfnWndProc = WndProc;
wcex.hInstance = instance;
wcex.hIcon = LoadIconW(instance, IDI_APPLICATION);
wcex.hCursor = LoadCursorW(nullptr, IDC_ARROW);
wcex.hbrBackground = static_cast<HBRUSH>(GetStockObject(BLACK_BRUSH));
wcex.lpszClassName = ClassName.c_str();
wcex.hIconSm = LoadIconW(wcex.hInstance, IDI_APPLICATION);
winrt::check_bool(RegisterClassExW(&wcex));
}
ScreenshotCropAndLockWindow::ScreenshotCropAndLockWindow(std::wstring const& titleString, int width, int height)
{
auto instance = winrt::check_pointer(GetModuleHandleW(nullptr));
std::call_once(ScreenshotCropAndLockWindowClassRegistration, []() { RegisterWindowClass(); });
auto exStyle = 0;
auto style = WS_OVERLAPPEDWINDOW | WS_CLIPCHILDREN;
RECT rect = { 0, 0, width, height };
winrt::check_bool(AdjustWindowRectEx(&rect, style, false, exStyle));
auto adjustedWidth = rect.right - rect.left;
auto adjustedHeight = rect.bottom - rect.top;
winrt::check_bool(CreateWindowExW(exStyle, ClassName.c_str(), titleString.c_str(), style, CW_USEDEFAULT, CW_USEDEFAULT, adjustedWidth, adjustedHeight, nullptr, nullptr, instance, this));
WINRT_ASSERT(m_window);
}
ScreenshotCropAndLockWindow::~ScreenshotCropAndLockWindow()
{
DestroyWindow(m_window);
}
LRESULT ScreenshotCropAndLockWindow::MessageHandler(UINT const message, WPARAM const wparam, LPARAM const lparam)
{
switch (message)
{
case WM_DESTROY:
if (m_closedCallback != nullptr && !m_destroyed)
{
m_destroyed = true;
m_closedCallback(m_window);
}
break;
case WM_PAINT:
if (m_captured && m_bitmap)
{
PAINTSTRUCT ps;
HDC hdc = BeginPaint(m_window, &ps);
HDC memDC = CreateCompatibleDC(hdc);
SelectObject(memDC, m_bitmap.get());
RECT clientRect = {};
GetClientRect(m_window, &clientRect);
int clientWidth = clientRect.right - clientRect.left;
int clientHeight = clientRect.bottom - clientRect.top;
int srcWidth = m_destRect.right - m_destRect.left;
int srcHeight = m_destRect.bottom - m_destRect.top;
float srcAspect = static_cast<float>(srcWidth) / srcHeight;
float dstAspect = static_cast<float>(clientWidth) / clientHeight;
int drawWidth = clientWidth;
int drawHeight = static_cast<int>(clientWidth / srcAspect);
if (dstAspect > srcAspect)
{
drawHeight = clientHeight;
drawWidth = static_cast<int>(clientHeight * srcAspect);
}
int offsetX = (clientWidth - drawWidth) / 2;
int offsetY = (clientHeight - drawHeight) / 2;
SetStretchBltMode(hdc, HALFTONE);
StretchBlt(hdc, offsetX, offsetY, drawWidth, drawHeight, memDC, 0, 0, srcWidth, srcHeight, SRCCOPY);
DeleteDC(memDC);
EndPaint(m_window, &ps);
}
break;
default:
return base_type::MessageHandler(message, wparam, lparam);
}
return 0;
}
void ScreenshotCropAndLockWindow::CropAndLock(HWND windowToCrop, RECT cropRect)
{
if (m_captured)
{
return;
}
// Get full window bounds
RECT windowRect{};
winrt::check_hresult(DwmGetWindowAttribute(
windowToCrop,
DWMWA_EXTENDED_FRAME_BOUNDS,
&windowRect,
sizeof(windowRect)));
RECT clientRect = ClientAreaInScreenSpace(windowToCrop);
auto offsetX = clientRect.left - windowRect.left;
auto offsetY = clientRect.top - windowRect.top;
m_sourceRect = {
cropRect.left + offsetX,
cropRect.top + offsetY,
cropRect.right + offsetX,
cropRect.bottom + offsetY
};
int fullWidth = windowRect.right - windowRect.left;
int fullHeight = windowRect.bottom - windowRect.top;
HDC fullDC = CreateCompatibleDC(nullptr);
HDC screenDC = GetDC(nullptr);
HBITMAP fullBitmap = CreateCompatibleBitmap(screenDC, fullWidth, fullHeight);
HGDIOBJ oldFullBitmap = SelectObject(fullDC, fullBitmap);
// Capture full window
winrt::check_bool(PrintWindow(windowToCrop, fullDC, PW_RENDERFULLCONTENT));
// Crop
int cropWidth = m_sourceRect.right - m_sourceRect.left;
int cropHeight = m_sourceRect.bottom - m_sourceRect.top;
HDC cropDC = CreateCompatibleDC(nullptr);
HBITMAP cropBitmap = CreateCompatibleBitmap(screenDC, cropWidth, cropHeight);
HGDIOBJ oldCropBitmap = SelectObject(cropDC, cropBitmap);
ReleaseDC(nullptr, screenDC);
BitBlt(
cropDC,
0,
0,
cropWidth,
cropHeight,
fullDC,
m_sourceRect.left,
m_sourceRect.top,
SRCCOPY);
SelectObject(fullDC, oldFullBitmap);
DeleteObject(fullBitmap);
DeleteDC(fullDC);
SelectObject(cropDC, oldCropBitmap);
DeleteDC(cropDC);
m_bitmap.reset(cropBitmap);
// Resize our window
RECT dest{ 0, 0, cropWidth, cropHeight };
LONG_PTR exStyle = GetWindowLongPtrW(m_window, GWL_EXSTYLE);
LONG_PTR style = GetWindowLongPtrW(m_window, GWL_STYLE);
winrt::check_bool(AdjustWindowRectEx(&dest, static_cast<DWORD>(style), FALSE, static_cast<DWORD>(exStyle)));
winrt::check_bool(SetWindowPos(
m_window, HWND_TOPMOST, 0, 0, dest.right - dest.left, dest.bottom - dest.top, SWP_NOMOVE | SWP_SHOWWINDOW));
m_destRect = { 0, 0, cropWidth, cropHeight };
m_captured = true;
InvalidateRect(m_window, nullptr, FALSE);
}


@@ -1,27 +0,0 @@
#pragma once
#include <robmikh.common/DesktopWindow.h>
#include "CropAndLockWindow.h"
struct ScreenshotCropAndLockWindow : robmikh::common::desktop::DesktopWindow<ScreenshotCropAndLockWindow>, CropAndLockWindow
{
static const std::wstring ClassName;
ScreenshotCropAndLockWindow(std::wstring const& titleString, int width, int height);
~ScreenshotCropAndLockWindow() override;
LRESULT MessageHandler(UINT const message, WPARAM const wparam, LPARAM const lparam);
HWND Handle() override { return m_window; }
void CropAndLock(HWND windowToCrop, RECT cropRect) override;
void OnClosed(std::function<void(HWND)> callback) override { m_closedCallback = callback; }
private:
static void RegisterWindowClass();
private:
std::unique_ptr<void, decltype(&DeleteObject)> m_bitmap{ nullptr, &DeleteObject };
RECT m_destRect = {};
RECT m_sourceRect = {};
bool m_captured = false;
bool m_destroyed = false;
std::function<void(HWND)> m_closedCallback;
};


@@ -4,5 +4,4 @@ enum class CropAndLockType
{
Reparent,
Thumbnail,
Screenshot,
};


@@ -2,7 +2,6 @@
#include "SettingsWindow.h"
#include "OverlayWindow.h"
#include "CropAndLockWindow.h"
#include "ScreenshotCropAndLockWindow.h"
#include "ThumbnailCropAndLockWindow.h"
#include "ReparentCropAndLockWindow.h"
#include "ModuleConstants.h"
@@ -134,7 +133,6 @@ int WINAPI wWinMain(_In_ HINSTANCE, _In_opt_ HINSTANCE, _In_ PWSTR lpCmdLine, _I
// Handles and thread for the events sent from runner
HANDLE m_reparent_event_handle;
HANDLE m_thumbnail_event_handle;
HANDLE m_screenshot_event_handle;
HANDLE m_exit_event_handle;
std::thread m_event_triggers_thread;
@@ -183,11 +181,6 @@ int WINAPI wWinMain(_In_ HINSTANCE, _In_opt_ HINSTANCE, _In_ PWSTR lpCmdLine, _I
Logger::trace(L"Creating a thumbnail window");
Trace::CropAndLock::CreateThumbnailWindow();
break;
case CropAndLockType::Screenshot:
croppedWindow = std::make_shared<ScreenshotCropAndLockWindow>(title, 800, 600);
Logger::trace(L"Creating a screenshot window");
Trace::CropAndLock::CreateScreenshotWindow();
break;
default:
return;
}
@@ -222,9 +215,8 @@ int WINAPI wWinMain(_In_ HINSTANCE, _In_opt_ HINSTANCE, _In_ PWSTR lpCmdLine, _I
// Start a thread to listen on the events.
m_reparent_event_handle = CreateEventW(nullptr, false, false, CommonSharedConstants::CROP_AND_LOCK_REPARENT_EVENT);
m_thumbnail_event_handle = CreateEventW(nullptr, false, false, CommonSharedConstants::CROP_AND_LOCK_THUMBNAIL_EVENT);
m_screenshot_event_handle = CreateEventW(nullptr, false, false, CommonSharedConstants::CROP_AND_LOCK_SCREENSHOT_EVENT);
m_exit_event_handle = CreateEventW(nullptr, false, false, CommonSharedConstants::CROP_AND_LOCK_EXIT_EVENT);
if (!m_reparent_event_handle || !m_thumbnail_event_handle || !m_screenshot_event_handle || !m_exit_event_handle)
if (!m_reparent_event_handle || !m_thumbnail_event_handle || !m_exit_event_handle)
{
Logger::warn(L"Failed to create events. {}", get_last_error_or_default(GetLastError()));
return 1;
@@ -232,10 +224,10 @@ int WINAPI wWinMain(_In_ HINSTANCE, _In_opt_ HINSTANCE, _In_ PWSTR lpCmdLine, _I
m_event_triggers_thread = std::thread([&]() {
MSG msg;
HANDLE event_handles[4] = { m_reparent_event_handle, m_thumbnail_event_handle, m_screenshot_event_handle, m_exit_event_handle };
HANDLE event_handles[3] = { m_reparent_event_handle, m_thumbnail_event_handle, m_exit_event_handle };
while (m_running)
{
DWORD dwEvt = MsgWaitForMultipleObjects(4, event_handles, false, INFINITE, QS_ALLINPUT);
DWORD dwEvt = MsgWaitForMultipleObjects(3, event_handles, false, INFINITE, QS_ALLINPUT);
if (!m_running)
{
break;
@@ -267,25 +259,13 @@ int WINAPI wWinMain(_In_ HINSTANCE, _In_opt_ HINSTANCE, _In_ PWSTR lpCmdLine, _I
break;
}
case WAIT_OBJECT_0 + 2:
{
// Screenshot Event
bool enqueueSucceeded = controller.DispatcherQueue().TryEnqueue([&]() {
ProcessCommand(CropAndLockType::Screenshot);
});
if (!enqueueSucceeded)
{
Logger::error("Couldn't enqueue message to screenshot a window.");
}
break;
}
case WAIT_OBJECT_0 + 3:
{
// Exit Event
Logger::trace(L"Received an exit event.");
PostThreadMessage(mainThreadId, WM_QUIT, 0, 0);
break;
}
case WAIT_OBJECT_0 + 4:
case WAIT_OBJECT_0 + 3:
if (PeekMessageW(&msg, nullptr, 0, 0, PM_REMOVE))
{
TranslateMessage(&msg);
@@ -315,7 +295,6 @@ int WINAPI wWinMain(_In_ HINSTANCE, _In_opt_ HINSTANCE, _In_ PWSTR lpCmdLine, _I
SetEvent(m_reparent_event_handle);
CloseHandle(m_reparent_event_handle);
CloseHandle(m_thumbnail_event_handle);
CloseHandle(m_screenshot_event_handle);
CloseHandle(m_exit_event_handle);
m_event_triggers_thread.join();


@@ -41,15 +41,6 @@ void Trace::CropAndLock::ActivateThumbnail() noexcept
TraceLoggingKeyword(PROJECT_KEYWORD_MEASURE));
}
void Trace::CropAndLock::ActivateScreenshot() noexcept
{
TraceLoggingWriteWrapper(
g_hProvider,
"CropAndLock_ActivateScreenshot",
ProjectTelemetryPrivacyDataTag(ProjectTelemetryTag_ProductAndServicePerformance),
TraceLoggingKeyword(PROJECT_KEYWORD_MEASURE));
}
void Trace::CropAndLock::CreateReparentWindow() noexcept
{
TraceLoggingWriteWrapper(
@@ -68,17 +59,8 @@ void Trace::CropAndLock::CreateThumbnailWindow() noexcept
TraceLoggingKeyword(PROJECT_KEYWORD_MEASURE));
}
void Trace::CropAndLock::CreateScreenshotWindow() noexcept
{
TraceLoggingWriteWrapper(
g_hProvider,
"CropAndLock_CreateScreenshotWindow",
ProjectTelemetryPrivacyDataTag(ProjectTelemetryTag_ProductAndServicePerformance),
TraceLoggingKeyword(PROJECT_KEYWORD_MEASURE));
}
// Event to send settings telemetry.
void Trace::CropAndLock::SettingsTelemetry(PowertoyModuleIface::Hotkey& reparentHotkey, PowertoyModuleIface::Hotkey& thumbnailHotkey, PowertoyModuleIface::Hotkey& screenshotHotkey) noexcept
void Trace::CropAndLock::SettingsTelemetry(PowertoyModuleIface::Hotkey& reparentHotkey, PowertoyModuleIface::Hotkey& thumbnailHotkey) noexcept
{
std::wstring hotKeyStrReparent =
std::wstring(reparentHotkey.win ? L"Win + " : L"") +
@@ -94,19 +76,11 @@ void Trace::CropAndLock::SettingsTelemetry(PowertoyModuleIface::Hotkey& reparent
std::wstring(thumbnailHotkey.alt ? L"Alt + " : L"") +
std::wstring(L"VK ") + std::to_wstring(thumbnailHotkey.key);
std::wstring hotKeyStrScreenshot =
std::wstring(screenshotHotkey.win ? L"Win + " : L"") +
std::wstring(screenshotHotkey.ctrl ? L"Ctrl + " : L"") +
std::wstring(screenshotHotkey.shift ? L"Shift + " : L"") +
std::wstring(screenshotHotkey.alt ? L"Alt + " : L"") +
std::wstring(L"VK ") + std::to_wstring(screenshotHotkey.key);
TraceLoggingWriteWrapper(
g_hProvider,
"CropAndLock_Settings",
ProjectTelemetryPrivacyDataTag(ProjectTelemetryTag_ProductAndServicePerformance),
TraceLoggingKeyword(PROJECT_KEYWORD_MEASURE),
TraceLoggingWideString(hotKeyStrReparent.c_str(), "ReparentHotKey"),
TraceLoggingWideString(hotKeyStrThumbnail.c_str(), "ThumbnailHotkey"),
TraceLoggingWideString(hotKeyStrScreenshot.c_str(), "ScreenshotHotkey"));
TraceLoggingWideString(hotKeyStrThumbnail.c_str(), "ThumbnailHotkey"));
}


@@ -12,10 +12,8 @@ public:
static void Enable(bool enabled) noexcept;
static void ActivateReparent() noexcept;
static void ActivateThumbnail() noexcept;
static void ActivateScreenshot() noexcept;
static void CreateReparentWindow() noexcept;
static void CreateThumbnailWindow() noexcept;
static void CreateScreenshotWindow() noexcept;
static void SettingsTelemetry(PowertoyModuleIface::Hotkey&, PowertoyModuleIface::Hotkey&, PowertoyModuleIface::Hotkey&) noexcept;
static void SettingsTelemetry(PowertoyModuleIface::Hotkey&, PowertoyModuleIface::Hotkey&) noexcept;
};
};


@@ -29,7 +29,6 @@ namespace
const wchar_t JSON_KEY_CODE[] = L"code";
const wchar_t JSON_KEY_REPARENT_HOTKEY[] = L"reparent-hotkey";
const wchar_t JSON_KEY_THUMBNAIL_HOTKEY[] = L"thumbnail-hotkey";
const wchar_t JSON_KEY_SCREENSHOT_HOTKEY[] = L"screenshot-hotkey";
const wchar_t JSON_KEY_VALUE[] = L"value";
}
@@ -125,10 +124,6 @@ public:
SetEvent(m_thumbnail_event_handle);
Trace::CropAndLock::ActivateThumbnail();
}
if (hotkeyId == 2) { // Same order as set by get_hotkeys
SetEvent(m_screenshot_event_handle);
Trace::CropAndLock::ActivateScreenshot();
}
return true;
}
@@ -138,13 +133,12 @@ public:
virtual size_t get_hotkeys(Hotkey* hotkeys, size_t buffer_size) override
{
if (hotkeys && buffer_size >= 3)
if (hotkeys && buffer_size >= 2)
{
hotkeys[0] = m_reparent_hotkey;
hotkeys[1] = m_thumbnail_hotkey;
hotkeys[2] = m_screenshot_hotkey;
}
return 3;
return 2;
}
// Enable the powertoy
@@ -177,7 +171,7 @@ public:
virtual void send_settings_telemetry() override
{
Logger::info("Send settings telemetry");
Trace::CropAndLock::SettingsTelemetry(m_reparent_hotkey, m_thumbnail_hotkey, m_screenshot_hotkey);
Trace::CropAndLock::SettingsTelemetry(m_reparent_hotkey, m_thumbnail_hotkey);
}
CropAndLockModuleInterface()
@@ -188,7 +182,6 @@ public:
m_reparent_event_handle = CreateDefaultEvent(CommonSharedConstants::CROP_AND_LOCK_REPARENT_EVENT);
m_thumbnail_event_handle = CreateDefaultEvent(CommonSharedConstants::CROP_AND_LOCK_THUMBNAIL_EVENT);
m_screenshot_event_handle = CreateDefaultEvent(CommonSharedConstants::CROP_AND_LOCK_SCREENSHOT_EVENT);
m_exit_event_handle = CreateDefaultEvent(CommonSharedConstants::CROP_AND_LOCK_EXIT_EVENT);
init_settings();
@@ -209,7 +202,6 @@ private:
ResetEvent(m_reparent_event_handle);
ResetEvent(m_thumbnail_event_handle);
ResetEvent(m_screenshot_event_handle);
ResetEvent(m_exit_event_handle);
SHELLEXECUTEINFOW sei{ sizeof(sei) };
@@ -242,7 +234,6 @@ private:
ResetEvent(m_reparent_event_handle);
ResetEvent(m_thumbnail_event_handle);
ResetEvent(m_screenshot_event_handle);
// Log telemetry
if (traceEvent)
@@ -292,21 +283,6 @@ private:
{
Logger::error("Failed to initialize CropAndLock thumbnail shortcut from settings. Value will keep unchanged.");
}
try
{
Hotkey _temp_screenshot;
auto jsonHotkeyObject = settingsObject.GetNamedObject(JSON_KEY_PROPERTIES).GetNamedObject(JSON_KEY_SCREENSHOT_HOTKEY).GetNamedObject(JSON_KEY_VALUE);
_temp_screenshot.win = jsonHotkeyObject.GetNamedBoolean(JSON_KEY_WIN);
_temp_screenshot.alt = jsonHotkeyObject.GetNamedBoolean(JSON_KEY_ALT);
_temp_screenshot.shift = jsonHotkeyObject.GetNamedBoolean(JSON_KEY_SHIFT);
_temp_screenshot.ctrl = jsonHotkeyObject.GetNamedBoolean(JSON_KEY_CTRL);
_temp_screenshot.key = static_cast<unsigned char>(jsonHotkeyObject.GetNamedNumber(JSON_KEY_CODE));
m_screenshot_hotkey = _temp_screenshot;
}
catch (...)
{
Logger::error("Failed to initialize CropAndLock screenshot shortcut from settings. Value will keep unchanged.");
}
}
else
{
@@ -345,11 +321,9 @@ private:
// TODO: actual default hotkey setting in line with other PowerToys.
Hotkey m_reparent_hotkey = { .win = true, .ctrl = true, .shift = true, .alt = false, .key = 'R' };
Hotkey m_thumbnail_hotkey = { .win = true, .ctrl = true, .shift = true, .alt = false, .key = 'T' };
Hotkey m_screenshot_hotkey = { .win = true, .ctrl = true, .shift = true, .alt = false, .key = 'S' };
HANDLE m_reparent_event_handle;
HANDLE m_thumbnail_event_handle;
HANDLE m_screenshot_event_handle;
HANDLE m_exit_event_handle;
};


@@ -1,248 +0,0 @@
#include "pch.h"
#include "CLILogic.h"
#include <common/utils/json.h>
#include <iostream>
#include <sstream>
#include <chrono>
#include "resource.h"
#include <common/logger/logger.h>
#include <common/utils/logger_helper.h>
#include <type_traits>
template<typename T>
DWORD_PTR ToDwordPtr(T val)
{
if constexpr (std::is_pointer_v<T>)
{
return reinterpret_cast<DWORD_PTR>(val);
}
else
{
return static_cast<DWORD_PTR>(val);
}
}
template<typename... Args>
std::wstring FormatString(IStringProvider& strings, UINT id, Args... args)
{
std::wstring format = strings.GetString(id);
if (format.empty()) return L"";
DWORD_PTR arguments[] = { ToDwordPtr(args)..., 0 };
LPWSTR buffer = nullptr;
FormatMessageW(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_STRING | FORMAT_MESSAGE_ARGUMENT_ARRAY,
format.c_str(),
0,
0,
reinterpret_cast<LPWSTR>(&buffer),
0,
reinterpret_cast<va_list*>(arguments));
if (buffer)
{
std::wstring result(buffer);
LocalFree(buffer);
return result;
}
return L"";
}
std::wstring get_usage(IStringProvider& strings)
{
return strings.GetString(IDS_USAGE);
}
std::wstring get_json(const std::vector<ProcessResult>& results)
{
json::JsonObject root;
json::JsonArray processes;
for (const auto& result : results)
{
json::JsonObject process;
process.SetNamedValue(L"pid", json::JsonValue::CreateNumberValue(result.pid));
process.SetNamedValue(L"name", json::JsonValue::CreateStringValue(result.name));
process.SetNamedValue(L"user", json::JsonValue::CreateStringValue(result.user));
json::JsonArray files;
for (const auto& file : result.files)
{
files.Append(json::JsonValue::CreateStringValue(file));
}
process.SetNamedValue(L"files", files);
processes.Append(process);
}
root.SetNamedValue(L"processes", processes);
return root.Stringify().c_str();
}
std::wstring get_text(const std::vector<ProcessResult>& results, IStringProvider& strings)
{
std::wstringstream ss;
if (results.empty())
{
ss << strings.GetString(IDS_NO_PROCESSES);
return ss.str();
}
ss << strings.GetString(IDS_HEADER);
for (const auto& result : results)
{
ss << result.pid << L"\t"
<< result.user << L"\t"
<< result.name << std::endl;
}
return ss.str();
}
std::wstring kill_processes(const std::vector<ProcessResult>& results, IProcessTerminator& terminator, IStringProvider& strings)
{
std::wstringstream ss;
for (const auto& result : results)
{
if (terminator.terminate(result.pid))
{
ss << FormatString(strings, IDS_TERMINATED, result.pid, result.name.c_str());
}
else
{
ss << FormatString(strings, IDS_FAILED_TERMINATE, result.pid, result.name.c_str());
}
}
return ss.str();
}
CommandResult run_command(int argc, wchar_t* argv[], IProcessFinder& finder, IProcessTerminator& terminator, IStringProvider& strings)
{
Logger::info("Parsing arguments");
if (argc < 2)
{
Logger::warn("No arguments provided");
return { 1, get_usage(strings) };
}
bool json_output = false;
bool kill = false;
bool wait = false;
int timeout_ms = -1;
std::vector<std::wstring> paths;
for (int i = 1; i < argc; ++i)
{
std::wstring arg = argv[i];
if (arg == L"--json")
{
json_output = true;
}
else if (arg == L"--kill")
{
kill = true;
}
else if (arg == L"--wait")
{
wait = true;
}
else if (arg == L"--timeout")
{
if (i + 1 < argc)
{
try
{
timeout_ms = std::stoi(argv[++i]);
}
catch (...)
{
Logger::error("Invalid timeout value");
return { 1, strings.GetString(IDS_ERROR_INVALID_TIMEOUT) };
}
}
else
{
Logger::error("Timeout argument missing");
return { 1, strings.GetString(IDS_ERROR_TIMEOUT_ARG) };
}
}
else if (arg == L"--help")
{
return { 0, get_usage(strings) };
}
else
{
paths.push_back(arg);
}
}
if (paths.empty())
{
Logger::error("No paths specified");
return { 1, strings.GetString(IDS_ERROR_NO_PATHS) };
}
Logger::info("Processing {} paths", paths.size());
if (wait)
{
std::wstringstream ss;
if (json_output)
{
Logger::warn("Wait is incompatible with JSON output");
ss << strings.GetString(IDS_WARN_JSON_WAIT);
json_output = false;
}
ss << strings.GetString(IDS_WAITING);
auto start_time = std::chrono::steady_clock::now();
while (true)
{
auto results = finder.find(paths);
if (results.empty())
{
Logger::info("Files unlocked");
ss << strings.GetString(IDS_UNLOCKED);
break;
}
if (timeout_ms >= 0)
{
auto current_time = std::chrono::steady_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(current_time - start_time).count();
if (elapsed > timeout_ms)
{
Logger::warn("Timeout waiting for files to be unlocked");
ss << strings.GetString(IDS_TIMEOUT);
return { 1, ss.str() };
}
}
Sleep(200);
}
return { 0, ss.str() };
}
auto results = finder.find(paths);
Logger::info("Found {} processes locking the files", results.size());
std::wstringstream output_ss;
if (kill)
{
Logger::info("Killing processes");
output_ss << kill_processes(results, terminator, strings);
// Re-check after killing
results = finder.find(paths);
Logger::info("Remaining processes: {}", results.size());
}
if (json_output)
{
output_ss << get_json(results) << std::endl;
}
else
{
output_ss << get_text(results, strings);
}
return { 0, output_ss.str() };
}


@@ -1,31 +0,0 @@
#pragma once
#include <vector>
#include <string>
#include "FileLocksmithLib/FileLocksmith.h"
#include <Windows.h>
struct CommandResult
{
int exit_code;
std::wstring output;
};
struct IProcessFinder
{
virtual std::vector<ProcessResult> find(const std::vector<std::wstring>& paths) = 0;
virtual ~IProcessFinder() = default;
};
struct IProcessTerminator
{
virtual bool terminate(DWORD pid) = 0;
virtual ~IProcessTerminator() = default;
};
struct IStringProvider
{
virtual std::wstring GetString(UINT id) = 0;
virtual ~IStringProvider() = default;
};
CommandResult run_command(int argc, wchar_t* argv[], IProcessFinder& finder, IProcessTerminator& terminator, IStringProvider& strings);


@@ -1,62 +0,0 @@
#include "resource.h"
#include <windows.h>
#include "../../../common/version/version.h"
#define APSTUDIO_READONLY_SYMBOLS
#include "winres.h"
#undef APSTUDIO_READONLY_SYMBOLS
/////////////////////////////////////////////////////////////////////////////
//
// Version
//
VS_VERSION_INFO VERSIONINFO
FILEVERSION FILE_VERSION
PRODUCTVERSION PRODUCT_VERSION
FILEFLAGSMASK 0x3fL
#ifdef _DEBUG
FILEFLAGS 0x1L
#else
FILEFLAGS 0x0L
#endif
FILEOS 0x40004L
FILETYPE 0x1L
FILESUBTYPE 0x0L
BEGIN
BLOCK "StringFileInfo"
BEGIN
BLOCK "040904b0"
BEGIN
VALUE "CompanyName", COMPANY_NAME
VALUE "FileDescription", "File Locksmith CLI"
VALUE "FileVersion", FILE_VERSION_STRING
VALUE "InternalName", "FileLocksmithCLI.exe"
VALUE "LegalCopyright", COPYRIGHT_NOTE
VALUE "OriginalFilename", "FileLocksmithCLI.exe"
VALUE "ProductName", PRODUCT_NAME
VALUE "ProductVersion", PRODUCT_VERSION_STRING
END
END
BLOCK "VarFileInfo"
BEGIN
VALUE "Translation", 0x409, 1200
END
END
STRINGTABLE
BEGIN
IDS_USAGE "Usage: FileLocksmithCLI.exe [options] <path1> [path2] ...\nOptions:\n --kill Kill processes locking the files\n --json Output results in JSON format\n --wait Wait for files to be unlocked\n --timeout Timeout in milliseconds for --wait\n --help Show this help message\n"
IDS_NO_PROCESSES "No processes found locking the file(s).\n"
IDS_HEADER "PID\tUser\tProcess\n"
IDS_TERMINATED "Terminated process %1!d! (%2)\n"
IDS_FAILED_TERMINATE "Failed to terminate process %1!d! (%2)\n"
IDS_FAILED_OPEN "Failed to open process %1!d! (%2)\n"
IDS_ERROR_NO_PATHS "Error: No paths specified.\n"
IDS_WARN_JSON_WAIT "Warning: --wait is incompatible with --json. Ignoring --json.\n"
IDS_WAITING "Waiting for files to be unlocked...\n"
IDS_UNLOCKED "Files unlocked.\n"
IDS_TIMEOUT "Timeout waiting for files to be unlocked.\n"
IDS_ERROR_INVALID_TIMEOUT "Error: Invalid timeout value.\n"
IDS_ERROR_TIMEOUT_ARG "Error: --timeout requires an argument.\n"
END


@@ -1,119 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Import Project="..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.props" Condition="Exists('..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.props')" />
<PropertyGroup Label="Globals">
<VCProjectVersion>17.0</VCProjectVersion>
<Keyword>Win32Proj</Keyword>
<ProjectGuid>{49D456D3-F485-45AF-8875-45B44F193DDC}</ProjectGuid>
<RootNamespace>FileLocksmithCLI</RootNamespace>
<WindowsTargetPlatformVersion>10.0</WindowsTargetPlatformVersion>
<ProjectName>FileLocksmithCLI</ProjectName>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<Import Project="..\..\..\..\deps\spdlog.props" />
<PropertyGroup Condition="'$(Configuration)'=='Debug'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)'=='Release'" Label="Configuration">
<ConfigurationType>Application</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="Shared">
</ImportGroup>
<ImportGroup Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<PropertyGroup>
<OutDir>..\..\..\..\$(Platform)\$(Configuration)</OutDir>
</PropertyGroup>
<ItemDefinitionGroup Condition="'$(Configuration)'=='Debug'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>false</ConformanceMode>
<AdditionalIncludeDirectories>$(ProjectDir)..;$(ProjectDir)..\..\..;$(ProjectDir)..\..;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PrecompiledHeader>Use</PrecompiledHeader>
<PrecompiledHeaderFile>pch.h</PrecompiledHeaderFile>
<RunCodeAnalysis>false</RunCodeAnalysis>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalDependencies>shlwapi.lib;ntdll.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)'=='Release'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>false</ConformanceMode>
<AdditionalIncludeDirectories>$(ProjectDir)..;$(ProjectDir)..\..\..;$(ProjectDir)..\..;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PrecompiledHeader>Use</PrecompiledHeader>
<PrecompiledHeaderFile>pch.h</PrecompiledHeaderFile>
<RunCodeAnalysis>false</RunCodeAnalysis>
</ClCompile>
<Link>
<SubSystem>Console</SubSystem>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
<GenerateDebugInformation>true</GenerateDebugInformation>
<AdditionalDependencies>shlwapi.lib;ntdll.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClCompile Include="CLILogic.cpp" />
<ClCompile Include="main.cpp" />
<ClCompile Include="pch.cpp">
<PrecompiledHeader Condition="'$(UsePrecompiledHeaders)' != 'false'">Create</PrecompiledHeader>
</ClCompile>
</ItemGroup>
<ItemGroup>
<ClInclude Include="CLILogic.h" />
<ClInclude Include="pch.h" />
<ClInclude Include="resource.h" />
</ItemGroup>
<ItemGroup>
<ResourceCompile Include="FileLocksmithCLI.rc" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\FileLocksmithLib\FileLocksmithLib.vcxproj">
<Project>{9d52fd25-ef90-4f9a-a015-91efc5daf54f}</Project>
</ProjectReference>
<ProjectReference Include="..\..\..\common\logger\logger.vcxproj">
<Project>{d9b8fc84-322a-4f9f-bbb9-20915c47ddfd}</Project>
</ProjectReference>
<ProjectReference Include="..\..\..\common\SettingsAPI\SettingsAPI.vcxproj">
<Project>{6955446d-23f7-4023-9bb3-8657f904af99}</Project>
</ProjectReference>
<ProjectReference Include="..\..\..\common\version\version.vcxproj">
<Project>{1248566c-272a-43c0-88d6-e6675d569a09}</Project>
</ProjectReference>
</ItemGroup>
<ItemGroup>
<None Include="packages.config" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
<Import Project="..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.targets" Condition="Exists('..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.targets')" />
</ImportGroup>
<Target Name="EnsureNuGetPackageBuildImports" BeforeTargets="PrepareForBuild">
<PropertyGroup>
<ErrorText>This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}.</ErrorText>
</PropertyGroup>
<Error Condition="!Exists('..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.props')" Text="$([System.String]::Format('$(ErrorText)', '..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.props'))" />
<Error Condition="!Exists('..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.targets')" Text="$([System.String]::Format('$(ErrorText)', '..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.targets'))" />
</Target>
</Project>


@@ -1,42 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Filter Include="Source Files">
<UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>
<Extensions>cpp;c;cc;cxx;c++;cppm;ixx;def;odl;idl;hpj;bat;asm;asmx</Extensions>
</Filter>
<Filter Include="Header Files">
<UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>
<Extensions>h;hh;hpp;hxx;h++;hm;inl;inc;ipp;xsd</Extensions>
</Filter>
<Filter Include="Resource Files">
<UniqueIdentifier>{67DA6AB6-F800-4c08-8B7A-83BB121AAD01}</UniqueIdentifier>
<Extensions>rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms</Extensions>
</Filter>
</ItemGroup>
<ItemGroup>
<ClCompile Include="CLILogic.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="main.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="pch.cpp">
<Filter>Source Files</Filter>
</ClCompile>
</ItemGroup>
<ItemGroup>
<ClInclude Include="CLILogic.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="pch.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="resource.h">
<Filter>Header Files</Filter>
</ClInclude>
</ItemGroup>
<ItemGroup>
<None Include="packages.config" />
</ItemGroup>
</Project>


@@ -1,71 +0,0 @@
#include "pch.h"
#include "CLILogic.h"
#include "FileLocksmithLib/FileLocksmith.h"
#include <iostream>
#include "resource.h"
#include <common/logger/logger.h>
#include <common/utils/logger_helper.h>
struct RealProcessFinder : IProcessFinder
{
std::vector<ProcessResult> find(const std::vector<std::wstring>& paths) override
{
return find_processes_recursive(paths);
}
};
struct RealProcessTerminator : IProcessTerminator
{
bool terminate(DWORD pid) override
{
HANDLE hProcess = OpenProcess(PROCESS_TERMINATE, FALSE, pid);
if (hProcess)
{
bool result = TerminateProcess(hProcess, 0);
CloseHandle(hProcess);
return result;
}
return false;
}
};
struct RealStringProvider : IStringProvider
{
std::wstring GetString(UINT id) override
{
wchar_t buffer[4096];
int len = LoadStringW(GetModuleHandle(NULL), id, buffer, ARRAYSIZE(buffer));
if (len > 0)
{
return std::wstring(buffer, len);
}
return L"";
}
};
#ifndef UNIT_TEST
int wmain(int argc, wchar_t* argv[])
{
winrt::init_apartment();
LoggerHelpers::init_logger(L"FileLocksmithCLI", L"", LogSettings::fileLocksmithLoggerName);
Logger::info("FileLocksmithCLI started");
RealProcessFinder finder;
RealProcessTerminator terminator;
RealStringProvider strings;
auto result = run_command(argc, argv, finder, terminator, strings);
if (result.exit_code != 0)
{
Logger::error("Command failed with exit code {}", result.exit_code);
}
else
{
Logger::info("Command succeeded");
}
std::wcout << result.output;
return result.exit_code;
}
#endif


@@ -1,4 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="Microsoft.Windows.CppWinRT" version="2.0.240111.5" targetFramework="native" />
</packages>


@@ -1 +0,0 @@
#include "pch.h"


@@ -1,22 +0,0 @@
#pragma once
#ifndef PCH_H
#define PCH_H
#define NOMINMAX
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <winternl.h>
#include <Psapi.h>
#include <shellapi.h>
#include <iostream>
#include <vector>
#include <string>
#include <map>
#include <set>
#include <algorithm>
#include <winrt/base.h>
#endif // PCH_H


@@ -1,16 +0,0 @@
// resource.h
#pragma once
#define IDS_USAGE 101
#define IDS_NO_PROCESSES 102
#define IDS_HEADER 103
#define IDS_TERMINATED 104
#define IDS_FAILED_TERMINATE 105
#define IDS_FAILED_OPEN 106
#define IDS_ERROR_NO_PATHS 107
#define IDS_WARN_JSON_WAIT 108
#define IDS_WAITING 109
#define IDS_UNLOCKED 110
#define IDS_TIMEOUT 111
#define IDS_ERROR_INVALID_TIMEOUT 112
#define IDS_ERROR_TIMEOUT_ARG 113


@@ -1,130 +0,0 @@
#include "pch.h"
#include "CppUnitTest.h"
#include "../CLILogic.h"
#include <map>
using namespace Microsoft::VisualStudio::CppUnitTestFramework;
namespace FileLocksmithCLIUnitTests
{
struct MockProcessFinder : IProcessFinder
{
std::vector<ProcessResult> results;
std::vector<ProcessResult> find(const std::vector<std::wstring>& paths) override
{
(void)paths;
return results;
}
};
struct MockProcessTerminator : IProcessTerminator
{
bool shouldSucceed = true;
std::vector<DWORD> terminatedPids;
bool terminate(DWORD pid) override
{
terminatedPids.push_back(pid);
return shouldSucceed;
}
};
struct MockStringProvider : IStringProvider
{
std::map<UINT, std::wstring> strings;
std::wstring GetString(UINT id) override
{
if (strings.count(id)) return strings[id];
return L"String_" + std::to_wstring(id);
}
};
TEST_CLASS(CLITests)
{
public:
TEST_METHOD(TestNoArgs)
{
MockProcessFinder finder;
MockProcessTerminator terminator;
MockStringProvider strings;
wchar_t* argv[] = { (wchar_t*)L"exe" };
auto result = run_command(1, argv, finder, terminator, strings);
Assert::AreEqual(1, result.exit_code);
}
TEST_METHOD(TestHelp)
{
MockProcessFinder finder;
MockProcessTerminator terminator;
MockStringProvider strings;
wchar_t* argv[] = { (wchar_t*)L"exe", (wchar_t*)L"--help" };
auto result = run_command(2, argv, finder, terminator, strings);
Assert::AreEqual(0, result.exit_code);
}
TEST_METHOD(TestFindProcesses)
{
MockProcessFinder finder;
finder.results = { { L"process", 123, L"user", { L"file1" } } };
MockProcessTerminator terminator;
MockStringProvider strings;
wchar_t* argv[] = { (wchar_t*)L"exe", (wchar_t*)L"file1" };
auto result = run_command(2, argv, finder, terminator, strings);
Assert::AreEqual(0, result.exit_code);
Assert::IsTrue(result.output.find(L"123") != std::wstring::npos);
Assert::IsTrue(result.output.find(L"process") != std::wstring::npos);
}
TEST_METHOD(TestJsonOutput)
{
MockProcessFinder finder;
finder.results = { { L"process", 123, L"user", { L"file1" } } };
MockProcessTerminator terminator;
MockStringProvider strings;
wchar_t* argv[] = { (wchar_t*)L"exe", (wchar_t*)L"file1", (wchar_t*)L"--json" };
auto result = run_command(3, argv, finder, terminator, strings);
Microsoft::VisualStudio::CppUnitTestFramework::Logger::WriteMessage(result.output.c_str());
Assert::AreEqual(0, result.exit_code);
Assert::IsTrue(result.output.find(L"\"pid\"") != std::wstring::npos);
Assert::IsTrue(result.output.find(L"123") != std::wstring::npos);
}
TEST_METHOD(TestKill)
{
MockProcessFinder finder;
finder.results = { { L"process", 123, L"user", { L"file1" } } };
MockProcessTerminator terminator;
MockStringProvider strings;
wchar_t* argv[] = { (wchar_t*)L"exe", (wchar_t*)L"file1", (wchar_t*)L"--kill" };
auto result = run_command(3, argv, finder, terminator, strings);
Assert::AreEqual(0, result.exit_code);
Assert::AreEqual((size_t)1, terminator.terminatedPids.size());
Assert::AreEqual((DWORD)123, terminator.terminatedPids[0]);
}
TEST_METHOD(TestTimeout)
{
MockProcessFinder finder;
// Always return results so it waits
finder.results = { { L"process", 123, L"user", { L"file1" } } };
MockProcessTerminator terminator;
MockStringProvider strings;
wchar_t* argv[] = { (wchar_t*)L"exe", (wchar_t*)L"file1", (wchar_t*)L"--wait", (wchar_t*)L"--timeout", (wchar_t*)L"100" };
auto result = run_command(5, argv, finder, terminator, strings);
Assert::AreEqual(1, result.exit_code);
}
};
}


@@ -1,84 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Import Project="..\..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.props" Condition="Exists('..\..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.props')" />
<PropertyGroup Label="Globals">
<ProjectGuid>{A1B2C3D4-E5F6-7890-1234-567890ABCDEF}</ProjectGuid>
<Keyword>Win32Proj</Keyword>
<RootNamespace>FileLocksmithCLIUnitTests</RootNamespace>
<ProjectName>FileLocksmithCLI.UnitTests</ProjectName>
<WindowsTargetPlatformVersion>10.0</WindowsTargetPlatformVersion>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<Import Project="..\..\..\..\..\deps\spdlog.props" />
<PropertyGroup Label="Configuration">
<PlatformToolset>v143</PlatformToolset>
<ConfigurationType>DynamicLibrary</ConfigurationType>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="Shared">
</ImportGroup>
<ImportGroup Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<PropertyGroup>
<OutDir>..\..\..\..\..\$(Platform)\$(Configuration)\tests\FileLocksmithCLI\</OutDir>
</PropertyGroup>
<ItemDefinitionGroup>
<ClCompile>
<AdditionalIncludeDirectories>..\;..\..\;..\..\..\..\;$(VCInstallDir)UnitTest\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PreprocessorDefinitions>WIN32;UNIT_TEST;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<UseFullPaths>true</UseFullPaths>
<PrecompiledHeader>Use</PrecompiledHeader>
<PrecompiledHeaderFile>pch.h</PrecompiledHeaderFile>
<DisableSpecificWarnings>26466;%(DisableSpecificWarnings)</DisableSpecificWarnings>
</ClCompile>
<Link>
<AdditionalLibraryDirectories>$(VCInstallDir)UnitTest\lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>shlwapi.lib;ntdll.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClInclude Include="pch.h" />
</ItemGroup>
<ItemGroup>
<ClCompile Include="pch.cpp">
<PrecompiledHeader Condition="'$(UsePrecompiledHeaders)' != 'false'">Create</PrecompiledHeader>
</ClCompile>
<ClCompile Include="FileLocksmithCLITests.cpp" />
<ClCompile Include="..\CLILogic.cpp">
<PrecompiledHeader>NotUsing</PrecompiledHeader>
</ClCompile>
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\..\FileLocksmithLib\FileLocksmithLib.vcxproj">
<Project>{9d52fd25-ef90-4f9a-a015-91efc5daf54f}</Project>
</ProjectReference>
<ProjectReference Include="..\..\..\..\common\logger\logger.vcxproj">
<Project>{d9b8fc84-322a-4f9f-bbb9-20915c47ddfd}</Project>
</ProjectReference>
<ProjectReference Include="..\..\..\..\common\SettingsAPI\SettingsAPI.vcxproj">
<Project>{6955446d-23f7-4023-9bb3-8657f904af99}</Project>
</ProjectReference>
<ProjectReference Include="..\..\..\..\common\version\version.vcxproj">
<Project>{1248566c-272a-43c0-88d6-e6675d569a09}</Project>
</ProjectReference>
</ItemGroup>
<ItemGroup>
<None Include="packages.config" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
<Import Project="..\..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.targets" Condition="Exists('..\..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.targets')" />
</ImportGroup>
<Target Name="EnsureNuGetPackageBuildImports" BeforeTargets="PrepareForBuild">
<PropertyGroup>
<ErrorText>This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them. For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}.</ErrorText>
</PropertyGroup>
<Error Condition="!Exists('..\..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.props')" Text="$([System.String]::Format('$(ErrorText)', '..\..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.props'))" />
<Error Condition="!Exists('..\..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.targets')" Text="$([System.String]::Format('$(ErrorText)', '..\..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.targets'))" />
</Target>
</Project>


@@ -1,4 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="Microsoft.Windows.CppWinRT" version="2.0.240111.5" targetFramework="native" />
</packages>


@@ -1 +0,0 @@
#include "pch.h"


@@ -1,9 +0,0 @@
#pragma once
#include <winrt/base.h>
#include <Windows.h>
#include <shellapi.h>
#include <string>
#include <vector>
#include <iostream>
#include <sstream>
#include "CppUnitTest.h"


@@ -1,9 +0,0 @@
#pragma once
#include "ProcessResult.h"
// Second version, checks handles towards files and all subfiles and folders of given dirs, if any.
std::vector<ProcessResult> find_processes_recursive(const std::vector<std::wstring>& paths);
// Gives the full path of the executable, given the process id
std::wstring pid_to_full_path(DWORD pid);


@@ -34,9 +34,9 @@
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>WIN32;_DEBUG;_LIB;FILELOCKSMITH_LIB_STATIC;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<PreprocessorDefinitions>WIN32;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
<AdditionalIncludeDirectories>..\FileLocksmithLibInterop;../../..;../..;</AdditionalIncludeDirectories>
<AdditionalIncludeDirectories>../../..;../..;</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>
@@ -50,9 +50,9 @@
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>WIN32;NDEBUG;_LIB;FILELOCKSMITH_LIB_STATIC;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<PreprocessorDefinitions>WIN32;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
<AdditionalIncludeDirectories>..\FileLocksmithLibInterop;../../..;../..;</AdditionalIncludeDirectories>
<AdditionalIncludeDirectories>../../..;../..;</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>
@@ -68,15 +68,13 @@
<ClInclude Include="Settings.h" />
<ClInclude Include="Trace.h" />
<ClInclude Include="framework.h" />
<ClInclude Include="pch.h" />
</ItemGroup>
<ItemGroup>
<ClCompile Include="IPC.cpp" />
<ClCompile Include="Settings.cpp" />
<ClCompile Include="Trace.cpp" />
<ClCompile Include="FileLocksmithLib.cpp" />
<ClCompile Include="..\FileLocksmithLibInterop\FileLocksmith.cpp" />
<ClCompile Include="..\FileLocksmithLibInterop\NtdllBase.cpp" />
<ClCompile Include="..\FileLocksmithLibInterop\NtdllExtensions.cpp" />
<ClCompile Include="pch.cpp">
<PrecompiledHeader Condition="'$(UsePrecompiledHeaders)' != 'false'">Create</PrecompiledHeader>
</ClCompile>


@@ -38,15 +38,6 @@
<ClCompile Include="FileLocksmithLib.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="FileLocksmith.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="NtdllBase.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="NtdllExtensions.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="pch.cpp">
<Filter>Source Files</Filter>
</ClCompile>


@@ -1,12 +0,0 @@
#pragma once
#include <string>
#include <vector>
#include <Windows.h>
struct ProcessResult
{
std::wstring name;
DWORD pid;
std::wstring user;
std::vector<std::wstring> files;
};


@@ -2,7 +2,6 @@
#include "Settings.h"
#include "Constants.h"
#include <filesystem>
#include <common/utils/json.h>
#include <common/SettingsAPI/settings_helpers.h>


@@ -0,0 +1,13 @@
// pch.h: This is a precompiled header file.
// Files listed below are compiled only once, improving build performance for future builds.
// This also affects IntelliSense performance, including code completion and many code browsing features.
// However, files listed here are ALL re-compiled if any one of them is updated between builds.
// Do not add files here that you will be updating frequently as this negates the performance advantage.
#ifndef PCH_H
#define PCH_H
// add headers that you want to pre-compile here
#include "framework.h"
#endif //PCH_H


@@ -18,6 +18,4 @@
#include <algorithm>
#include <fstream>
#ifndef FILELOCKSMITH_LIB_STATIC
#include <winrt/PowerToys.Interop.h>
#endif


@@ -113,7 +113,7 @@ namespace Hosts.UITests
this.Find<NavigationViewItem>("Hosts File Editor").Click();
this.Find<ToggleSwitch>("Hosts File Editor").Toggle(true);
this.Find<ToggleSwitch>("Enable Hosts File Editor").Toggle(true);
this.Find<ToggleSwitch>("Launch as administrator").Toggle(launchAsAdmin);
this.Find<ToggleSwitch>("Show a warning at startup").Toggle(showWarning);


@@ -1,123 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Import Project="..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.props" Condition="Exists('..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.props')" />
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug|x64">
<Configuration>Debug</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|x64">
<Configuration>Release</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug|ARM64">
<Configuration>Debug</Configuration>
<Platform>ARM64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|ARM64">
<Configuration>Release</Configuration>
<Platform>ARM64</Platform>
</ProjectConfiguration>
</ItemGroup>
<PropertyGroup Label="Globals">
<VCProjectVersion>17.0</VCProjectVersion>
<Keyword>Win32Proj</Keyword>
<ProjectGuid>{79267138-2895-4346-9021-21408d65379f}</ProjectGuid>
<RootNamespace>LightSwitchLib</RootNamespace>
<WindowsTargetPlatformVersion>10.0.26100.0</WindowsTargetPlatformVersion>
<ProjectName>LightSwitchLib</ProjectName>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|ARM64'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|ARM64'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v143</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>Unicode</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="Shared">
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|ARM64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|ARM64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<PropertyGroup>
<OutDir>..\..\..\..\$(Platform)\$(Configuration)\$(MSBuildProjectName)\</OutDir>
</PropertyGroup>
<ItemDefinitionGroup>
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<SDLCheck>true</SDLCheck>
<PreprocessorDefinitions>_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ConformanceMode>true</ConformanceMode>
<PrecompiledHeader>Use</PrecompiledHeader>
<PrecompiledHeaderFile>pch.h</PrecompiledHeaderFile>
<AdditionalIncludeDirectories>
./;
..\..\..\common;
..\..\..\common\logger;
..\..\..\common\utils;
..\..\..\..\deps\spdlog\include;
%(AdditionalIncludeDirectories)
</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<SubSystem>Windows</SubSystem>
<GenerateDebugInformation>true</GenerateDebugInformation>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClInclude Include="pch.h" />
<ClInclude Include="ThemeHelper.h" />
</ItemGroup>
<ItemGroup>
<ClCompile Include="pch.cpp">
<PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">Create</PrecompiledHeader>
<PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Debug|ARM64'">Create</PrecompiledHeader>
<PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Release|x64'">Create</PrecompiledHeader>
<PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Release|ARM64'">Create</PrecompiledHeader>
</ClCompile>
<ClCompile Include="ThemeHelper.cpp" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\..\..\common\logger\logger.vcxproj">
<Project>{d9b8fc84-322a-4f9f-bbb9-20915c47ddfd}</Project>
</ProjectReference>
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<Import Project="..\..\..\..\deps\spdlog.props" />
<ImportGroup Label="ExtensionTargets">
<Import Project="..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.targets" Condition="Exists('..\..\..\..\packages\Microsoft.Windows.CppWinRT.2.0.240111.5\build\native\Microsoft.Windows.CppWinRT.targets')" />
</ImportGroup>
</Project>


@@ -1,33 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Filter Include="Source Files">
<UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>
<Extensions>cpp;c;cc;cxx;c++;cppm;ixx;def;odl;idl;hpj;bat;asm;asmx</Extensions>
</Filter>
<Filter Include="Header Files">
<UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>
<Extensions>h;hh;hpp;hxx;h++;hm;inl;inc;ipp;xsd</Extensions>
</Filter>
<Filter Include="Resource Files">
<UniqueIdentifier>{67DA6AB6-F800-4c08-8B7A-83BB121AAD01}</UniqueIdentifier>
<Extensions>rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms</Extensions>
</Filter>
</ItemGroup>
<ItemGroup>
<ClInclude Include="pch.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="ThemeHelper.h">
<Filter>Header Files</Filter>
</ClInclude>
</ItemGroup>
<ItemGroup>
<ClCompile Include="pch.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="ThemeHelper.cpp">
<Filter>Source Files</Filter>
</ClCompile>
</ItemGroup>
</Project>


@@ -1,10 +0,0 @@
#pragma once
inline constexpr wchar_t PERSONALIZATION_REGISTRY_PATH[] = L"Software\\Microsoft\\Windows\\CurrentVersion\\Themes\\Personalize";
inline constexpr wchar_t NIGHT_LIGHT_REGISTRY_PATH[] = L"Software\\Microsoft\\Windows\\CurrentVersion\\CloudStore\\Store\\DefaultAccount\\Current\\default$windows.data.bluelightreduction.bluelightreductionstate\\windows.data.bluelightreduction.bluelightreductionstate";
void SetSystemTheme(bool isLight);
void SetAppsTheme(bool isLight);
bool GetCurrentSystemTheme();
bool GetCurrentAppsTheme();
bool IsNightLightEnabled();

Some files were not shown because too many files have changed in this diff.