Compare commits

...

2 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Gordon Lam (SH) | 336f7d8c47 | Fix spelling | 2026-01-08 18:26:22 +08:00 |
| Gordon Lam (SH) | 045c33930e | Docs: consolidate instructions and fix prompt frontmatter | 2026-01-08 18:14:13 +08:00 |
17 changed files with 1683 additions and 115 deletions


@@ -1,59 +1,45 @@
---
description: PowerToys AI contributor guidance.
applyTo: pullRequests
description: 'PowerToys AI contributor guidance'
---
# PowerToys - Copilot guide (concise)
# PowerToys Copilot Instructions
This is the top-level guide for AI changes. Keep edits small, follow existing patterns, and cite exact paths in PRs.
Concise guidance for AI contributions. For complete details, see [AGENTS.md](../AGENTS.md).
# Repo map (1-line per area)
- Core apps: `src/runner/**` (tray/loader), `src/settings-ui/**` (Settings app)
- Shared libs: `src/common/**`
- Modules: `src/modules/*` (one per utility; Command Palette in `src/modules/cmdpal/**`)
- Build tools/docs: `tools/**`, `doc/devdocs/**`
## Quick Reference
# Build and test (defaults)
- Prerequisites: Visual Studio 2022 17.4+, Windows 10 1803 or newer.
- Build discipline:
- One terminal per operation (build -> test). Do not switch or open new ones mid-flow.
- After making changes, `cd` to the project folder that changed (`.csproj`/`.vcxproj`).
- Use the build scripts and block in the foreground until they complete: `tools/build/build.ps1|.cmd` (current folder), `build-essentials.*` (once per fresh build to restore missing NuGet packages).
- Treat build exit code 0 as success; any non-zero exit code is a failure. Read the errors log in the build folder (such as `build.*.*.errors.log`) and surface problems.
- Do not start tests or launch Runner until the previous step succeeded.
- Tests (fast and targeted):
- Find the test project by product code prefix (for example FancyZones, AdvancedPaste). Look for a sibling folder or one to two levels up named like `<Product>*UnitTests` or `<Product>*UITests`.
- Build the test project, wait for exit, then run only those tests via VS Test Explorer or `vstest.console.exe` with filters. Avoid `dotnet test` in this repo.
- Add or adjust tests when changing behavior; if skipped, state why (for example comment-only or string rename).
- **Build**: `tools\build\build-essentials.cmd` (first time), then `tools\build\build.cmd`
- **Tests**: Find `<Product>*UnitTests` project, build it, run via VS Test Explorer
- **Exit code 0 = success**; do not proceed if the build fails
# Pull requests (expectations)
- Atomic: one logical change; no drive-by refactors.
- Describe: problem, approach, risk, test evidence.
- List: touched paths if not obvious.
## Key Rules
# When to ask for clarification
- Ambiguous spec after scanning relevant docs (see below).
- Cross-module impact (shared enum or struct) not clear.
- Security, elevation, or installer changes.
- One terminal per operation (build → test)
- Atomic PRs: one logical change, no drive-by refactors
- Add tests when changing behavior
- Keep hot paths quiet (no logging in hooks/tight loops)
# Logging (use existing stacks)
- C++ logging lives in `src/common/logger/**` (`Logger::info`, `Logger::warn`, `Logger::error`, `Logger::debug`). Keep hot paths quiet (hooks, tight loops).
- C# logging goes through `ManagedCommon.Logger` (`LogInfo`, `LogWarning`, `LogError`, `LogDebug`, `LogTrace`). Some UIs use injected `ILogger` via `LoggerInstance.Logger`.
## Style Enforcement
# Docs to consult
- `tools/build/BUILD-GUIDELINES.md`
- `doc/devdocs/core/architecture.md`
- `doc/devdocs/core/runner.md`
- `doc/devdocs/core/settings/readme.md`
- `doc/devdocs/modules/readme.md`
- C#: `src/.editorconfig`, StyleCop.Analyzers
- C++: `src/.clang-format`
- XAML: XamlStyler
# Language style rules
- Always enforce repo analyzers: `src/.editorconfig` plus any `stylecop.json`.
- C# code follows StyleCop.Analyzers and Microsoft.CodeAnalysis.NetAnalyzers.
- C++ code honors `src/.clang-format` for formatting.
- Markdown files wrap at 80 characters and use ATX headers with fenced code blocks that include language tags.
- YAML files indent two spaces and add comments for complex settings while keeping keys clear.
- PowerShell scripts use Verb-Noun names and prefer single-quoted literals while documenting parameters and satisfying PSScriptAnalyzer.
## When to Ask for Clarification
# Done checklist (self review before finishing)
- Build clean? Tests updated or passed? No unintended formatting? Any new dependency? Documented skips?
- Ambiguous spec after scanning docs
- Cross-module impact unclear
- Security, elevation, or installer changes
## Component-Specific Instructions
These are auto-applied based on file location:
- [Runner & Settings UI](.github/instructions/runner-settings-ui.instructions.md)
- [Common Libraries](.github/instructions/common-libraries.instructions.md)
## Detailed Documentation
- [AGENTS.md](../AGENTS.md) - Full contributor guide
- [Build Guidelines](../tools/build/BUILD-GUIDELINES.md)
- [Architecture](../doc/devdocs/core/architecture.md)
- [Coding Style](../doc/devdocs/development/style.md)


@@ -0,0 +1,791 @@
---
description: 'Guidelines for creating custom agent files for GitHub Copilot'
applyTo: '**/*.agent.md'
---
# Custom Agent File Guidelines
Instructions for creating effective and maintainable custom agent files that provide specialized expertise for specific development tasks in GitHub Copilot.
## Project Context
- Target audience: Developers creating custom agents for GitHub Copilot
- File format: Markdown with YAML frontmatter
- File naming convention: lowercase with hyphens (e.g., `test-specialist.agent.md`)
- Location: `.github/agents/` directory (repository-level) or `agents/` directory (organization/enterprise-level)
- Purpose: Define specialized agents with tailored expertise, tools, and instructions for specific tasks
- Official documentation: https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/create-custom-agents
## Required Frontmatter
Every agent file must include YAML frontmatter with the following fields:
```yaml
---
description: 'Brief description of the agent purpose and capabilities'
name: 'Agent Display Name'
tools: ['read', 'edit', 'search']
model: 'Claude Sonnet 4.5'
target: 'vscode'
infer: true
---
```
### Core Frontmatter Properties
#### **description** (REQUIRED)
- Single-quoted string, clearly stating the agent's purpose and domain expertise
- Should be concise (50-150 characters) and actionable
- Example: `'Focuses on test coverage, quality, and testing best practices'`
#### **name** (OPTIONAL)
- Display name for the agent in the UI
- If omitted, defaults to filename (without `.md` or `.agent.md`)
- Use title case and be descriptive
- Example: `'Testing Specialist'`
#### **tools** (OPTIONAL)
- List of tool names or aliases the agent can use
- Supports comma-separated string or YAML array format
- If omitted, agent has access to all available tools
- See "Tool Configuration" section below for details
#### **model** (STRONGLY RECOMMENDED)
- Specifies which AI model the agent should use
- Supported in VS Code, JetBrains IDEs, Eclipse, and Xcode
- Example: `'Claude Sonnet 4.5'`, `'gpt-4'`, `'gpt-4o'`
- Choose based on agent complexity and required capabilities
#### **target** (OPTIONAL)
- Specifies target environment: `'vscode'` or `'github-copilot'`
- If omitted, agent is available in both environments
- Use when agent has environment-specific features
#### **infer** (OPTIONAL)
- Boolean controlling whether Copilot can automatically use this agent based on context
- Default: `true` if omitted
- Set to `false` to require manual agent selection
#### **metadata** (OPTIONAL, GitHub.com only)
- Object with name-value pairs for agent annotation
- Example: `metadata: { category: 'testing', version: '1.0' }`
- Not supported in VS Code
#### **mcp-servers** (OPTIONAL, Organization/Enterprise only)
- Configure MCP servers available only to this agent
- Only supported for organization/enterprise level agents
- See "MCP Server Configuration" section below
## Tool Configuration
### Tool Specification Strategies
**Enable all tools** (default):
```yaml
# Omit tools property entirely, or use:
tools: ['*']
```
**Enable specific tools**:
```yaml
tools: ['read', 'edit', 'search', 'execute']
```
**Enable MCP server tools**:
```yaml
tools: ['read', 'edit', 'github/*', 'playwright/navigate']
```
**Disable all tools**:
```yaml
tools: []
```
### Standard Tool Aliases
All aliases are case-insensitive:
| Alias | Alternative Names | Category | Description |
|-------|------------------|----------|-------------|
| `execute` | shell, Bash, powershell | Shell execution | Execute commands in appropriate shell |
| `read` | Read, NotebookRead, view | File reading | Read file contents |
| `edit` | Edit, MultiEdit, Write, NotebookEdit | File editing | Edit and modify files |
| `search` | Grep, Glob, search | Code search | Search for files or text in files |
| `agent` | custom-agent, Task | Agent invocation | Invoke other custom agents |
| `web` | WebSearch, WebFetch | Web access | Fetch web content and search |
| `todo` | TodoWrite | Task management | Create and manage task lists (VS Code only) |
### Built-in MCP Server Tools
**GitHub MCP Server**:
```yaml
tools: ['github/*'] # All GitHub tools
tools: ['github/get_file_contents', 'github/search_repositories'] # Specific tools
```
- All read-only tools available by default
- Token scoped to source repository
**Playwright MCP Server**:
```yaml
tools: ['playwright/*'] # All Playwright tools
tools: ['playwright/navigate', 'playwright/screenshot'] # Specific tools
```
- Configured to access localhost only
- Useful for browser automation and testing
### Tool Selection Best Practices
- **Principle of Least Privilege**: Only enable tools necessary for the agent's purpose
- **Security**: Limit `execute` access unless explicitly required
- **Focus**: Fewer tools = clearer agent purpose and better performance
- **Documentation**: Comment why specific tools are required for complex configurations
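As a small illustration of these practices, a read-only review agent could declare only the tools it needs and note why; the name and description below are illustrative:

```yaml
---
name: 'Code Reviewer'
description: 'Reviews code quality and suggests improvements without modifying files'
# Least privilege: review feedback never needs 'edit' or 'execute'.
tools: ['read', 'search']
---
```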
## Sub-Agent Invocation (Agent Orchestration)
Agents can invoke other agents using `runSubagent` to orchestrate multi-step workflows.
### How It Works
Include `agent` in tools list to enable sub-agent invocation:
```yaml
tools: ['read', 'edit', 'search', 'agent']
```
Then invoke other agents with `runSubagent`:
```javascript
const result = await runSubagent({
description: 'What this step does',
prompt: `You are the [Specialist] specialist.
Context:
- Parameter: ${parameterValue}
- Input: ${inputPath}
- Output: ${outputPath}
Task:
1. Do the specific work
2. Write results to output location
3. Return summary of completion`
});
```
### Basic Pattern
Structure each sub-agent call with:
1. **description**: Clear one-line purpose of the sub-agent invocation
2. **prompt**: Detailed instructions with substituted variables
The prompt should include:
- Who the sub-agent is (specialist role)
- What context it needs (parameters, paths)
- What to do (concrete tasks)
- Where to write output
- What to return (summary)
### Example: Multi-Step Processing
```javascript
// Step 1: Process data
const processing = await runSubagent({
description: 'Transform raw input data',
prompt: `You are the Data Processor specialist.
Project: ${projectName}
Input: ${basePath}/raw/
Output: ${basePath}/processed/
Task:
1. Read all files from input directory
2. Apply transformations
3. Write processed files to output
4. Create summary: ${basePath}/processed/summary.md
Return: Number of files processed and any issues found`
});
// Step 2: Analyze (depends on Step 1)
const analysis = await runSubagent({
description: 'Analyze processed data',
prompt: `You are the Data Analyst specialist.
Project: ${projectName}
Input: ${basePath}/processed/
Output: ${basePath}/analysis/
Task:
1. Read processed files from input
2. Generate analysis report
3. Write to: ${basePath}/analysis/report.md
Return: Key findings and identified patterns`
});
```
### Key Points
- **Pass variables in prompts**: Use `${variableName}` for all dynamic values
- **Keep prompts focused**: Clear, specific tasks for each sub-agent
- **Return summaries**: Each sub-agent should report what it accomplished
- **Sequential execution**: Use `await` to maintain order when steps depend on each other
- **Error handling**: Check results before proceeding to dependent steps
### ⚠️ Tool Availability Requirement
**Critical**: If a sub-agent requires specific tools (e.g., `edit`, `execute`, `search`), the orchestrator must include those tools in its own `tools` list. Sub-agents cannot access tools that aren't available to their parent orchestrator.
**Example**:
```yaml
# If your sub-agents need to edit files, execute commands, or search code
tools: ['read', 'edit', 'search', 'execute', 'agent']
```
The orchestrator's tool permissions act as a ceiling for all invoked sub-agents. Plan your tool list carefully to ensure all sub-agents have the tools they need.
### ⚠️ Important Limitation
**Sub-agent orchestration is NOT suitable for large-scale data processing.** Avoid using `runSubagent` when:
- Processing hundreds or thousands of files
- Handling large datasets
- Performing bulk transformations on big codebases
- Orchestrating more than 5-10 sequential steps
Each sub-agent call adds latency and context overhead. For high-volume processing, implement logic directly in a single agent instead. Use orchestration only for coordinating specialized tasks on focused, manageable datasets.
## Agent Prompt Structure
The markdown content below the frontmatter defines the agent's behavior, expertise, and instructions. Well-structured prompts typically include:
1. **Agent Identity and Role**: Who the agent is and its primary role
2. **Core Responsibilities**: What specific tasks the agent performs
3. **Approach and Methodology**: How the agent works to accomplish tasks
4. **Guidelines and Constraints**: What to do/avoid and quality standards
5. **Output Expectations**: Expected output format and quality
### Prompt Writing Best Practices
- **Be Specific and Direct**: Use imperative mood ("Analyze", "Generate"); avoid vague terms
- **Define Boundaries**: Clearly state scope limits and constraints
- **Include Context**: Explain domain expertise and reference relevant frameworks
- **Focus on Behavior**: Describe how the agent should think and work
- **Use Structured Format**: Headers, bullets, and lists make prompts scannable
## Variable Definition and Extraction
Agents can define dynamic parameters to extract values from user input and use them throughout the agent's behavior and sub-agent communications. This enables flexible, context-aware agents that adapt to user-provided data.
### When to Use Variables
**Use variables when**:
- Agent behavior depends on user input
- Need to pass dynamic values to sub-agents
- Want to make agents reusable across different contexts
- Require parameterized workflows
- Need to track or reference user-provided context
**Examples**:
- Extract project name from user prompt
- Capture certification name for pipeline processing
- Identify file paths or directories
- Extract configuration options
- Parse feature names or module identifiers
### Variable Declaration Pattern
Define variables section early in the agent prompt to document expected parameters:
```markdown
# Agent Name
## Dynamic Parameters
- **Parameter Name**: Description and usage
- **Another Parameter**: How it's extracted and used
## Your Mission
Process [PARAMETER_NAME] to accomplish [task].
```
### Variable Extraction Methods
#### 1. **Explicit User Input**
Ask the user to provide the variable if not detected in the prompt:
```markdown
## Your Mission
Process the project by analyzing your codebase.
### Step 1: Identify Project
If no project name is provided, **ASK THE USER** for:
- Project name or identifier
- Base path or directory location
- Configuration type (if applicable)
Use this information to contextualize all subsequent tasks.
```
#### 2. **Implicit Extraction from Prompt**
Automatically extract variables from the user's natural language input:
```javascript
// Example: Extract certification name from user input
const userInput = "Process My Certification";
// Extract key information
const certificationName = extractCertificationName(userInput);
// Result: "My Certification"
const basePath = `certifications/${certificationName}`;
// Result: "certifications/My Certification"
```
#### 3. **Contextual Variable Resolution**
Use file context or workspace information to derive variables:
```markdown
## Variable Resolution Strategy
1. **From User Prompt**: First, look for explicit mentions in user input
2. **From File Context**: Check current file name or path
3. **From Workspace**: Use workspace folder or active project
4. **From Settings**: Reference configuration files
5. **Ask User**: If all else fails, request missing information
```
### Using Variables in Agent Prompts
#### Variable Substitution in Instructions
Use template variables in agent prompts to make them dynamic:
```markdown
# Agent Name
## Dynamic Parameters
- **Project Name**: ${projectName}
- **Base Path**: ${basePath}
- **Output Directory**: ${outputDir}
## Your Mission
Process the **${projectName}** project located at `${basePath}`.
## Process Steps
1. Read input from: `${basePath}/input/`
2. Process files according to project configuration
3. Write results to: `${outputDir}/`
4. Generate summary report
## Quality Standards
- Maintain project-specific coding standards for **${projectName}**
- Follow directory structure: `${basePath}/[structure]`
```
#### Passing Variables to Sub-Agents
When invoking a sub-agent, pass all context through template variables in the prompt:
```javascript
// Extract and prepare variables
const basePath = `projects/${projectName}`;
const inputPath = `${basePath}/src/`;
const outputPath = `${basePath}/docs/`;
// Pass to sub-agent with all variables substituted
const result = await runSubagent({
description: 'Generate project documentation',
prompt: `You are the Documentation specialist.
Project: ${projectName}
Input: ${inputPath}
Output: ${outputPath}
Task:
1. Read source files from ${inputPath}
2. Generate comprehensive documentation
3. Write to ${outputPath}/index.md
4. Include code examples and usage guides
Return: Summary of documentation generated (file count, word count)`
});
```
The sub-agent receives all necessary context embedded in the prompt. Variables are resolved before sending the prompt, so the sub-agent works with concrete paths and values, not variable placeholders.
### Real-World Example: Code Review Orchestrator
Example of a simple orchestrator that validates code through multiple specialized agents:
```javascript
async function reviewCodePipeline(repositoryName, prNumber) {
  const basePath = `projects/${repositoryName}/pr-${prNumber}`;

  // Step 1: Security Review
  const security = await runSubagent({
    description: 'Scan for security vulnerabilities',
    prompt: `You are the Security Reviewer specialist.
Repository: ${repositoryName}
PR: ${prNumber}
Code: ${basePath}/changes/
Task:
1. Scan code for OWASP Top 10 vulnerabilities
2. Check for injection attacks, auth flaws
3. Write findings to ${basePath}/security-review.md
Return: List of critical, high, and medium issues found`
  });

  // Step 2: Test Coverage Check
  const coverage = await runSubagent({
    description: 'Verify test coverage for changes',
    prompt: `You are the Test Coverage specialist.
Repository: ${repositoryName}
PR: ${prNumber}
Changes: ${basePath}/changes/
Task:
1. Analyze code coverage for modified files
2. Identify untested critical paths
3. Write report to ${basePath}/coverage-report.md
Return: Current coverage percentage and gaps`
  });

  // Step 3: Aggregate Results
  const finalReport = await runSubagent({
    description: 'Compile all review findings',
    prompt: `You are the Review Aggregator specialist.
Repository: ${repositoryName}
Reports: ${basePath}/*.md
Task:
1. Read all review reports from ${basePath}/
2. Synthesize findings into single report
3. Determine overall verdict (APPROVE/NEEDS_FIXES/BLOCK)
4. Write to ${basePath}/final-review.md
Return: Final verdict and executive summary`
  });

  return finalReport;
}
```
This pattern applies to any orchestration scenario: extract variables, call sub-agents with clear context, await results.
### Variable Best Practices
#### 1. **Clear Documentation**
Always document what variables are expected:
```markdown
## Required Variables
- **projectName**: The name of the project (string, required)
- **basePath**: Root directory for project files (path, required)
## Optional Variables
- **mode**: Processing mode - quick/standard/detailed (enum, default: standard)
- **outputFormat**: Output format - markdown/json/html (enum, default: markdown)
## Derived Variables
- **outputDir**: Automatically set to ${basePath}/output
- **logFile**: Automatically set to ${basePath}/.log.md
```
#### 2. **Consistent Naming**
Use consistent variable naming conventions:
```javascript
// Good: Clear, descriptive naming
const variables = {
projectName, // What project to work on
basePath, // Where project files are located
outputDirectory, // Where to save results
processingMode, // How to process (detail level)
configurationPath // Where config files are
};
// Avoid: Ambiguous or inconsistent
const bad_variables = {
name, // Too generic
path, // Unclear which path
mode, // Too short
config // Too vague
};
```
#### 3. **Validation and Constraints**
Document valid values and constraints:
```markdown
## Variable Constraints
**projectName**:
- Type: string (alphanumeric, hyphens, underscores allowed)
- Length: 1-100 characters
- Required: yes
- Pattern: `/^[a-zA-Z0-9_-]+$/`
**processingMode**:
- Type: enum
- Valid values: "quick" (< 5min), "standard" (5-15min), "detailed" (15+ min)
- Default: "standard"
- Required: no
```
## MCP Server Configuration (Organization/Enterprise Only)
MCP servers extend agent capabilities with additional tools. Only supported for organization and enterprise-level agents.
### Configuration Format
```yaml
---
name: my-custom-agent
description: 'Agent with MCP integration'
tools: ['read', 'edit', 'custom-mcp/tool-1']
mcp-servers:
  custom-mcp:
    type: 'local'
    command: 'some-command'
    args: ['--arg1', '--arg2']
    tools: ["*"]
    env:
      ENV_VAR_NAME: ${{ secrets.API_KEY }}
---
```
### MCP Server Properties
- **type**: Server type (`'local'` or `'stdio'`)
- **command**: Command to start the MCP server
- **args**: Array of command arguments
- **tools**: Tools to enable from this server (`["*"]` for all)
- **env**: Environment variables (supports secrets)
### Environment Variables and Secrets
Secrets must be configured in the repository settings under the "copilot" environment.
**Supported syntax**:
```yaml
env:
  # Environment variable only
  VAR_NAME: COPILOT_MCP_ENV_VAR_VALUE
  # Variable with header
  VAR_NAME: $COPILOT_MCP_ENV_VAR_VALUE
  VAR_NAME: ${COPILOT_MCP_ENV_VAR_VALUE}
  # GitHub Actions-style (YAML only)
  VAR_NAME: ${{ secrets.COPILOT_MCP_ENV_VAR_VALUE }}
  VAR_NAME: ${{ var.COPILOT_MCP_ENV_VAR_VALUE }}
```
## File Organization and Naming
### Repository-Level Agents
- Location: `.github/agents/`
- Scope: Available only in the specific repository
- Access: Uses repository-configured MCP servers
### Organization/Enterprise-Level Agents
- Location: `.github-private/agents/` (then move to `agents/` root)
- Scope: Available across all repositories in org/enterprise
- Access: Can configure dedicated MCP servers
### Naming Conventions
- Use lowercase with hyphens: `test-specialist.agent.md`
- Name should reflect agent purpose
- Filename becomes default agent name (if `name` not specified)
- Allowed characters: `.`, `-`, `_`, `a-z`, `A-Z`, `0-9`
## Agent Processing and Behavior
### Versioning
- Based on Git commit SHAs for the agent file
- Create branches/tags for different agent versions
- Instantiated using latest version for repository/branch
- PR interactions use same agent version for consistency
### Name Conflicts
Priority (highest to lowest):
1. Repository-level agent
2. Organization-level agent
3. Enterprise-level agent
Lower-level configurations override higher-level ones with the same name.
### Tool Processing
- `tools` list filters available tools (built-in and MCP)
- No tools specified = all tools enabled
- Empty list (`[]`) = all tools disabled
- Specific list = only those tools enabled
- Unrecognized tool names are ignored (allows environment-specific tools)
### MCP Server Processing Order
1. Out-of-the-box MCP servers (e.g., GitHub MCP)
2. Custom agent MCP configuration (org/enterprise only)
3. Repository-level MCP configurations
Each level can override settings from previous levels.
## Agent Creation Checklist
### Frontmatter
- [ ] `description` field present and descriptive (50-150 chars)
- [ ] `description` wrapped in single quotes
- [ ] `name` specified (optional but recommended)
- [ ] `tools` configured appropriately (or intentionally omitted)
- [ ] `model` specified for optimal performance
- [ ] `target` set if environment-specific
- [ ] `infer` set to `false` if manual selection required
### Prompt Content
- [ ] Clear agent identity and role defined
- [ ] Core responsibilities listed explicitly
- [ ] Approach and methodology explained
- [ ] Guidelines and constraints specified
- [ ] Output expectations documented
- [ ] Examples provided where helpful
- [ ] Instructions are specific and actionable
- [ ] Scope and boundaries clearly defined
- [ ] Total content under 30,000 characters
### File Structure
- [ ] Filename follows lowercase-with-hyphens convention
- [ ] File placed in correct directory (`.github/agents/` or `agents/`)
- [ ] Filename uses only allowed characters
- [ ] File extension is `.agent.md`
### Quality Assurance
- [ ] Agent purpose is unique and not duplicative
- [ ] Tools are minimal and necessary
- [ ] Instructions are clear and unambiguous
- [ ] Agent has been tested with representative tasks
- [ ] Documentation references are current
- [ ] Security considerations addressed (if applicable)
## Common Agent Patterns
### Testing Specialist
**Purpose**: Focus on test coverage and quality
**Tools**: All tools (for comprehensive test creation)
**Approach**: Analyze, identify gaps, write tests, avoid production code changes
### Implementation Planner
**Purpose**: Create detailed technical plans and specifications
**Tools**: Limited to `['read', 'search', 'edit']`
**Approach**: Analyze requirements, create documentation, avoid implementation
### Code Reviewer
**Purpose**: Review code quality and provide feedback
**Tools**: `['read', 'search']` only
**Approach**: Analyze, suggest improvements, no direct modifications
### Refactoring Specialist
**Purpose**: Improve code structure and maintainability
**Tools**: `['read', 'search', 'edit']`
**Approach**: Analyze patterns, propose refactorings, implement safely
### Security Auditor
**Purpose**: Identify security issues and vulnerabilities
**Tools**: `['read', 'search', 'web']`
**Approach**: Scan code, check against OWASP, report findings
## Common Mistakes to Avoid
### Frontmatter Errors
- ❌ Missing `description` field
- ❌ Description not wrapped in quotes
- ❌ Invalid tool names without checking documentation
- ❌ Incorrect YAML syntax (indentation, quotes)
### Tool Configuration Issues
- ❌ Granting excessive tool access unnecessarily
- ❌ Missing required tools for agent's purpose
- ❌ Not using tool aliases consistently
- ❌ Forgetting MCP server namespace (`server-name/tool`)
### Prompt Content Problems
- ❌ Vague, ambiguous instructions
- ❌ Conflicting or contradictory guidelines
- ❌ Lack of clear scope definition
- ❌ Missing output expectations
- ❌ Overly verbose instructions (exceeding character limits)
- ❌ No examples or context for complex tasks
### Organizational Issues
- ❌ Filename doesn't reflect agent purpose
- ❌ Wrong directory (confusing repo vs org level)
- ❌ Using spaces or special characters in filename
- ❌ Duplicate agent names causing conflicts
## Testing and Validation
### Manual Testing
1. Create the agent file with proper frontmatter
2. Reload VS Code or refresh GitHub.com
3. Select the agent from the dropdown in Copilot Chat
4. Test with representative user queries
5. Verify tool access works as expected
6. Confirm output meets expectations
### Integration Testing
- Test agent with different file types in scope
- Verify MCP server connectivity (if configured)
- Check agent behavior with missing context
- Test error handling and edge cases
- Validate agent switching and handoffs
### Quality Checks
- Run through agent creation checklist
- Review against common mistakes list
- Compare with example agents in repository
- Get peer review for complex agents
- Document any special configuration needs
## Additional Resources
### Official Documentation
- [Creating Custom Agents](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/create-custom-agents)
- [Custom Agents Configuration](https://docs.github.com/en/copilot/reference/custom-agents-configuration)
- [Custom Agents in VS Code](https://code.visualstudio.com/docs/copilot/customization/custom-agents)
- [MCP Integration](https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/extend-coding-agent-with-mcp)
### Community Resources
- [Awesome Copilot Agents Collection](https://github.com/github/awesome-copilot/tree/main/agents)
- [Customization Library Examples](https://docs.github.com/en/copilot/tutorials/customization-library/custom-agents)
- [Your First Custom Agent Tutorial](https://docs.github.com/en/copilot/tutorials/customization-library/custom-agents/your-first-custom-agent)
### Related Files
- [Prompt Files Guidelines](./prompt.instructions.md) - For creating prompt files
- [Instructions Guidelines](./instructions.instructions.md) - For creating instruction files
## Version Compatibility Notes
### GitHub.com (Coding Agent)
- ✅ Fully supports all standard frontmatter properties
- ✅ Repository and org/enterprise level agents
- ✅ MCP server configuration (org/enterprise)
- ❌ Does not support `model`, `argument-hint`, `handoffs` properties
### VS Code / JetBrains / Eclipse / Xcode
- ✅ Supports `model` property for AI model selection
- ✅ Supports `argument-hint` and `handoffs` properties
- ✅ User profile and workspace-level agents
- ❌ Cannot configure MCP servers at repository level
- ⚠️ Some properties may behave differently
When creating agents for multiple environments, focus on common properties and test in all target environments. Use `target` property to create environment-specific agents when necessary.


@@ -0,0 +1,187 @@
---
description: 'Best practices for Azure DevOps Pipeline YAML files'
applyTo: '**/azure-pipelines.yml, **/azure-pipelines*.yml, **/*.pipeline.yml'
---
# Azure DevOps Pipeline YAML Best Practices
Guidelines for creating maintainable, secure, and efficient Azure DevOps pipelines in PowerToys.
## General Guidelines
- Use YAML syntax consistently with proper indentation (2 spaces)
- Always include meaningful names and display names for pipelines, stages, jobs, and steps
- Implement proper error handling and conditional execution
- Use variables and parameters to make pipelines reusable and maintainable
- Follow the principle of least privilege for service connections and permissions
- Include comprehensive logging and diagnostics for troubleshooting
## Pipeline Structure
- Organize complex pipelines using stages for better visualization and control
- Use jobs to group related steps and enable parallel execution when possible
- Implement proper dependencies between stages and jobs
- Use templates for reusable pipeline components
- Keep pipeline files focused and modular - split large pipelines into multiple files
## Build Best Practices
- Use specific agent pool versions and VM images for consistency
- Cache dependencies (npm, NuGet, Maven, etc.) to improve build performance
- Implement proper artifact management with meaningful names and retention policies
- Use build variables for version numbers and build metadata
- Include code quality gates (lint checks, testing, security scans)
- Ensure builds are reproducible and environment-independent
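As one sketch of the caching advice above, a NuGet cache step could look like the following; the cache key, lock-file pattern, and package path are assumptions to adapt per project:

```yaml
variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
  - task: Cache@2
    displayName: 'Cache NuGet packages'
    inputs:
      key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'   # assumes lock files are committed
      restoreKeys: |
        nuget | "$(Agent.OS)"
      path: '$(NUGET_PACKAGES)'
```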
## Testing Integration
- Run unit tests as part of the build process
- Publish test results in standard formats (JUnit, VSTest, etc.)
- Include code coverage reporting and quality gates
- Implement integration and end-to-end tests in appropriate stages
- Use test impact analysis when available to optimize test execution
- Fail fast on test failures to provide quick feedback
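For example, publishing results from an earlier test step might look like this sketch; the `.trx` pattern and search folder assume the test task writes results to the agent temp directory:

```yaml
- task: PublishTestResults@2
  displayName: 'Publish test results'
  condition: succeededOrFailed()   # publish results even when tests fail
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/*.trx'
    searchFolder: '$(Agent.TempDirectory)'
    failTaskOnFailedTests: true
```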
## Security Considerations
- Use Azure Key Vault for sensitive configuration and secrets
- Implement proper secret management with variable groups
- Use service connections with minimal required permissions
- Enable security scans (dependency vulnerabilities, static analysis)
- Implement approval gates for production deployments
- Use managed identities when possible instead of service principals
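As an illustration of the Key Vault guidance above, fetching secrets at runtime could look like this sketch; the service connection, vault, and secret names are placeholders:

```yaml
- task: AzureKeyVault@2
  displayName: 'Fetch deployment secrets'
  inputs:
    azureSubscription: 'staging-service-connection'  # placeholder service connection
    KeyVaultName: 'contoso-pipeline-kv'              # placeholder vault name
    SecretsFilter: 'SigningCertPassword,ApiKey'      # fetch only the secrets this run needs
    RunAsPreJob: false
```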
## Deployment Strategies
- Implement proper environment promotion (dev → staging → production)
- Use deployment jobs with proper environment targeting
- Implement blue-green or canary deployment strategies when appropriate
- Include rollback mechanisms and health checks
- Use infrastructure as code (ARM, Bicep, Terraform) for consistent deployments
- Implement proper configuration management per environment
## Variable and Parameter Management
- Use variable groups for shared configuration across pipelines
- Implement runtime parameters for flexible pipeline execution
- Use conditional variables based on branches or environments
- Secure sensitive variables and mark them as secrets
- Document variable purposes and expected values
- Use variable templates for complex variable logic
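A sketch of a runtime parameter driving a conditional variable group; the parameter and group names are illustrative:

```yaml
parameters:
  - name: targetEnvironment
    displayName: 'Target environment'
    type: string
    default: 'staging'
    values:
      - 'staging'
      - 'production'

variables:
  - group: shared-variables
  # Only load production secrets when explicitly requested at queue time.
  - ${{ if eq(parameters.targetEnvironment, 'production') }}:
      - group: production-variables
```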
## Performance Optimization
- Use parallel jobs and matrix strategies when appropriate
- Implement proper caching strategies for dependencies and build outputs
- Use shallow clone for Git operations when full history isn't needed
- Optimize Docker image builds with multi-stage builds and layer caching
- Monitor pipeline performance and optimize bottlenecks
- Use pipeline resource triggers efficiently
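For example, a shallow clone can be requested directly on the checkout step, as in this sketch; keep `submodules` only if the repository actually needs it:

```yaml
steps:
  - checkout: self
    fetchDepth: 1          # shallow clone: skip full history when the build does not need it
    clean: true
    submodules: recursive  # assumption: repository uses submodules; drop otherwise
```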
## Monitoring and Observability
- Include comprehensive logging throughout the pipeline
- Use Azure Monitor and Application Insights for deployment tracking
- Implement proper notification strategies for failures and successes
- Include deployment health checks and automated rollback triggers
- Use pipeline analytics to identify improvement opportunities
- Document pipeline behavior and troubleshooting steps
## Template and Reusability
- Create pipeline templates for common patterns
- Use extends templates for complete pipeline inheritance
- Implement step templates for reusable task sequences
- Use variable templates for complex variable logic
- Version templates appropriately for stability
- Document template parameters and usage examples
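As a sketch, a reusable step template and its consumer might look like this; the template path and parameter name are illustrative:

```yaml
# templates/build-steps.yml (illustrative path)
parameters:
  - name: buildConfiguration
    type: string
    default: 'Release'

steps:
  - task: DotNetCoreCLI@2
    displayName: 'Build (${{ parameters.buildConfiguration }})'
    inputs:
      command: 'build'
      projects: '**/*.csproj'
      arguments: '--configuration ${{ parameters.buildConfiguration }}'
```

The calling pipeline then references it:

```yaml
steps:
  - template: templates/build-steps.yml
    parameters:
      buildConfiguration: 'Release'
```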
## Branch and Trigger Strategy
- Implement appropriate triggers for different branch types
- Use path filters to trigger builds only when relevant files change
- Configure proper CI/CD triggers for main/master branches
- Use pull request triggers for code validation
- Implement scheduled triggers for maintenance tasks
- Consider resource triggers for multi-repository scenarios
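For GitHub-hosted repositories, PR validation and scheduled maintenance triggers can be declared in YAML, as in this sketch (branch names, path filters, and the cron expression are assumptions):

```yaml
pr:
  branches:
    include:
      - main
  paths:
    exclude:
      - doc/*
      - '**/*.md'

schedules:
  - cron: '0 3 * * 1'
    displayName: 'Weekly maintenance build'
    branches:
      include:
        - main
    always: false   # only run when there are new changes
```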
## Example Structure
```yaml
# azure-pipelines.yml
trigger:
  branches:
    include:
      - main
      - develop
  paths:
    exclude:
      - docs/*
      - README.md

variables:
  - group: shared-variables
  - name: buildConfiguration
    value: 'Release'

stages:
  - stage: Build
    displayName: 'Build and Test'
    jobs:
      - job: Build
        displayName: 'Build Application'
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: UseDotNet@2
            displayName: 'Use .NET SDK'
            inputs:
              version: '8.x'
          - task: DotNetCoreCLI@2
            displayName: 'Restore dependencies'
            inputs:
              command: 'restore'
              projects: '**/*.csproj'
          - task: DotNetCoreCLI@2
            displayName: 'Build application'
            inputs:
              command: 'build'
              projects: '**/*.csproj'
              arguments: '--configuration $(buildConfiguration) --no-restore'

  - stage: Deploy
    displayName: 'Deploy to Staging'
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: DeployToStaging
        displayName: 'Deploy to Staging Environment'
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  displayName: 'Download drop artifact'
                  artifact: drop
                - task: AzureWebApp@1
                  displayName: 'Deploy to Azure Web App'
                  inputs:
                    azureSubscription: 'staging-service-connection'
                    appType: 'webApp'
                    appName: 'myapp-staging'
                    package: '$(Pipeline.Workspace)/drop/**/*.zip'
```
## Common Anti-Patterns to Avoid
- Hardcoding sensitive values directly in YAML files
- Using overly broad triggers that cause unnecessary builds
- Mixing build and deployment logic in a single stage
- Not implementing proper error handling and cleanup
- Using deprecated task versions without upgrade plans
- Creating monolithic pipelines that are difficult to maintain
- Not using proper naming conventions for clarity
- Ignoring pipeline security best practices


@@ -0,0 +1,61 @@
---
description: 'Guidelines for shared libraries including logging, IPC, settings, DPI, telemetry, and utilities consumed by multiple modules'
applyTo: 'src/common/**'
---
# Common Libraries Shared Code Guidance
Guidelines for modifying shared code in `src/common/`. Changes here can have wide-reaching impact across the entire PowerToys codebase.
## Scope
- Logging infrastructure (`src/common/logger/`)
- IPC primitives and named pipe utilities
- Settings serialization and management
- DPI awareness and scaling utilities
- Telemetry helpers
- General utilities (JSON parsing, string helpers, etc.)
## Guidelines
### API Stability
- Avoid breaking public headers/APIs; if changed, search & update all callers
- Coordinate ABI-impacting struct/class layout changes; keep binary compatibility
- When modifying public interfaces, grep the entire codebase for usages
### Performance
- Watch perf in hot paths (hooks, timers, serialization)
- Avoid avoidable allocations in frequently called code
- Profile changes that touch performance-sensitive areas
### Dependencies
- Ask before adding third-party deps or changing serialization formats
- New dependencies must be MIT-licensed or approved by PM team
- Add any new external packages to `NOTICE.md`
### Logging
- C++ logging uses spdlog (`Logger::info`, `Logger::warn`, `Logger::error`, `Logger::debug`)
- Initialize with `init_logger()` early in startup
- Keep hot paths quiet: no logging in tight loops or hooks
## Acceptance Criteria
- No unintended ABI breaks
- No noisy logs in hot paths
- New non-obvious symbols briefly commented
- All callers updated when interfaces change
## Code Style
- **C++**: Follow `.clang-format` in `src/`; use Modern C++ patterns per C++ Core Guidelines
- **C#**: Follow `src/.editorconfig`; enforce StyleCop.Analyzers
## Validation
- Build: `tools\build\build.cmd` from `src/common/` folder
- Verify no ABI breaks: grep for changed function/struct names across codebase
- Check logs: ensure no new logging in performance-critical paths


@@ -0,0 +1,256 @@
---
description: 'Guidelines for creating high-quality custom instruction files for GitHub Copilot'
applyTo: '**/*.instructions.md'
---
# Custom Instructions File Guidelines
Instructions for creating effective and maintainable custom instruction files that guide GitHub Copilot in generating domain-specific code and following project conventions.
## Project Context
- Target audience: Developers and GitHub Copilot working with domain-specific code
- File format: Markdown with YAML frontmatter
- File naming convention: lowercase with hyphens (e.g., `react-best-practices.instructions.md`)
- Location: `.github/instructions/` directory
- Purpose: Provide context-aware guidance for code generation, review, and documentation
## Required Frontmatter
Every instruction file must include YAML frontmatter with the following fields:
```yaml
---
description: 'Brief description of the instruction purpose and scope'
applyTo: 'glob pattern for target files (e.g., **/*.ts, **/*.py)'
---
```
### Frontmatter Guidelines
- **description**: Single-quoted string, 1-500 characters, clearly stating the purpose
- **applyTo**: Glob pattern(s) specifying which files these instructions apply to
- Single pattern: `'**/*.ts'`
- Multiple patterns: `'**/*.ts, **/*.tsx, **/*.js'`
- Specific files: `'src/**/*.py'`
- All files: `'**'`
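For instance, a frontmatter block targeting several related file types might look like this sketch (the description and patterns are illustrative):

```yaml
---
description: 'TypeScript and React component guidelines'
applyTo: '**/*.ts, **/*.tsx'
---
```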
## File Structure
A well-structured instruction file should include the following sections:
### 1. Title and Overview
- Clear, descriptive title using `#` heading
- Brief introduction explaining the purpose and scope
- Optional: Project context section with key technologies and versions
### 2. Core Sections
Organize content into logical sections based on the domain:
- **General Instructions**: High-level guidelines and principles
- **Best Practices**: Recommended patterns and approaches
- **Code Standards**: Naming conventions, formatting, style rules
- **Architecture/Structure**: Project organization and design patterns
- **Common Patterns**: Frequently used implementations
- **Security**: Security considerations (if applicable)
- **Performance**: Optimization guidelines (if applicable)
- **Testing**: Testing standards and approaches (if applicable)
### 3. Examples and Code Snippets
Provide concrete examples with clear labels:
```markdown
### Good Example
\`\`\`language
// Recommended approach
code example here
\`\`\`
### Bad Example
\`\`\`language
// Avoid this pattern
code example here
\`\`\`
```
### 4. Validation and Verification (Optional but Recommended)
- Build commands to verify code
- Lint checks and formatting tools
- Testing requirements
- Verification steps
## Content Guidelines
### Writing Style
- Use clear, concise language
- Write in imperative mood ("Use", "Implement", "Avoid")
- Be specific and actionable
- Avoid ambiguous terms like "should", "might", "possibly"
- Use bullet points and lists for readability
- Keep sections focused and scannable
### Best Practices
- **Be Specific**: Provide concrete examples rather than abstract concepts
- **Show Why**: Explain the reasoning behind recommendations when it adds value
- **Use Tables**: For comparing options, listing rules, or showing patterns
- **Include Examples**: Real code snippets are more effective than descriptions
- **Stay Current**: Reference current versions and best practices
- **Link Resources**: Include official documentation and authoritative sources
### Common Patterns to Include
1. **Naming Conventions**: How to name variables, functions, classes, files
2. **Code Organization**: File structure, module organization, import order
3. **Error Handling**: Preferred error handling patterns
4. **Dependencies**: How to manage and document dependencies
5. **Comments and Documentation**: When and how to document code
6. **Version Information**: Target language/framework versions
## Patterns to Follow
### Bullet Points and Lists
```markdown
## Security Best Practices
- Always validate user input before processing
- Use parameterized queries to prevent SQL injection
- Store secrets in environment variables, never in code
- Implement proper authentication and authorization
- Enable HTTPS for all production endpoints
```
### Tables for Structured Information
```markdown
## Common Issues
| Issue | Solution | Example |
| ---------------- | ------------------- | ----------------------------- |
| Magic numbers | Use named constants | `const MAX_RETRIES = 3` |
| Deep nesting | Extract functions | Refactor nested if statements |
| Hardcoded values | Use configuration | Store API URLs in config |
```
### Code Comparison
```markdown
### Good Example - Using TypeScript interfaces
\`\`\`typescript
interface User {
id: string;
name: string;
email: string;
}
function getUser(id: string): User {
// Implementation
}
\`\`\`
### Bad Example - Using any type
\`\`\`typescript
function getUser(id: any): any {
// Loses type safety
}
\`\`\`
```
### Conditional Guidance
```markdown
## Framework Selection
- **For small projects**: Use Minimal API approach
- **For large projects**: Use controller-based architecture with clear separation
- **For microservices**: Consider domain-driven design patterns
```
## Patterns to Avoid
- **Overly verbose explanations**: Keep it concise and scannable
- **Outdated information**: Always reference current versions and practices
- **Ambiguous guidelines**: Be specific about what to do or avoid
- **Missing examples**: Abstract rules without concrete code examples
- **Contradictory advice**: Ensure consistency throughout the file
- **Copy-paste from documentation**: Add value by distilling and providing context
## Testing Your Instructions
Before finalizing instruction files:
1. **Test with Copilot**: Try the instructions with actual prompts in VS Code
2. **Verify Examples**: Ensure code examples are correct and run without errors
3. **Check Glob Patterns**: Confirm `applyTo` patterns match intended files
## Example Structure
Here's a minimal example structure for a new instruction file:
```markdown
---
description: 'Brief description of purpose'
applyTo: '**/*.ext'
---
# Technology Name Development
Brief introduction and context.
## General Instructions
- High-level guideline 1
- High-level guideline 2
## Best Practices
- Specific practice 1
- Specific practice 2
## Code Standards
### Naming Conventions
- Rule 1
- Rule 2
### File Organization
- Structure 1
- Structure 2
## Common Patterns
### Pattern 1
Description and example
\`\`\`language
code example
\`\`\`
### Pattern 2
Description and example
## Validation
- Build command: `command to verify`
- Lint checks: `command to lint`
- Testing: `command to test`
```
## Maintenance
- Review instructions when dependencies or frameworks are updated
- Update examples to reflect current best practices
- Remove outdated patterns or deprecated features
- Add new patterns as they emerge in the community
- Keep glob patterns accurate as project structure evolves
## Additional Resources
- [Custom Instructions Documentation](https://code.visualstudio.com/docs/copilot/customization/custom-instructions)
- [Awesome Copilot Instructions](https://github.com/github/awesome-copilot/tree/main/instructions)


@@ -0,0 +1,88 @@
---
description: 'Guidelines for creating high-quality prompt files for GitHub Copilot'
applyTo: '**/*.prompt.md'
---
# Copilot Prompt Files Guidelines
Instructions for creating effective and maintainable prompt files that guide GitHub Copilot in delivering consistent, high-quality outcomes across any repository.
## Scope and Principles
- Target audience: maintainers and contributors authoring reusable prompts for Copilot Chat.
- Goals: predictable behaviour, clear expectations, minimal permissions, and portability across repositories.
- Primary references: VS Code documentation on prompt files and organization-specific conventions.
## Frontmatter Requirements
Every prompt file should include YAML frontmatter with the following fields:
### Required/Recommended Fields
| Field | Required | Description |
|-------|----------|-------------|
| `description` | Recommended | A short description of the prompt (single sentence, actionable outcome) |
| `name` | Optional | The name shown after typing `/` in chat. Defaults to filename if not specified |
| `agent` | Recommended | The agent to use: `ask`, `edit`, `agent`, or a custom agent name. Defaults to current agent |
| `model` | Optional | The language model to use. Defaults to currently selected model |
| `tools` | Optional | List of tool/tool set names available for this prompt |
| `argument-hint` | Optional | Hint text shown in chat input to guide user interaction |
### Guidelines
- Use consistent quoting (single quotes recommended) and keep one field per line for readability and version control clarity
- If `tools` are specified and the current agent is `ask` or `edit`, the default agent becomes `agent`
- Preserve any additional metadata (`language`, `tags`, `visibility`, etc.) required by your organization
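A minimal frontmatter sketch combining these fields; the model name, tool list, and hint text are illustrative:

```yaml
---
description: 'Generate a README for the selected project folder'
name: 'generate-readme'
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
tools: ['read', 'search', 'edit']
argument-hint: 'Optional: path to the folder to document'
---
```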
## File Naming and Placement
- Use kebab-case filenames ending with `.prompt.md` and store them under `.github/prompts/` unless your workspace standard specifies another directory.
- Provide a short filename that communicates the action (for example, `generate-readme.prompt.md` rather than `prompt1.prompt.md`).
## Body Structure
- Start with an `#` level heading that matches the prompt intent so it surfaces well in Quick Pick search.
- Organize content with predictable sections. Recommended baseline: `Mission` or `Primary Directive`, `Scope & Preconditions`, `Inputs`, `Workflow` (step-by-step), `Output Expectations`, and `Quality Assurance`.
- Adjust section names to fit the domain, but retain the logical flow: why → context → inputs → actions → outputs → validation.
- Reference related prompts or instruction files using relative links to aid discoverability.
## Input and Context Handling
- Use `${input:variableName[:placeholder]}` for required values and explain when the user must supply them. Provide defaults or alternatives where possible.
- Call out contextual variables such as `${selection}`, `${file}`, `${workspaceFolder}` only when they are essential, and describe how Copilot should interpret them.
- Document how to proceed when mandatory context is missing (for example, “Request the file path and stop if it remains undefined”).
## Tool and Permission Guidance
- Limit `tools` to the smallest set that enables the task. List them in the preferred execution order when the sequence matters.
- If the prompt inherits tools from a chat mode, mention that relationship and state any critical tool behaviours or side effects.
- Warn about destructive operations (file creation, edits, terminal commands) and include guard rails or confirmation steps in the workflow.
## Instruction Tone and Style
- Write in direct, imperative sentences targeted at Copilot (for example, “Analyze”, “Generate”, “Summarize”).
- Keep sentences short and unambiguous, following Google Developer Documentation translation best practices to support localization.
- Avoid idioms, humor, or culturally specific references; favor neutral, inclusive language.
## Output Definition
- Specify the format, structure, and location of expected results (for example, “Create an architecture decision record file using the template below, such as `docs/architecture-decisions/record-XXXX.md`”).
- Include success criteria and failure triggers so Copilot knows when to halt or retry.
- Provide validation steps—manual checks, automated commands, or acceptance criteria lists—that reviewers can execute after running the prompt.
## Examples and Reusable Assets
- Embed Good/Bad examples or scaffolds (Markdown templates, JSON stubs) that the prompt should produce or follow.
- Maintain reference tables (capabilities, status codes, role descriptions) inline to keep the prompt self-contained. Update these tables when upstream resources change.
- Link to authoritative documentation instead of duplicating lengthy guidance.
## Quality Assurance Checklist
- [ ] Frontmatter fields are complete, accurate, and least-privilege.
- [ ] Inputs include placeholders, default behaviours, and fallbacks.
- [ ] Workflow covers preparation, execution, and post-processing without gaps.
- [ ] Output expectations include formatting and storage details.
- [ ] Validation steps are actionable (commands, diff checks, review prompts).
- [ ] Security, compliance, and privacy policies referenced by the prompt are current.
- [ ] Prompt executes successfully in VS Code (`Chat: Run Prompt`) using representative scenarios.
## Maintenance Guidance
- Version-control prompts alongside the code they affect; update them when dependencies, tooling, or review processes change.
- Review prompts periodically to ensure tool lists, model requirements, and linked documents remain valid.
- Coordinate with other repositories: when a prompt proves broadly useful, extract common guidance into instruction files or shared prompt packs.
## Additional Resources
- [Prompt Files Documentation](https://code.visualstudio.com/docs/copilot/customization/prompt-files#_prompt-file-format)
- [Awesome Copilot Prompt Files](https://github.com/github/awesome-copilot/tree/main/prompts)
- [Tool Configuration](https://code.visualstudio.com/docs/copilot/chat/chat-agent-mode#_agent-mode-tools)


@@ -0,0 +1,68 @@
---
description: 'Guidelines for Runner and Settings UI components that communicate via named pipes and manage module lifecycle'
applyTo: 'src/runner/**,src/settings-ui/**'
---
# Runner & Settings UI Core Components Guidance
Guidelines for modifying the Runner (tray/module loader) and Settings UI (configuration app). These components communicate via Windows Named Pipes using JSON messages.
## Runner (`src/runner/`)
### Scope
- Module bootstrap, hotkey management, settings bridge, update/elevation handling
### Guidelines
- If IPC/JSON contracts change, mirror updates in `src/settings-ui/**`
- Keep module discovery in `src/runner/main.cpp` in sync when adding/removing modules
- Keep startup lean: avoid blocking/network calls in early init path
- Preserve GPO & elevation behaviors; confirm no regression in policy handling
- Ask before modifying update workflow or elevation logic
### Acceptance Criteria
- Stable startup, consistent contracts, no unnecessary logging noise
## Settings UI (`src/settings-ui/`)
### Scope
- WinUI/WPF UI, communicates with Runner over named pipes; manages persisted settings schema
### Guidelines
- Don't break settings schema silently; add migration when shape changes
- If IPC/JSON contracts change, align with `src/runner/**` implementation
- Keep UI responsive: marshal to UI thread for UI-bound operations
- Reuse existing styles/resources; avoid duplicate theme keys
- Add/adjust migration or serialization tests when changing persisted settings
### Acceptance Criteria
- Schema integrity preserved, responsive UI, consistent contracts, no style duplication
## Shared Concerns
### IPC Contract Changes
When modifying the JSON message format between Runner and Settings UI:
1. Update both `src/runner/` and `src/settings-ui/` in the same PR
2. Preserve backward compatibility where possible
3. Add migration logic for settings schema changes
4. Test both directions of communication
### Code Style
- **C++ (Runner)**: Follow `.clang-format` in `src/`
- **C# (Settings UI)**: Follow `src/.editorconfig`, use StyleCop.Analyzers
- **XAML**: Use XamlStyler or run `.\.pipelines\applyXamlStyling.ps1 -Main`
## Validation
- Build Runner: `tools\build\build.cmd` from `src/runner/`
- Build Settings UI: `tools\build\build.cmd` from `src/settings-ui/`
- Test IPC: Launch both Runner and Settings UI, verify communication works
- Schema changes: Run serialization tests if settings shape changed


@@ -1,7 +1,10 @@
agent: 'agent'
model: GPT-5.1-Codex-Max
description: 'Generate an 80-character git commit title for the local diff.'
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Generate an 80-character git commit title for the local diff'
---
# Generate Commit Title
**Goal:** Provide a ready-to-paste git commit title (<= 80 characters) that captures the most important local changes since `HEAD`.


@@ -1,7 +1,10 @@
agent: 'agent'
model: GPT-5.1-Codex-Max
description: 'Generate a PowerToys-ready pull request description from the local diff.'
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Generate a PowerToys-ready pull request description from the local diff'
---
# Generate PR Summary
**Goal:** Produce a ready-to-paste PR title and description that follows PowerToys conventions by comparing the current branch against a user-selected target branch.


@@ -1,9 +1,12 @@
---
agent: 'agent'
model: GPT-5.1-Codex-Max
description: "Execute the fix for a GitHub issue using the previously generated implementation plan. Apply code & tests directly in the repo. Output only a PR description (and optional manual steps)."
model: 'GPT-5.1-Codex-Max'
description: 'Execute the fix for a GitHub issue using the previously generated implementation plan'
---
# DEPENDENCY
# Fix GitHub Issue
## Dependencies
Source review prompt (for generating the implementation plan if missing):
- .github/prompts/review-issue.prompt.md


@@ -1,7 +1,10 @@
agent: 'agent'
model: GPT-5.1-Codex-Max
description: 'Resolve Code scanning / check-spelling comments on the active PR.'
---
agent: 'agent'
model: 'GPT-5.1-Codex-Max'
description: 'Resolve Code scanning / check-spelling comments on the active PR'
---
# Fix Spelling Comments
**Goal:** Clear every outstanding GitHub pull request comment created by the `Code scanning / check-spelling` workflow by explicitly allowing intentional terms.


@@ -1,9 +1,12 @@
---
agent: 'agent'
model: GPT-5.1-Codex-Max
description: "You are a GitHub issue review and planning expert; score (0-100) and write one implementation plan. Outputs: overview.md, implementation-plan.md."
model: 'GPT-5.1-Codex-Max'
description: 'Review a GitHub issue, score it (0-100), and generate an implementation plan'
---
# GOAL
# Review GitHub Issue
## Goal
For **#{{issue_number}}** produce:
1) `Generated Files/issueReview/{{issue_number}}/overview.md`
2) `Generated Files/issueReview/{{issue_number}}/implementation-plan.md`

View File

@@ -1,9 +1,10 @@
---
agent: 'agent'
model: GPT-5.1-Codex-Max
description: "gh-driven PR review; per-step Markdown + machine-readable outputs"
model: 'GPT-5.1-Codex-Max'
description: 'Perform a comprehensive PR review with per-step Markdown and machine-readable outputs'
---
# PR Review — gh + stepwise
# Review Pull Request
**Goal**: Given `{{pr_number}}`, run a *one-topic-per-step* review. Write files to `Generated Files/prReview/{{pr_number}}/` (replace `{{pr_number}}` with the integer). Emit machine-readable blocks for a GitHub MCP to post review comments.

165
AGENTS.md Normal file
View File

@@ -0,0 +1,165 @@
---
description: 'Top-level AI contributor guidance for developing PowerToys - a collection of Windows productivity utilities'
applyTo: '**'
---
# PowerToys AI Contributor Guide
This is the top-level guidance for AI contributions to PowerToys. Keep changes atomic, follow existing patterns, and cite exact paths in PRs.
## Overview
PowerToys is a set of utilities for power users to tune and streamline their Windows experience.
| Area | Location | Description |
|------|----------|-------------|
| Runner | `src/runner/` | Main executable, tray icon, module loader, hotkey management |
| Settings UI | `src/settings-ui/` | WinUI/WPF configuration app communicating via named pipes |
| Modules | `src/modules/` | Individual PowerToys utilities (each in its own subfolder) |
| Common Libraries | `src/common/` | Shared code: logging, IPC, settings, DPI, telemetry, utilities |
| Build Tools | `tools/build/` | Build scripts and automation |
| Documentation | `doc/devdocs/` | Developer documentation |
| Installer | `installer/` | WiX-based installer projects |
For architecture details and module types, see [Architecture Overview](doc/devdocs/core/architecture.md).
## Conventions
For detailed coding conventions, see:
- [Coding Guidelines](doc/devdocs/development/guidelines.md) – Dependencies, testing, PR management
- [Coding Style](doc/devdocs/development/style.md) – Formatting, C++/C#/XAML style rules
- [Logging](doc/devdocs/development/logging.md) – C++ spdlog and C# Logger usage
### Component-Specific Instructions
These instruction files are automatically applied when working in their respective areas:
- [Runner & Settings UI](.github/instructions/runner-settings-ui.instructions.md) – IPC contracts, schema migrations
- [Common Libraries](.github/instructions/common-libraries.instructions.md) – ABI stability, shared code guidelines
## Build
### Prerequisites
- Visual Studio 2022 17.4+
- Windows 10 1803+ (April 2018 Update or newer)
- Initialize submodules once: `git submodule update --init --recursive`
### Build Commands
| Task | Command |
|------|---------|
| First build / NuGet restore | `tools\build\build-essentials.cmd` |
| Build current folder | `tools\build\build.cmd` |
| Build with options | `build.ps1 -Platform x64 -Configuration Release` |
### Build Discipline
1. One terminal per operation (build → test). Do not switch or open new ones mid-flow
2. After making changes, `cd` to the project folder that changed (`.csproj`/`.vcxproj`)
3. Use scripts to build: `tools/build/build.ps1` or `tools/build/build.cmd`
4. For first build or missing NuGet packages, run `build-essentials.cmd` first
5. **Exit code 0 = success; non-zero = failure** – treat this as absolute
6. On failure, read the errors log: `build.<config>.<platform>.errors.log`
7. Do not start tests or launch Runner until the build succeeds
### Build Logs
Located next to the solution/project being built:
- `build.<configuration>.<platform>.errors.log` – errors only (check this first)
- `build.<configuration>.<platform>.all.log` – full log
- `build.<configuration>.<platform>.trace.binlog` – for MSBuild Structured Log Viewer
For complete details, see [Build Guidelines](tools/build/BUILD-GUIDELINES.md).
## Tests
### Test Discovery
- Find test projects by product code prefix (e.g., `FancyZones`, `AdvancedPaste`)
- Look for sibling folders or 1-2 levels up named `<Product>*UnitTests` or `<Product>*UITests`
### Running Tests
1. **Build the test project first**, wait for exit code 0
2. Run via VS Test Explorer (`Ctrl+E, T`) or `vstest.console.exe` with filters
3. **Avoid `dotnet test`** in this repo – use VS Test Explorer or vstest.console.exe
### Test Types
| Type | Requirements | Setup |
|------|--------------|-------|
| Unit Tests | Standard dev environment | None |
| UI Tests | WinAppDriver v1.2.1, Developer Mode | Install from [WinAppDriver releases](https://github.com/microsoft/WinAppDriver/releases/tag/v1.2.1) |
| Fuzz Tests | OneFuzz, .NET 8 | See [Fuzzing Tests](doc/devdocs/tools/fuzzingtesting.md) |
### Test Discipline
1. Add or adjust tests when changing behavior
2. If tests are skipped, state why (e.g., comment-only change, string rename)
3. New modules handling file I/O or user input **must** implement fuzzing tests
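When adding a test for changed behavior, a minimal MSTest-style sketch looks like the one below. The `Parser` class and test names are hypothetical and exist only so the example is self-contained; real tests belong in the matching `<Product>*UnitTests` project and follow its existing patterns.

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ParserTests
{
    [TestMethod]
    public void Parse_EmptyInput_ReturnsNoEntries()
    {
        List<string> entries = Parser.Parse(string.Empty);

        Assert.IsNotNull(entries);
        Assert.AreEqual(0, entries.Count);
    }
}

// Hypothetical class under test, included only to make the sketch compile on its own.
public static class Parser
{
    public static List<string> Parse(string input) =>
        string.IsNullOrEmpty(input) ? new List<string>() : new List<string>(input.Split(','));
}
```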
### Special Requirements
- **Mouse Without Borders**: Requires 2+ physical computers (not VMs)
- **Multi-monitor utilities**: Test with 2+ monitors, different DPI settings
For UI test setup details, see [UI Tests](doc/devdocs/development/ui-tests.md).
## Boundaries
### Ask for Clarification When
- Ambiguous spec after scanning relevant docs
- Cross-module impact (shared enum/struct) is unclear
- Security, elevation, or installer changes involved
- GPO or policy handling modifications needed
### Areas Requiring Extra Care
| Area | Concern | Reference |
|------|---------|-----------|
| `src/common/` | ABI breaks | [Common Libraries Instructions](.github/instructions/common-libraries.instructions.md) |
| `src/runner/`, `src/settings-ui/` | IPC contracts, schema | [Runner & Settings UI Instructions](.github/instructions/runner-settings-ui.instructions.md) |
| Installer files | Release impact | Careful review required |
| Elevation/GPO logic | Security | Confirm no regression in policy handling |
### What NOT to Do
- Don't merge incomplete features into main (use feature branches)
- Don't break IPC/JSON contracts without updating both runner and settings-ui
- Don't add noisy logs in hot paths
- Don't introduce third-party deps without PM approval and `NOTICE.md` update
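To illustrate the hot-path logging rule above, a small sketch: count events in the frequently invoked callback and emit a single summary line later. The surrounding types are hypothetical; `Logger` refers to the managed logger described in the logging docs.

```csharp
using ManagedCommon;

// Hypothetical hook-style consumer; the point is the pattern, not the API surface.
public static class HotPathLoggingExample
{
    private static int _eventsSinceLastSummary;

    public static void OnLowLevelEvent()
    {
        // Avoid logging here: this callback can fire thousands of times per
        // second and per-event messages would flood the log.
        _eventsSinceLastSummary++;
    }

    public static void OnIdle()
    {
        // Summarize outside the hot path instead.
        if (_eventsSinceLastSummary > 0)
        {
            Logger.LogDebug($"Handled {_eventsSinceLastSummary} events since the last summary.");
            _eventsSinceLastSummary = 0;
        }
    }
}
```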
## Validation Checklist
Before finishing, verify:
- [ ] Build completes cleanly with exit code 0
- [ ] Tests updated and passing locally
- [ ] No unintended ABI breaks or schema changes
- [ ] IPC contracts consistent between runner and settings-ui
- [ ] New dependencies added to `NOTICE.md`
- [ ] PR is atomic (one logical change), with issue linked
## Documentation Index
### Core Architecture
- [Architecture Overview](doc/devdocs/core/architecture.md)
- [Runner](doc/devdocs/core/runner.md)
- [Settings System](doc/devdocs/core/settings/readme.md)
- [Module Interface](doc/devdocs/modules/interface.md)
### Development
- [Coding Guidelines](doc/devdocs/development/guidelines.md)
- [Coding Style](doc/devdocs/development/style.md)
- [Logging](doc/devdocs/development/logging.md)
- [UI Tests](doc/devdocs/development/ui-tests.md)
- [Fuzzing Tests](doc/devdocs/tools/fuzzingtesting.md)
### Build & Tools
- [Build Guidelines](tools/build/BUILD-GUIDELINES.md)
- [Tools Overview](doc/devdocs/tools/readme.md)
### Instructions (Auto-Applied)
- [Runner & Settings UI](.github/instructions/runner-settings-ui.instructions.md)
- [Common Libraries](.github/instructions/common-libraries.instructions.md)

View File

@@ -1,16 +0,0 @@
---
applyTo: "**/*.cs,**/*.cpp,**/*.c,**/*.h,**/*.hpp"
---
# Common shared libraries guidance (concise)
Scope
- Logging, IPC, settings, DPI, telemetry, utilities consumed by multiple modules.
Guidelines
- Avoid breaking public headers/APIs; if changed, search & update all callers.
- Coordinate ABI-impacting struct/class layout changes; keep binary compatibility.
- Watch perf in hot paths (hooks, timers, serialization); avoid avoidable allocations.
- Ask before adding third-party deps or changing serialization formats.
Acceptance
- No unintended ABI breaks, no noisy logs, new non-obvious symbols briefly commented.

View File

@@ -1,17 +0,0 @@
---
applyTo: "**/*.cpp,**/*.c,**/*.h,**/*.hpp,**/*.rc"
---
# Runner tray / host process guidance
Scope
- Module bootstrap, hotkey management, settings bridge, update/elevation handling.
Guidelines
- If IPC/JSON contracts change, mirror updates in `src/settings-ui/**`.
- Keep module discovery in `src/runner/main.cpp` in sync when adding/removing modules.
- Keep startup lean: avoid blocking/network calls in early init path.
- Preserve GPO & elevation behaviors; confirm no regression in policy handling.
- Ask before modifying update workflow or elevation logic.
Acceptance
- Stable startup, consistent contracts, no unnecessary logging noise.

View File

@@ -1,17 +0,0 @@
---
applyTo: "**/*.cs,**/*.xaml"
---
# Settings UI configuration app guidance
Scope
- WinUI/WPF UI, communicates with Runner over named pipes; manages persisted settings schema.
Guidelines
- Don't break settings schema silently; add migration when shape changes.
- If IPC/JSON contracts change, align with `src/runner/**` implementation.
- Keep UI responsive: marshal to UI thread for UI-bound operations.
- Reuse existing styles/resources; avoid duplicate theme keys.
- Add/adjust migration or serialization tests when changing persisted settings.
Acceptance
- Schema integrity preserved, responsive UI, consistent contracts, no style duplication.