Compare commits

...

16 Commits

Author SHA1 Message Date
bahdotsh
d1268d55cf feat: move log stream composition and filtering to background thread
- Resolves #29: UI unresponsiveness in logs tab
- Add LogProcessor with background thread for async log processing
- Implement pre-processed log caching with ProcessedLogEntry
- Replace frame-by-frame log processing with cached results
- Add automatic log change detection for app and system logs
- Optimize rendering from O(n) to O(1) complexity
- Maintain all search, filter, and highlighting functionality
- Fix clippy warning for redundant pattern matching

Performance improvements:
- Log processing moved to separate thread with 50ms debouncing
- UI rendering no longer blocks on log filtering/formatting
- Supports thousands of logs without UI lag
- Non-blocking request/response pattern with mpsc channels
2025-08-13 13:38:17 +05:30
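The non-blocking request/response pattern this commit describes can be sketched roughly as follows (a minimal illustration using `std::sync::mpsc`; the `Request`/`Response` names are hypothetical, not wrkflw's actual types):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical messages exchanged between the UI and the background
// log-processing thread; illustrative only.
enum Request {
    Process(Vec<String>), // raw log lines to filter/format
    Shutdown,
}

struct Response {
    processed: Vec<String>, // cached, render-ready entries
}

fn spawn_log_processor() -> (mpsc::Sender<Request>, mpsc::Receiver<Response>) {
    let (req_tx, req_rx) = mpsc::channel::<Request>();
    let (resp_tx, resp_rx) = mpsc::channel::<Response>();

    thread::spawn(move || {
        for req in req_rx {
            match req {
                Request::Process(lines) => {
                    // Expensive filtering happens off the UI thread.
                    let processed: Vec<String> = lines
                        .into_iter()
                        .filter(|l| l.contains("ERROR"))
                        .collect();
                    let _ = resp_tx.send(Response { processed });
                }
                Request::Shutdown => break,
            }
        }
    });

    (req_tx, resp_rx)
}
```

The UI thread would call `try_recv` each frame and keep rendering the last cached `Response`, so rendering stays O(1) regardless of log volume.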
Gokul
a146d94c35 Merge pull request #34 from bahdotsh/fix/runs-on-array-support
fix: Support array format for runs-on field in GitHub Actions workflows
2025-08-13 13:24:35 +05:30
bahdotsh
7636195380 fix: Support array format for runs-on field in GitHub Actions workflows
- Add custom deserializer for runs-on field to handle both string and array formats
- Update Job struct to use Vec<String> instead of String for runs-on field
- Modify executor to extract first element from runs-on array for runner selection
- Add test workflow to verify both string and array formats work correctly
- Maintain backwards compatibility with existing string-based workflows

Fixes issue where workflows with runs-on: [self-hosted, ubuntu, small] format
would fail with 'invalid type: sequence, expected a string' error.

This change aligns with GitHub Actions specification which supports:
- String format: runs-on: ubuntu-latest
- Array format: runs-on: [self-hosted, ubuntu, small]
2025-08-13 13:21:58 +05:30
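The fix described above normalizes both forms to a `Vec<String>`. A std-only sketch of that normalization (the actual code uses a serde custom deserializer; `RunsOnInput` and the helper names here are hypothetical):

```rust
// Both accepted input shapes for the `runs-on` field.
#[derive(Debug)]
enum RunsOnInput {
    One(String),       // runs-on: ubuntu-latest
    Many(Vec<String>), // runs-on: [self-hosted, ubuntu, small]
}

// Normalize either shape to Vec<String>, as the updated Job struct stores it.
fn normalize_runs_on(input: RunsOnInput) -> Vec<String> {
    match input {
        RunsOnInput::One(label) => vec![label],
        RunsOnInput::Many(labels) => labels,
    }
}

// The executor takes the first label when selecting a runner.
fn primary_runner(labels: &[String]) -> Option<&str> {
    labels.first().map(String::as_str)
}
```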
Gokul
98afdb3372 Merge pull request #33 from bahdotsh/docs/add-crate-readmes
docs(readme): add per-crate READMEs and enhance wrkflw crate README
2025-08-12 15:12:44 +05:30
bahdotsh
58de01e69f docs(readme): add per-crate READMEs and enhance wrkflw crate README 2025-08-12 15:09:38 +05:30
Gokul
880cae3899 Merge pull request #32 from bahdotsh/bahdotsh/reusable-workflow-execution
feat: add execution support for reusable workflows
2025-08-12 14:57:49 +05:30
bahdotsh
66e540645d feat(executor,parser,docs): add execution support for reusable workflows (jobs.<id>.uses)
- Parser: make jobs.runs-on optional; add job-level uses/with/secrets for caller jobs
- Executor: resolve and run local/remote called workflows; propagate inputs/secrets; summarize results
- Docs: document feature, usage, and current limits in README
- Tests: add execution tests for local reusable workflows (success/failure)

Limits:
- Does not propagate outputs back to caller
- secrets: inherit not special-cased; use mapping
- Remote private repos not yet supported; public only
- Cycle detection for nested calls unchanged
2025-08-12 14:53:07 +05:30

bahdotsh
79b6389f54 fix: resolve schema file path issues for cargo publish
- Copied schema files into parser crate src directory
- Updated include_str! paths to be relative to source files
- Ensures schemas are bundled with crate during publish
- Resolves packaging and verification issues during publication

Fixes the build error that was preventing crate publication.
2025-08-09 18:14:25 +05:30
bahdotsh
5d55812872 fix: correct schema file paths for cargo publish
- Updated include_str! paths from ../../../ to ../../../../
- This resolves packaging issues during cargo publish
- Fixes schema loading for parser crate publication
2025-08-09 18:12:56 +05:30
bahdotsh
537bf2f9d1 chore: bump version to 0.6.0
- Updated workspace version from 0.5.0 to 0.6.0
- Updated all internal crate dependencies to 0.6.0
- Verified all tests pass and builds succeed
2025-08-09 17:46:09 +05:30
bahdotsh
f0b6633cb8 renamed 2025-08-09 17:03:03 +05:30
bahdotsh
181b5c5463 feat: reorganize test files and delete manual test checklist
- Move test workflows to tests/workflows/
- Move GitLab CI fixtures to tests/fixtures/gitlab-ci/
- Move test scripts to tests/scripts/
- Move Podman testing docs to tests/
- Update paths in test scripts and documentation
- Delete MANUAL_TEST_CHECKLIST.md as requested
- Update tests/README.md to reflect new organization
2025-08-09 15:30:53 +05:30
bahdotsh
1cc3bf98b6 feat: bump version to 0.5.0 for podman support 2025-08-09 15:24:49 +05:30
Gokul
af8ac002e4 Merge pull request #28 from bahdotsh/podman
feat: Add comprehensive Podman container runtime support
2025-08-09 15:11:58 +05:30
bahdotsh
50e62fbc1f feat: Add comprehensive Podman container runtime support
Add Podman as a new container runtime option alongside Docker and emulation modes,
enabling workflow execution in rootless containers for enhanced security and
compatibility in restricted environments.

Features:
- New PodmanRuntime implementing ContainerRuntime trait
- CLI --runtime flag with docker/podman/emulation options
- TUI runtime cycling (e → Docker → Podman → Emulation)
- Full container lifecycle management (run, pull, build, cleanup)
- Container preservation support with --preserve-containers-on-failure
- Automatic fallback to emulation when Podman unavailable
- Rootless container execution without privileged daemon

Implementation:
- crates/executor/src/podman.rs: Complete Podman runtime implementation
- crates/executor/src/engine.rs: Runtime type enum and initialization
- crates/ui/: TUI integration with runtime switching and status display
- crates/wrkflw/src/main.rs: CLI argument parsing for runtime selection

Testing & Documentation:
- TESTING_PODMAN.md: Comprehensive testing guide
- test-podman-basic.sh: Automated verification script
- test-preserve-containers.sh: Container preservation testing
- MANUAL_TEST_CHECKLIST.md: Manual verification checklist
- README.md: Complete Podman documentation and usage examples

Benefits:
- Organizations restricting Docker installation can use Podman
- Enhanced security through daemonless, rootless architecture
- Drop-in compatibility with existing Docker-based workflows
- Consistent container execution across different environments

Closes: Support for rootless container execution in restricted environments
2025-08-09 15:06:17 +05:30
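The runtime cycling (the TUI `e` key) and the automatic fallback this commit describes can be sketched as follows (a hypothetical enum; the actual definition in crates/executor/src/engine.rs may differ):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum RuntimeType {
    Docker,
    Podman,
    Emulation,
}

impl RuntimeType {
    // Cycle Docker -> Podman -> Emulation -> Docker, as the TUI `e` key does.
    fn next(self) -> Self {
        match self {
            RuntimeType::Docker => RuntimeType::Podman,
            RuntimeType::Podman => RuntimeType::Emulation,
            RuntimeType::Emulation => RuntimeType::Docker,
        }
    }

    // Fall back to emulation when the selected runtime is unavailable.
    fn or_fallback(self, available: bool) -> Self {
        if available { self } else { RuntimeType::Emulation }
    }
}
```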
Gokul
30659ac5d6 Merge pull request #27 from bahdotsh/bahdotsh/validation-exit-codes
feat: add exit code support for validation failures
2025-08-09 14:23:08 +05:30
94 changed files with 8878 additions and 829 deletions

Cargo.lock generated

@@ -486,46 +486,6 @@ dependencies = [
"windows-sys 0.59.0",
]
[[package]]
name = "evaluator"
version = "0.4.0"
dependencies = [
"colored",
"models",
"serde_yaml",
"validators",
]
[[package]]
name = "executor"
version = "0.4.0"
dependencies = [
"async-trait",
"bollard",
"chrono",
"dirs",
"futures",
"futures-util",
"lazy_static",
"logging",
"matrix",
"models",
"num_cpus",
"once_cell",
"parser",
"regex",
"runtime",
"serde",
"serde_json",
"serde_yaml",
"tar",
"tempfile",
"thiserror",
"tokio",
"utils",
"uuid",
]
[[package]]
name = "fancy-regex"
version = "0.11.0"
@@ -714,35 +674,6 @@ version = "0.31.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "07e28edb80900c19c28f1072f2e8aeca7fa06b23cd4169cefe1af5aa3260783f"
[[package]]
name = "github"
version = "0.4.0"
dependencies = [
"lazy_static",
"models",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"thiserror",
]
[[package]]
name = "gitlab"
version = "0.4.0"
dependencies = [
"lazy_static",
"models",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"thiserror",
"urlencoding",
]
[[package]]
name = "h2"
version = "0.3.26"
@@ -1215,28 +1146,6 @@ version = "0.4.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "13dc2df351e3202783a1fe0d44375f7295ffb4049267b0f3018346dc122a1d94"
[[package]]
name = "logging"
version = "0.4.0"
dependencies = [
"chrono",
"models",
"once_cell",
"serde",
"serde_yaml",
]
[[package]]
name = "matrix"
version = "0.4.0"
dependencies = [
"indexmap 2.8.0",
"models",
"serde",
"serde_yaml",
"thiserror",
]
[[package]]
name = "memchr"
version = "2.7.4"
@@ -1281,16 +1190,6 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "models"
version = "0.4.0"
dependencies = [
"serde",
"serde_json",
"serde_yaml",
"thiserror",
]
[[package]]
name = "native-tls"
version = "0.2.14"
@@ -1511,20 +1410,6 @@ dependencies = [
"windows-targets 0.52.6",
]
[[package]]
name = "parser"
version = "0.4.0"
dependencies = [
"jsonschema",
"matrix",
"models",
"serde",
"serde_json",
"serde_yaml",
"tempfile",
"thiserror",
]
[[package]]
name = "paste"
version = "1.0.15"
@@ -1731,23 +1616,6 @@ dependencies = [
"winreg",
]
[[package]]
name = "runtime"
version = "0.4.0"
dependencies = [
"async-trait",
"futures",
"logging",
"models",
"once_cell",
"serde",
"serde_yaml",
"tempfile",
"tokio",
"utils",
"which",
]
[[package]]
name = "rustc-demangle"
version = "0.1.24"
@@ -2243,28 +2111,6 @@ version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e421abadd41a4225275504ea4d6566923418b7f05506fbc9c0fe86ba7396114b"
[[package]]
name = "ui"
version = "0.4.0"
dependencies = [
"chrono",
"crossterm 0.26.1",
"evaluator",
"executor",
"futures",
"github",
"logging",
"models",
"ratatui",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"tokio",
"utils",
]
[[package]]
name = "unicode-ident"
version = "1.0.18"
@@ -2324,16 +2170,6 @@ version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
[[package]]
name = "utils"
version = "0.4.0"
dependencies = [
"models",
"nix",
"serde",
"serde_yaml",
]
[[package]]
name = "uuid"
version = "1.16.0"
@@ -2343,16 +2179,6 @@ dependencies = [
"getrandom 0.3.2",
]
[[package]]
name = "validators"
version = "0.4.0"
dependencies = [
"matrix",
"models",
"serde",
"serde_yaml",
]
[[package]]
name = "vcpkg"
version = "0.2.15"
@@ -2719,7 +2545,7 @@ checksum = "1e9df38ee2d2c3c5948ea468a8406ff0db0b29ae1ffde1bcf20ef305bcc95c51"
[[package]]
name = "wrkflw"
version = "0.4.0"
version = "0.6.0"
dependencies = [
"bollard",
"chrono",
@@ -2727,41 +2553,215 @@ dependencies = [
"colored",
"crossterm 0.26.1",
"dirs",
"evaluator",
"executor",
"futures",
"futures-util",
"github",
"gitlab",
"indexmap 2.8.0",
"itertools",
"lazy_static",
"libc",
"log",
"logging",
"matrix",
"models",
"nix",
"num_cpus",
"once_cell",
"parser",
"ratatui",
"rayon",
"regex",
"reqwest",
"runtime",
"serde",
"serde_json",
"serde_yaml",
"tempfile",
"thiserror",
"tokio",
"ui",
"urlencoding",
"utils",
"uuid",
"validators",
"walkdir",
"wrkflw-evaluator",
"wrkflw-executor",
"wrkflw-github",
"wrkflw-gitlab",
"wrkflw-logging",
"wrkflw-matrix",
"wrkflw-models",
"wrkflw-parser",
"wrkflw-runtime",
"wrkflw-ui",
"wrkflw-utils",
"wrkflw-validators",
]
[[package]]
name = "wrkflw-evaluator"
version = "0.6.0"
dependencies = [
"colored",
"serde_yaml",
"wrkflw-models",
"wrkflw-validators",
]
[[package]]
name = "wrkflw-executor"
version = "0.6.0"
dependencies = [
"async-trait",
"bollard",
"chrono",
"dirs",
"futures",
"futures-util",
"lazy_static",
"num_cpus",
"once_cell",
"regex",
"serde",
"serde_json",
"serde_yaml",
"tar",
"tempfile",
"thiserror",
"tokio",
"uuid",
"wrkflw-logging",
"wrkflw-matrix",
"wrkflw-models",
"wrkflw-parser",
"wrkflw-runtime",
"wrkflw-utils",
]
[[package]]
name = "wrkflw-github"
version = "0.6.0"
dependencies = [
"lazy_static",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"thiserror",
"wrkflw-models",
]
[[package]]
name = "wrkflw-gitlab"
version = "0.6.0"
dependencies = [
"lazy_static",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"thiserror",
"urlencoding",
"wrkflw-models",
]
[[package]]
name = "wrkflw-logging"
version = "0.6.0"
dependencies = [
"chrono",
"once_cell",
"serde",
"serde_yaml",
"wrkflw-models",
]
[[package]]
name = "wrkflw-matrix"
version = "0.6.0"
dependencies = [
"indexmap 2.8.0",
"serde",
"serde_yaml",
"thiserror",
"wrkflw-models",
]
[[package]]
name = "wrkflw-models"
version = "0.6.0"
dependencies = [
"serde",
"serde_json",
"serde_yaml",
"thiserror",
]
[[package]]
name = "wrkflw-parser"
version = "0.6.0"
dependencies = [
"jsonschema",
"serde",
"serde_json",
"serde_yaml",
"tempfile",
"thiserror",
"wrkflw-matrix",
"wrkflw-models",
]
[[package]]
name = "wrkflw-runtime"
version = "0.6.0"
dependencies = [
"async-trait",
"futures",
"once_cell",
"serde",
"serde_yaml",
"tempfile",
"tokio",
"which",
"wrkflw-logging",
"wrkflw-models",
"wrkflw-utils",
]
[[package]]
name = "wrkflw-ui"
version = "0.6.0"
dependencies = [
"chrono",
"crossterm 0.26.1",
"futures",
"ratatui",
"regex",
"reqwest",
"serde",
"serde_json",
"serde_yaml",
"tokio",
"wrkflw-evaluator",
"wrkflw-executor",
"wrkflw-github",
"wrkflw-logging",
"wrkflw-models",
"wrkflw-utils",
]
[[package]]
name = "wrkflw-utils"
version = "0.6.0"
dependencies = [
"nix",
"serde",
"serde_yaml",
"wrkflw-models",
]
[[package]]
name = "wrkflw-validators"
version = "0.6.0"
dependencies = [
"serde",
"serde_yaml",
"wrkflw-matrix",
"wrkflw-models",
]
[[package]]


@@ -5,7 +5,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.4.0"
version = "0.6.0"
edition = "2021"
description = "A GitHub Actions workflow validator and executor"
documentation = "https://github.com/bahdotsh/wrkflw"

README.md

@@ -14,22 +14,58 @@ WRKFLW is a powerful command-line tool for validating and executing GitHub Actio
- **TUI Interface**: A full-featured terminal user interface for managing and monitoring workflow executions
- **Validate Workflow Files**: Check for syntax errors and common mistakes in GitHub Actions workflow files with proper exit codes for CI/CD integration
- **Execute Workflows Locally**: Run workflows directly on your machine using Docker containers
- **Emulation Mode**: Optional execution without Docker by emulating the container environment locally
- **Execute Workflows Locally**: Run workflows directly on your machine using Docker or Podman containers
- **Multiple Container Runtimes**: Support for Docker, Podman, and emulation mode for maximum flexibility
- **Job Dependency Resolution**: Automatically determines the correct execution order based on job dependencies
- **Docker Integration**: Execute workflow steps in isolated Docker containers with proper environment setup
- **Container Integration**: Execute workflow steps in isolated containers with proper environment setup
- **GitHub Context**: Provides GitHub-like environment variables and workflow commands
- **Multiple Runtime Modes**: Choose between Docker containers or local emulation for maximum flexibility
- **Rootless Execution**: Podman support enables running containers without root privileges
- **Action Support**: Supports various GitHub Actions types:
- Docker container actions
- JavaScript actions
- Composite actions
- Local actions
- **Special Action Handling**: Native handling for commonly used actions like `actions/checkout`
- **Reusable Workflows (Caller Jobs)**: Execute jobs that call reusable workflows via `jobs.<id>.uses` (local path or `owner/repo/path@ref`)
- **Output Capturing**: View logs, step outputs, and execution details
- **Parallel Job Execution**: Runs independent jobs in parallel for faster workflow execution
- **Trigger Workflows Remotely**: Manually trigger workflow runs on GitHub or GitLab
## Requirements
### Container Runtime (Optional)
WRKFLW supports multiple container runtimes for isolated execution:
- **Docker**: The default container runtime. Install from [docker.com](https://docker.com)
- **Podman**: A rootless container runtime. Perfect for environments where Docker isn't available or permitted. Install from [podman.io](https://podman.io)
- **Emulation**: No container runtime required. Executes commands directly on the host system
### Podman Support
Podman is particularly useful in environments where:
- Docker installation is not permitted by your organization
- Root privileges are not available for Docker daemon
- You prefer rootless container execution
- Enhanced security through daemonless architecture is desired
To use Podman:
```bash
# Install Podman (varies by OS)
# On macOS with Homebrew:
brew install podman
# On Ubuntu/Debian:
sudo apt-get install podman
# Initialize Podman machine (macOS/Windows)
podman machine init
podman machine start
# Use with wrkflw
wrkflw run --runtime podman .github/workflows/ci.yml
```
## Installation
The recommended way to install `wrkflw` is using Rust's package manager, Cargo:
@@ -115,8 +151,11 @@ fi
# Run a workflow with Docker (default)
wrkflw run .github/workflows/ci.yml
# Run a workflow in emulation mode (without Docker)
wrkflw run --emulate .github/workflows/ci.yml
# Run a workflow with Podman instead of Docker
wrkflw run --runtime podman .github/workflows/ci.yml
# Run a workflow in emulation mode (without containers)
wrkflw run --runtime emulation .github/workflows/ci.yml
# Run with verbose output
wrkflw run --verbose .github/workflows/ci.yml
@@ -137,8 +176,11 @@ wrkflw tui path/to/workflows
# Open TUI with a specific workflow pre-selected
wrkflw tui path/to/workflow.yml
# Open TUI with Podman runtime
wrkflw tui --runtime podman
# Open TUI in emulation mode
wrkflw tui --emulate
wrkflw tui --runtime emulation
```
### Triggering Workflows Remotely
@@ -162,7 +204,7 @@ The terminal user interface provides an interactive way to manage workflows:
- **r**: Run all selected workflows
- **a**: Select all workflows
- **n**: Deselect all workflows
- **e**: Toggle between Docker and Emulation mode
- **e**: Cycle through runtime modes (Docker → Podman → Emulation)
- **v**: Toggle between Execution and Validation mode
- **Esc**: Back / Exit detailed view
- **q**: Quit application
@@ -225,20 +267,22 @@ $ wrkflw
# This will automatically load .github/workflows files into the TUI
```
## Requirements
## System Requirements
- Rust 1.67 or later
- Docker (optional, for container-based execution)
- When not using Docker, the emulation mode can run workflows using your local system tools
- Container Runtime (optional, for container-based execution):
- **Docker**: Traditional container runtime
- **Podman**: Rootless alternative to Docker
- **None**: Emulation mode runs workflows using local system tools
## How It Works
WRKFLW parses your GitHub Actions workflow files and executes each job and step in the correct order. For Docker mode, it creates containers that closely match GitHub's runner environments. The workflow execution process:
WRKFLW parses your GitHub Actions workflow files and executes each job and step in the correct order. For container modes (Docker/Podman), it creates containers that closely match GitHub's runner environments. The workflow execution process:
1. **Parsing**: Reads and validates the workflow YAML structure
2. **Dependency Resolution**: Creates an execution plan based on job dependencies
3. **Environment Setup**: Prepares GitHub-like environment variables and context
4. **Execution**: Runs each job and step either in Docker containers or through local emulation
4. **Execution**: Runs each job and step either in containers (Docker/Podman) or through local emulation
5. **Monitoring**: Tracks progress and captures outputs in the TUI or command line
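Step 2 (dependency resolution) can be sketched as a batched topological sort: jobs whose `needs` are all satisfied run together in one parallel batch. This is a minimal illustration, not wrkflw's actual resolver:

```rust
use std::collections::HashMap;

// Group jobs into batches: every job in a batch has all its `needs`
// satisfied by earlier batches, so jobs within a batch can run in parallel.
fn plan_batches(needs: &HashMap<&str, Vec<&str>>) -> Result<Vec<Vec<String>>, String> {
    let mut remaining = needs.clone();
    let mut done: Vec<String> = Vec::new();
    let mut batches = Vec::new();

    while !remaining.is_empty() {
        let mut ready: Vec<String> = remaining
            .iter()
            .filter(|(_, deps)| deps.iter().all(|d| done.contains(&d.to_string())))
            .map(|(job, _)| job.to_string())
            .collect();
        ready.sort(); // deterministic order for display/testing
        if ready.is_empty() {
            return Err("dependency cycle detected".to_string());
        }
        for job in &ready {
            remaining.remove(job.as_str());
        }
        done.extend(ready.iter().cloned());
        batches.push(ready);
    }
    Ok(batches)
}
```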
## Advanced Features
@@ -262,7 +306,7 @@ WRKFLW supports composite actions, which are actions made up of multiple steps.
### Container Cleanup
WRKFLW automatically cleans up any Docker containers created during workflow execution, even if the process is interrupted with Ctrl+C.
WRKFLW automatically cleans up any containers created during workflow execution (Docker/Podman), even if the process is interrupted with Ctrl+C.
For debugging failed workflows, you can preserve containers that fail by using the `--preserve-containers-on-failure` flag:
@@ -277,10 +321,46 @@ wrkflw tui --preserve-containers-on-failure
When a container fails with this flag enabled, WRKFLW will:
- Keep the failed container running instead of removing it
- Log the container ID and provide inspection instructions
- Show a message like: `Preserving container abc123 for debugging (exit code: 1). Use 'docker exec -it abc123 bash' to inspect.`
- Show a message like: `Preserving container abc123 for debugging (exit code: 1). Use 'docker exec -it abc123 bash' to inspect.` (Docker)
- Or: `Preserving container abc123 for debugging (exit code: 1). Use 'podman exec -it abc123 bash' to inspect.` (Podman)
This allows you to inspect the exact state of the container when the failure occurred, examine files, check environment variables, and debug issues more effectively.
### Podman-Specific Features
When using Podman as the container runtime, you get additional benefits:
**Rootless Operation:**
```bash
# Run workflows without root privileges
wrkflw run --runtime podman .github/workflows/ci.yml
```
**Enhanced Security:**
- Daemonless architecture reduces attack surface
- User namespaces provide additional isolation
- No privileged daemon required
**Container Inspection:**
```bash
# List preserved containers
podman ps -a --filter "name=wrkflw-"
# Inspect a preserved container's filesystem (without executing)
podman mount <container-id>
# Or run a new container with the same volumes
podman run --rm -it --volumes-from <failed-container> ubuntu:20.04 bash
# Clean up all wrkflw containers
podman ps -a --filter "name=wrkflw-" --format "{{.Names}}" | xargs podman rm -f
```
**Compatibility:**
- Drop-in replacement for Docker workflows
- Same CLI options and behavior
- Identical container execution environment
## Limitations
### Supported Features
@@ -288,11 +368,12 @@ This allows you to inspect the exact state of the container when the failure occ
- ✅ Job dependency resolution and parallel execution (all jobs with correct 'needs' relationships are executed in the right order, and independent jobs run in parallel)
- ✅ Matrix builds (supported for reasonable matrix sizes; very large matrices may be slow or resource-intensive)
- ✅ Environment variables and GitHub context (all standard GitHub Actions environment variables and context objects are emulated)
- ✅ Docker container actions (all actions that use Docker containers are supported in Docker mode)
- ✅ Container actions (all actions that use containers are supported in Docker and Podman modes)
- ✅ JavaScript actions (all actions that use JavaScript are supported)
- ✅ Composite actions (all composite actions, including nested and local composite actions, are supported)
- ✅ Local actions (actions referenced with local paths are supported)
- ✅ Special handling for common actions (e.g., `actions/checkout` is natively supported)
- ✅ Reusable workflows (caller): Jobs that use `jobs.<id>.uses` to call local or remote workflows are executed; inputs and secrets are propagated to the called workflow
- ✅ Workflow triggering via `workflow_dispatch` (manual triggering of workflows is supported)
- ✅ GitLab pipeline triggering (manual triggering of GitLab pipelines is supported)
- ✅ Environment files (`GITHUB_OUTPUT`, `GITHUB_ENV`, `GITHUB_PATH`, `GITHUB_STEP_SUMMARY` are fully supported)
@@ -303,22 +384,59 @@ This allows you to inspect the exact state of the container when the failure occ
### Limited or Unsupported Features (Explicit List)
- ❌ GitHub secrets and permissions: Only basic environment variables are supported. GitHub's encrypted secrets and fine-grained permissions are NOT available.
- ❌ GitHub Actions cache: Caching functionality (e.g., `actions/cache`) is NOT supported in emulation mode and only partially supported in Docker mode (no persistent cache between runs).
- ❌ GitHub Actions cache: Caching functionality (e.g., `actions/cache`) is NOT supported in emulation mode and only partially supported in Docker and Podman modes (no persistent cache between runs).
- ❌ GitHub API integrations: Only basic workflow triggering is supported. Features like workflow status reporting, artifact upload/download, and API-based job control are NOT available.
- ❌ GitHub-specific environment variables: Some advanced or dynamic environment variables (e.g., those set by GitHub runners or by the GitHub API) are emulated with static or best-effort values, but not all are fully functional.
- ❌ Large/complex matrix builds: Very large matrices (hundreds or thousands of job combinations) may not be practical due to performance and resource limits.
- ❌ Network-isolated actions: Actions that require strict network isolation or custom network configuration may not work out-of-the-box and may require manual Docker configuration.
- ❌ Network-isolated actions: Actions that require strict network isolation or custom network configuration may not work out-of-the-box and may require manual container runtime configuration.
- ❌ Some event triggers: Only `workflow_dispatch` (manual trigger) is fully supported. Other triggers (e.g., `push`, `pull_request`, `schedule`, `release`, etc.) are NOT supported.
- ❌ GitHub runner-specific features: Features that depend on the exact GitHub-hosted runner environment (e.g., pre-installed tools, runner labels, or hardware) are NOT guaranteed to match. Only a best-effort emulation is provided.
- ❌ Windows and macOS runners: Only Linux-based runners are fully supported. Windows and macOS jobs are NOT supported.
- ❌ Service containers: Service containers (e.g., databases defined in `services:`) are only supported in Docker mode. In emulation mode, they are NOT supported.
- ❌ Service containers: Service containers (e.g., databases defined in `services:`) are only supported in Docker and Podman modes. In emulation mode, they are NOT supported.
- ❌ Artifacts: Uploading and downloading artifacts between jobs/steps is NOT supported.
- ❌ Job/step timeouts: Custom timeouts for jobs and steps are NOT enforced.
- ❌ Job/step concurrency and cancellation: Features like `concurrency` and job cancellation are NOT supported.
- ❌ Expressions and advanced YAML features: Most common expressions are supported, but some advanced or edge-case expressions may not be fully implemented.
- ⚠️ Reusable workflows (limits):
- Outputs from called workflows are not propagated back to the caller (`needs.<id>.outputs.*` not supported)
- `secrets: inherit` is not special-cased; provide a mapping to pass secrets
- Remote calls clone public repos via HTTPS; private repos require preconfigured access (not yet implemented)
- Deeply nested reusable calls work but lack cycle detection beyond regular job dependency checks
## Reusable Workflows
WRKFLW supports executing reusable workflow caller jobs.
### Syntax
```yaml
jobs:
  call-local:
    uses: ./.github/workflows/shared.yml
  call-remote:
    uses: my-org/my-repo/.github/workflows/shared.yml@v1
    with:
      foo: bar
    secrets:
      token: ${{ secrets.MY_TOKEN }}
```
### Behavior
- Local references are resolved relative to the current working directory.
- Remote references are shallow-cloned at the specified `@ref` into a temporary directory.
- `with:` entries are exposed to the called workflow as environment variables `INPUT_<KEY>`.
- `secrets:` mapping entries are exposed as environment variables `SECRET_<KEY>`.
- The called workflow executes according to its own `jobs`/`needs`; a summary of its job results is reported as a single result for the caller job.
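A minimal sketch of the `INPUT_<KEY>` / `SECRET_<KEY>` mapping described above (the uppercasing and hyphen-to-underscore handling follow the GitHub Actions convention and are an assumption here; wrkflw's exact helper may differ):

```rust
// Map a `with:` or `secrets:` entry to an environment variable pair.
// ASSUMPTION: keys are uppercased and hyphens become underscores,
// mirroring how GitHub Actions exposes `INPUT_*` variables.
fn to_env(prefix: &str, key: &str, value: &str) -> (String, String) {
    let name = format!("{}_{}", prefix, key.to_uppercase().replace('-', "_"));
    (name, value.to_string())
}
```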
### Current limitations
- Outputs from called workflows are not surfaced back to the caller.
- `secrets: inherit` is not supported; specify an explicit mapping.
- Private repositories for remote `uses:` are not yet supported.
### Runtime Mode Differences
- **Docker Mode**: Provides the closest match to GitHub's environment, including support for Docker container actions, service containers, and Linux-based jobs. Some advanced container configurations may still require manual setup.
- **Podman Mode**: Similar to Docker mode but uses Podman for container execution. Offers rootless container support and enhanced security. Fully compatible with Docker-based workflows.
- **Emulation Mode**: Runs workflows using the local system tools. Limitations:
- Only supports local and JavaScript actions (no Docker container actions)
- No support for service containers
@@ -373,7 +491,7 @@ The following roadmap outlines our planned approach to implementing currently un
### 6. Network-Isolated Actions
- **Goal:** Support custom network configurations and strict isolation for actions.
- **Plan:**
- Add advanced Docker network configuration options.
- Add advanced container network configuration options for Docker and Podman.
- Document best practices for network isolation.
### 7. Event Triggers


@@ -1,14 +1,19 @@
[package]
name = "evaluator"
name = "wrkflw-evaluator"
version.workspace = true
edition.workspace = true
description = "Workflow evaluation for wrkflw"
description = "Workflow evaluation functionality for wrkflw execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
validators = { path = "../validators" }
wrkflw-models = { path = "../models", version = "0.6.0" }
wrkflw-validators = { path = "../validators", version = "0.6.0" }
# External dependencies
colored.workspace = true


@@ -0,0 +1,29 @@
## wrkflw-evaluator
Small, focused helper for statically evaluating GitHub Actions workflow files.
- **Purpose**: Fast structural checks (e.g., `name`, `on`, `jobs`) before deeper validation/execution
- **Used by**: `wrkflw` CLI and TUI during validation flows
### Example
```rust
use std::path::Path;

let result = wrkflw_evaluator::evaluate_workflow_file(
    Path::new(".github/workflows/ci.yml"),
    /* verbose */ true,
)
.expect("evaluation failed");

if result.is_valid {
    println!("Workflow looks structurally sound");
} else {
    for issue in result.issues {
        println!("- {}", issue);
    }
}
```
### Notes
- This crate focuses on structural checks; deeper rules live in `wrkflw-validators`.
- Most consumers should prefer the top-level `wrkflw` CLI for end-to-end UX.


@@ -3,8 +3,8 @@ use serde_yaml::{self, Value};
use std::fs;
use std::path::Path;
use models::ValidationResult;
use validators::{validate_jobs, validate_triggers};
use wrkflw_models::ValidationResult;
use wrkflw_validators::{validate_jobs, validate_triggers};
pub fn evaluate_workflow_file(path: &Path, verbose: bool) -> Result<ValidationResult, String> {
let content = fs::read_to_string(path).map_err(|e| format!("Failed to read file: {}", e))?;


@@ -1,18 +1,23 @@
[package]
name = "executor"
name = "wrkflw-executor"
version.workspace = true
edition.workspace = true
description = "Workflow executor for wrkflw"
description = "Workflow execution engine for wrkflw"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
parser = { path = "../parser" }
runtime = { path = "../runtime" }
logging = { path = "../logging" }
matrix = { path = "../matrix" }
utils = { path = "../utils" }
wrkflw-models = { path = "../models", version = "0.6.0" }
wrkflw-parser = { path = "../parser", version = "0.6.0" }
wrkflw-runtime = { path = "../runtime", version = "0.6.0" }
wrkflw-logging = { path = "../logging", version = "0.6.0" }
wrkflw-matrix = { path = "../matrix", version = "0.6.0" }
wrkflw-utils = { path = "../utils", version = "0.6.0" }
# External dependencies
async-trait.workspace = true

crates/executor/README.md Normal file

@@ -0,0 +1,29 @@
## wrkflw-executor
The execution engine that runs GitHub Actions workflows locally (Docker, Podman, or emulation).
- **Features**:
- Job graph execution with `needs` ordering and parallelism
- Docker/Podman container steps and emulation mode
- Basic environment/context wiring compatible with Actions
- **Used by**: `wrkflw` CLI and TUI
### API sketch
```rust
use wrkflw_executor::{execute_workflow, ExecutionConfig, RuntimeType};

let cfg = ExecutionConfig {
    runtime: RuntimeType::Docker,
    verbose: true,
    preserve_containers_on_failure: false,
};

// Path to a workflow YAML
let workflow_path = std::path::Path::new(".github/workflows/ci.yml");
let result = execute_workflow(workflow_path, cfg).await?;
println!("workflow status: {:?}", result.summary_status);
```
Prefer using the `wrkflw` binary for a complete UX across validation, execution, and logs.

View File

@@ -1,5 +1,5 @@
use parser::workflow::WorkflowDefinition;
use std::collections::{HashMap, HashSet};
use wrkflw_parser::workflow::WorkflowDefinition;
pub fn resolve_dependencies(workflow: &WorkflowDefinition) -> Result<Vec<Vec<String>>, String> {
let jobs = &workflow.jobs;

View File

@@ -6,14 +6,14 @@ use bollard::{
Docker,
};
use futures_util::StreamExt;
use logging;
use once_cell::sync::Lazy;
use runtime::container::{ContainerError, ContainerOutput, ContainerRuntime};
use std::collections::HashMap;
use std::path::Path;
use std::sync::Mutex;
use utils;
use utils::fd;
use wrkflw_logging;
use wrkflw_runtime::container::{ContainerError, ContainerOutput, ContainerRuntime};
use wrkflw_utils;
use wrkflw_utils::fd;
static RUNNING_CONTAINERS: Lazy<Mutex<Vec<String>>> = Lazy::new(|| Mutex::new(Vec::new()));
static CREATED_NETWORKS: Lazy<Mutex<Vec<String>>> = Lazy::new(|| Mutex::new(Vec::new()));
@@ -50,7 +50,7 @@ impl DockerRuntime {
match CUSTOMIZED_IMAGES.lock() {
Ok(images) => images.get(&key).cloned(),
Err(e) => {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
None
}
}
@@ -62,7 +62,7 @@ impl DockerRuntime {
if let Err(e) = CUSTOMIZED_IMAGES.lock().map(|mut images| {
images.insert(key, new_image.to_string());
}) {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
}
}
@@ -72,7 +72,7 @@ impl DockerRuntime {
let image_keys = match CUSTOMIZED_IMAGES.lock() {
Ok(keys) => keys,
Err(e) => {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
return None;
}
};
@@ -107,7 +107,7 @@ impl DockerRuntime {
match CUSTOMIZED_IMAGES.lock() {
Ok(images) => images.get(&key).cloned(),
Err(e) => {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
None
}
}
@@ -134,7 +134,7 @@ impl DockerRuntime {
if let Err(e) = CUSTOMIZED_IMAGES.lock().map(|mut images| {
images.insert(key, new_image.to_string());
}) {
logging::error(&format!("Failed to acquire lock: {}", e));
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
}
}
@@ -318,7 +318,7 @@ pub fn is_available() -> bool {
}
}
Err(_) => {
logging::debug("Docker CLI is not available");
wrkflw_logging::debug("Docker CLI is not available");
return false;
}
}
@@ -331,7 +331,7 @@ pub fn is_available() -> bool {
{
Ok(rt) => rt,
Err(e) => {
logging::error(&format!(
wrkflw_logging::error(&format!(
"Failed to create runtime for Docker availability check: {}",
e
));
@@ -352,17 +352,25 @@ pub fn is_available() -> bool {
{
Ok(Ok(_)) => true,
Ok(Err(e)) => {
logging::debug(&format!("Docker daemon ping failed: {}", e));
wrkflw_logging::debug(&format!(
"Docker daemon ping failed: {}",
e
));
false
}
Err(_) => {
logging::debug("Docker daemon ping timed out after 1 second");
wrkflw_logging::debug(
"Docker daemon ping timed out after 1 second",
);
false
}
}
}
Err(e) => {
logging::debug(&format!("Docker daemon connection failed: {}", e));
wrkflw_logging::debug(&format!(
"Docker daemon connection failed: {}",
e
));
false
}
}
@@ -371,7 +379,7 @@ pub fn is_available() -> bool {
{
Ok(result) => result,
Err(_) => {
logging::debug("Docker availability check timed out");
wrkflw_logging::debug("Docker availability check timed out");
false
}
}
@@ -379,7 +387,9 @@ pub fn is_available() -> bool {
}) {
Ok(result) => result,
Err(_) => {
logging::debug("Failed to redirect stderr when checking Docker availability");
wrkflw_logging::debug(
"Failed to redirect stderr when checking Docker availability",
);
false
}
}
@@ -393,7 +403,7 @@ pub fn is_available() -> bool {
return match handle.join() {
Ok(result) => result,
Err(_) => {
logging::warning("Docker availability check thread panicked");
wrkflw_logging::warning("Docker availability check thread panicked");
false
}
};
@@ -401,7 +411,9 @@ pub fn is_available() -> bool {
std::thread::sleep(std::time::Duration::from_millis(50));
}
logging::warning("Docker availability check timed out, assuming Docker is not available");
wrkflw_logging::warning(
"Docker availability check timed out, assuming Docker is not available",
);
false
}
@@ -444,19 +456,19 @@ pub async fn cleanup_resources(docker: &Docker) {
tokio::join!(cleanup_containers(docker), cleanup_networks(docker));
if let Err(e) = container_result {
logging::error(&format!("Error during container cleanup: {}", e));
wrkflw_logging::error(&format!("Error during container cleanup: {}", e));
}
if let Err(e) = network_result {
logging::error(&format!("Error during network cleanup: {}", e));
wrkflw_logging::error(&format!("Error during network cleanup: {}", e));
}
})
.await
{
Ok(_) => logging::debug("Docker cleanup completed within timeout"),
Err(_) => {
logging::warning("Docker cleanup timed out, some resources may not have been removed")
}
Ok(_) => wrkflw_logging::debug("Docker cleanup completed within timeout"),
Err(_) => wrkflw_logging::warning(
"Docker cleanup timed out, some resources may not have been removed",
),
}
}
@@ -468,7 +480,7 @@ pub async fn cleanup_containers(docker: &Docker) -> Result<(), String> {
match RUNNING_CONTAINERS.try_lock() {
Ok(containers) => containers.clone(),
Err(_) => {
logging::error("Could not acquire container lock for cleanup");
wrkflw_logging::error("Could not acquire container lock for cleanup");
vec![]
}
}
@@ -477,7 +489,7 @@ pub async fn cleanup_containers(docker: &Docker) -> Result<(), String> {
{
Ok(containers) => containers,
Err(_) => {
logging::error("Timeout while trying to get containers for cleanup");
wrkflw_logging::error("Timeout while trying to get containers for cleanup");
vec![]
}
};
@@ -486,7 +498,7 @@ pub async fn cleanup_containers(docker: &Docker) -> Result<(), String> {
return Ok(());
}
logging::info(&format!(
wrkflw_logging::info(&format!(
"Cleaning up {} containers",
containers_to_cleanup.len()
));
@@ -500,11 +512,14 @@ pub async fn cleanup_containers(docker: &Docker) -> Result<(), String> {
)
.await
{
Ok(Ok(_)) => logging::debug(&format!("Stopped container: {}", container_id)),
Ok(Err(e)) => {
logging::warning(&format!("Error stopping container {}: {}", container_id, e))
Ok(Ok(_)) => wrkflw_logging::debug(&format!("Stopped container: {}", container_id)),
Ok(Err(e)) => wrkflw_logging::warning(&format!(
"Error stopping container {}: {}",
container_id, e
)),
Err(_) => {
wrkflw_logging::warning(&format!("Timeout stopping container: {}", container_id))
}
Err(_) => logging::warning(&format!("Timeout stopping container: {}", container_id)),
}
// Then try to remove it
@@ -514,11 +529,14 @@ pub async fn cleanup_containers(docker: &Docker) -> Result<(), String> {
)
.await
{
Ok(Ok(_)) => logging::debug(&format!("Removed container: {}", container_id)),
Ok(Err(e)) => {
logging::warning(&format!("Error removing container {}: {}", container_id, e))
Ok(Ok(_)) => wrkflw_logging::debug(&format!("Removed container: {}", container_id)),
Ok(Err(e)) => wrkflw_logging::warning(&format!(
"Error removing container {}: {}",
container_id, e
)),
Err(_) => {
wrkflw_logging::warning(&format!("Timeout removing container: {}", container_id))
}
Err(_) => logging::warning(&format!("Timeout removing container: {}", container_id)),
}
// Always untrack the container whether or not we succeeded to avoid future cleanup attempts
@@ -536,7 +554,7 @@ pub async fn cleanup_networks(docker: &Docker) -> Result<(), String> {
match CREATED_NETWORKS.try_lock() {
Ok(networks) => networks.clone(),
Err(_) => {
logging::error("Could not acquire network lock for cleanup");
wrkflw_logging::error("Could not acquire network lock for cleanup");
vec![]
}
}
@@ -545,7 +563,7 @@ pub async fn cleanup_networks(docker: &Docker) -> Result<(), String> {
{
Ok(networks) => networks,
Err(_) => {
logging::error("Timeout while trying to get networks for cleanup");
wrkflw_logging::error("Timeout while trying to get networks for cleanup");
vec![]
}
};
@@ -554,7 +572,7 @@ pub async fn cleanup_networks(docker: &Docker) -> Result<(), String> {
return Ok(());
}
logging::info(&format!(
wrkflw_logging::info(&format!(
"Cleaning up {} networks",
networks_to_cleanup.len()
));
@@ -566,9 +584,13 @@ pub async fn cleanup_networks(docker: &Docker) -> Result<(), String> {
)
.await
{
Ok(Ok(_)) => logging::info(&format!("Successfully removed network: {}", network_id)),
Ok(Err(e)) => logging::error(&format!("Error removing network {}: {}", network_id, e)),
Err(_) => logging::warning(&format!("Timeout removing network: {}", network_id)),
Ok(Ok(_)) => {
wrkflw_logging::info(&format!("Successfully removed network: {}", network_id))
}
Ok(Err(e)) => {
wrkflw_logging::error(&format!("Error removing network {}: {}", network_id, e))
}
Err(_) => wrkflw_logging::warning(&format!("Timeout removing network: {}", network_id)),
}
// Always untrack the network whether or not we succeeded
@@ -599,7 +621,7 @@ pub async fn create_job_network(docker: &Docker) -> Result<String, ContainerErro
})?;
track_network(&network_id);
logging::info(&format!("Created Docker network: {}", network_id));
wrkflw_logging::info(&format!("Created Docker network: {}", network_id));
Ok(network_id)
}
@@ -615,7 +637,7 @@ impl ContainerRuntime for DockerRuntime {
volumes: &[(&Path, &Path)],
) -> Result<ContainerOutput, ContainerError> {
// Print detailed debugging info
logging::info(&format!("Docker: Running container with image: {}", image));
wrkflw_logging::info(&format!("Docker: Running container with image: {}", image));
// Add a global timeout for all Docker operations to prevent freezing
let timeout_duration = std::time::Duration::from_secs(360); // Increased outer timeout to 6 minutes
@@ -629,7 +651,7 @@ impl ContainerRuntime for DockerRuntime {
{
Ok(result) => result,
Err(_) => {
logging::error("Docker operation timed out after 360 seconds");
wrkflw_logging::error("Docker operation timed out after 360 seconds");
Err(ContainerError::ContainerExecution(
"Operation timed out".to_string(),
))
@@ -644,7 +666,7 @@ impl ContainerRuntime for DockerRuntime {
match tokio::time::timeout(timeout_duration, self.pull_image_inner(image)).await {
Ok(result) => result,
Err(_) => {
logging::warning(&format!(
wrkflw_logging::warning(&format!(
"Pull of image {} timed out, continuing with existing image",
image
));
@@ -662,7 +684,7 @@ impl ContainerRuntime for DockerRuntime {
{
Ok(result) => result,
Err(_) => {
logging::error(&format!(
wrkflw_logging::error(&format!(
"Building image {} timed out after 120 seconds",
tag
));
@@ -836,9 +858,9 @@ impl DockerRuntime {
// Convert command vector to Vec<String>
let cmd_vec: Vec<String> = cmd.iter().map(|&s| s.to_string()).collect();
logging::debug(&format!("Running command in Docker: {:?}", cmd_vec));
logging::debug(&format!("Environment: {:?}", env));
logging::debug(&format!("Working directory: {}", working_dir.display()));
wrkflw_logging::debug(&format!("Running command in Docker: {:?}", cmd_vec));
wrkflw_logging::debug(&format!("Environment: {:?}", env));
wrkflw_logging::debug(&format!("Working directory: {}", working_dir.display()));
// Determine platform-specific configurations
let is_windows_image = image.contains("windows")
@@ -973,7 +995,7 @@ impl DockerRuntime {
_ => -1,
},
Err(_) => {
logging::warning("Container wait operation timed out, treating as failure");
wrkflw_logging::warning("Container wait operation timed out, treating as failure");
-1
}
};
@@ -1003,7 +1025,7 @@ impl DockerRuntime {
}
}
} else {
logging::warning("Retrieving container logs timed out");
wrkflw_logging::warning("Retrieving container logs timed out");
}
// Clean up container with a timeout, but preserve on failure if configured
@@ -1016,7 +1038,7 @@ impl DockerRuntime {
untrack_container(&container.id);
} else {
// Container failed and we want to preserve it for debugging
logging::info(&format!(
wrkflw_logging::info(&format!(
"Preserving container {} for debugging (exit code: {}). Use 'docker exec -it {} bash' to inspect.",
container.id, exit_code, container.id
));
@@ -1026,13 +1048,13 @@ impl DockerRuntime {
// Log detailed information about the command execution for debugging
if exit_code != 0 {
logging::info(&format!(
wrkflw_logging::info(&format!(
"Docker command failed with exit code: {}",
exit_code
));
logging::debug(&format!("Failed command: {:?}", cmd));
logging::debug(&format!("Working directory: {}", working_dir.display()));
logging::debug(&format!("STDERR: {}", stderr));
wrkflw_logging::debug(&format!("Failed command: {:?}", cmd));
wrkflw_logging::debug(&format!("Working directory: {}", working_dir.display()));
wrkflw_logging::debug(&format!("STDERR: {}", stderr));
}
Ok(ContainerOutput {

View File

@@ -12,13 +12,14 @@ use thiserror::Error;
use crate::dependency;
use crate::docker;
use crate::environment;
use logging;
use matrix::MatrixCombination;
use models::gitlab::Pipeline;
use parser::gitlab::{self, parse_pipeline};
use parser::workflow::{self, parse_workflow, ActionInfo, Job, WorkflowDefinition};
use runtime::container::ContainerRuntime;
use runtime::emulation;
use crate::podman;
use wrkflw_logging;
use wrkflw_matrix::MatrixCombination;
use wrkflw_models::gitlab::Pipeline;
use wrkflw_parser::gitlab::{self, parse_pipeline};
use wrkflw_parser::workflow::{self, parse_workflow, ActionInfo, Job, WorkflowDefinition};
use wrkflw_runtime::container::ContainerRuntime;
use wrkflw_runtime::emulation;
#[allow(unused_variables, unused_assignments)]
/// Execute a GitHub Actions workflow file locally
@@ -26,8 +27,8 @@ pub async fn execute_workflow(
workflow_path: &Path,
config: ExecutionConfig,
) -> Result<ExecutionResult, ExecutionError> {
logging::info(&format!("Executing workflow: {}", workflow_path.display()));
logging::info(&format!("Runtime: {:?}", config.runtime_type));
wrkflw_logging::info(&format!("Executing workflow: {}", workflow_path.display()));
wrkflw_logging::info(&format!("Runtime: {:?}", config.runtime_type));
// Determine if this is a GitLab CI/CD pipeline or GitHub Actions workflow
let is_gitlab = is_gitlab_pipeline(workflow_path);
@@ -95,10 +96,10 @@ async fn execute_github_workflow(
// Add runtime mode to environment
env_context.insert(
"WRKFLW_RUNTIME_MODE".to_string(),
if config.runtime_type == RuntimeType::Emulation {
"emulation".to_string()
} else {
"docker".to_string()
match config.runtime_type {
RuntimeType::Emulation => "emulation".to_string(),
RuntimeType::Docker => "docker".to_string(),
RuntimeType::Podman => "podman".to_string(),
},
);
@@ -149,7 +150,7 @@ async fn execute_github_workflow(
// If there were failures, add detailed failure information to the result
if has_failures {
logging::error(&format!("Workflow execution failed:{}", failure_details));
wrkflw_logging::error(&format!("Workflow execution failed:{}", failure_details));
}
Ok(ExecutionResult {
@@ -167,7 +168,7 @@ async fn execute_gitlab_pipeline(
pipeline_path: &Path,
config: ExecutionConfig,
) -> Result<ExecutionResult, ExecutionError> {
logging::info("Executing GitLab CI/CD pipeline");
wrkflw_logging::info("Executing GitLab CI/CD pipeline");
// 1. Parse the GitLab pipeline file
let pipeline = parse_pipeline(pipeline_path)
@@ -195,10 +196,10 @@ async fn execute_gitlab_pipeline(
// Add runtime mode to environment
env_context.insert(
"WRKFLW_RUNTIME_MODE".to_string(),
if config.runtime_type == RuntimeType::Emulation {
"emulation".to_string()
} else {
"docker".to_string()
match config.runtime_type {
RuntimeType::Emulation => "emulation".to_string(),
RuntimeType::Docker => "docker".to_string(),
RuntimeType::Podman => "podman".to_string(),
},
);
@@ -243,7 +244,7 @@ async fn execute_gitlab_pipeline(
// If there were failures, add detailed failure information to the result
if has_failures {
logging::error(&format!("Pipeline execution failed:{}", failure_details));
wrkflw_logging::error(&format!("Pipeline execution failed:{}", failure_details));
}
Ok(ExecutionResult {
@@ -356,7 +357,7 @@ fn resolve_gitlab_dependencies(
Ok(execution_plan)
}
// Determine if Docker is available or fall back to emulation
// Determine if Docker/Podman is available or fall back to emulation
fn initialize_runtime(
runtime_type: RuntimeType,
preserve_containers_on_failure: bool,
@@ -368,7 +369,7 @@ fn initialize_runtime(
match docker::DockerRuntime::new_with_config(preserve_containers_on_failure) {
Ok(docker_runtime) => Ok(Box::new(docker_runtime)),
Err(e) => {
logging::error(&format!(
wrkflw_logging::error(&format!(
"Failed to initialize Docker runtime: {}, falling back to emulation mode",
e
));
@@ -376,7 +377,25 @@ fn initialize_runtime(
}
}
} else {
logging::error("Docker not available, falling back to emulation mode");
wrkflw_logging::error("Docker not available, falling back to emulation mode");
Ok(Box::new(emulation::EmulationRuntime::new()))
}
}
RuntimeType::Podman => {
if podman::is_available() {
// Handle the Result returned by PodmanRuntime::new()
match podman::PodmanRuntime::new_with_config(preserve_containers_on_failure) {
Ok(podman_runtime) => Ok(Box::new(podman_runtime)),
Err(e) => {
wrkflw_logging::error(&format!(
"Failed to initialize Podman runtime: {}, falling back to emulation mode",
e
));
Ok(Box::new(emulation::EmulationRuntime::new()))
}
}
} else {
wrkflw_logging::error("Podman not available, falling back to emulation mode");
Ok(Box::new(emulation::EmulationRuntime::new()))
}
}
@@ -387,6 +406,7 @@ fn initialize_runtime(
#[derive(Debug, Clone, PartialEq)]
pub enum RuntimeType {
Docker,
Podman,
Emulation,
}
@@ -557,7 +577,7 @@ async fn execute_job_with_matrix(
if let Some(if_condition) = &job.if_condition {
let should_run = evaluate_job_condition(if_condition, env_context, workflow);
if !should_run {
logging::info(&format!(
wrkflw_logging::info(&format!(
"⏭️ Skipping job '{}' due to condition: {}",
job_name, if_condition
));
@@ -574,11 +594,11 @@ async fn execute_job_with_matrix(
// Check if this is a matrix job
if let Some(matrix_config) = &job.matrix {
// Expand the matrix into combinations
let combinations = matrix::expand_matrix(matrix_config)
let combinations = wrkflw_matrix::expand_matrix(matrix_config)
.map_err(|e| ExecutionError::Execution(format!("Failed to expand matrix: {}", e)))?;
if combinations.is_empty() {
logging::info(&format!(
wrkflw_logging::info(&format!(
"Matrix job '{}' has no valid combinations",
job_name
));
@@ -586,7 +606,7 @@ async fn execute_job_with_matrix(
return Ok(Vec::new());
}
logging::info(&format!(
wrkflw_logging::info(&format!(
"Matrix job '{}' expanded to {} combinations",
job_name,
combinations.len()
@@ -632,6 +652,12 @@ async fn execute_job(ctx: JobExecutionContext<'_>) -> Result<JobResult, Executio
ExecutionError::Execution(format!("Job '{}' not found in workflow", ctx.job_name))
})?;
// Handle reusable workflow jobs (job-level 'uses')
if let Some(uses) = &job.uses {
return execute_reusable_workflow_job(&ctx, uses, job.with.as_ref(), job.secrets.as_ref())
.await;
}
// Clone context and add job-specific variables
let mut job_env = ctx.env_context.clone();
@@ -654,17 +680,20 @@ async fn execute_job(ctx: JobExecutionContext<'_>) -> Result<JobResult, Executio
})?;
// Copy project files to the job workspace directory
logging::info(&format!(
wrkflw_logging::info(&format!(
"Copying project files to job workspace: {}",
job_dir.path().display()
));
copy_directory_contents(&current_dir, job_dir.path())?;
logging::info(&format!("Executing job: {}", ctx.job_name));
wrkflw_logging::info(&format!("Executing job: {}", ctx.job_name));
let mut job_success = true;
// Execute job steps
// Determine runner image (default if not provided)
let runner_image_value = get_runner_image_from_opt(&job.runs_on);
for (idx, step) in job.steps.iter().enumerate() {
let step_result = execute_step(StepExecutionContext {
step,
@@ -673,7 +702,7 @@ async fn execute_job(ctx: JobExecutionContext<'_>) -> Result<JobResult, Executio
working_dir: job_dir.path(),
runtime: ctx.runtime,
workflow: ctx.workflow,
runner_image: &get_runner_image(&job.runs_on),
runner_image: &runner_image_value,
verbose: ctx.verbose,
matrix_combination: &None,
})
@@ -760,7 +789,8 @@ async fn execute_matrix_combinations(
if ctx.fail_fast && any_failed {
// Add skipped results for remaining combinations
for combination in chunk {
let combination_name = matrix::format_combination_name(ctx.job_name, combination);
let combination_name =
wrkflw_matrix::format_combination_name(ctx.job_name, combination);
results.push(JobResult {
name: combination_name,
status: JobStatus::Skipped,
@@ -798,7 +828,7 @@ async fn execute_matrix_combinations(
Err(e) => {
// On error, mark as failed and continue if not fail-fast
any_failed = true;
logging::error(&format!("Matrix job failed: {}", e));
wrkflw_logging::error(&format!("Matrix job failed: {}", e));
if ctx.fail_fast {
return Err(e);
@@ -822,9 +852,9 @@ async fn execute_matrix_job(
verbose: bool,
) -> Result<JobResult, ExecutionError> {
// Create the matrix-specific job name
let matrix_job_name = matrix::format_combination_name(job_name, combination);
let matrix_job_name = wrkflw_matrix::format_combination_name(job_name, combination);
logging::info(&format!("Executing matrix job: {}", matrix_job_name));
wrkflw_logging::info(&format!("Executing matrix job: {}", matrix_job_name));
// Clone the environment and add matrix-specific values
let mut job_env = base_env_context.clone();
@@ -850,17 +880,20 @@ async fn execute_matrix_job(
})?;
// Copy project files to the job workspace directory
logging::info(&format!(
wrkflw_logging::info(&format!(
"Copying project files to job workspace: {}",
job_dir.path().display()
));
copy_directory_contents(&current_dir, job_dir.path())?;
let job_success = if job_template.steps.is_empty() {
logging::warning(&format!("Job '{}' has no steps", matrix_job_name));
wrkflw_logging::warning(&format!("Job '{}' has no steps", matrix_job_name));
true
} else {
// Execute each step
// Determine runner image (default if not provided)
let runner_image_value = get_runner_image_from_opt(&job_template.runs_on);
for (idx, step) in job_template.steps.iter().enumerate() {
match execute_step(StepExecutionContext {
step,
@@ -869,7 +902,7 @@ async fn execute_matrix_job(
working_dir: job_dir.path(),
runtime,
workflow,
runner_image: &get_runner_image(&job_template.runs_on),
runner_image: &runner_image_value,
verbose,
matrix_combination: &Some(combination.values.clone()),
})
@@ -951,7 +984,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
.unwrap_or_else(|| format!("Step {}", ctx.step_idx + 1));
if ctx.verbose {
logging::info(&format!(" Executing step: {}", step_name));
wrkflw_logging::info(&format!(" Executing step: {}", step_name));
}
// Prepare step environment
@@ -1053,7 +1086,9 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
// Special handling for Rust actions
if uses.starts_with("actions-rs/") {
logging::info("🔄 Detected Rust action - using system Rust installation");
wrkflw_logging::info(
"🔄 Detected Rust action - using system Rust installation",
);
// For toolchain action, verify Rust is installed
if uses.starts_with("actions-rs/toolchain@") {
@@ -1063,7 +1098,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
.map(|output| String::from_utf8_lossy(&output.stdout).to_string())
.unwrap_or_else(|_| "not found".to_string());
logging::info(&format!("🔄 Using system Rust: {}", rustc_version.trim()));
wrkflw_logging::info(&format!(
"🔄 Using system Rust: {}",
rustc_version.trim()
));
// Return success since we're using system Rust
return Ok(StepResult {
@@ -1081,7 +1119,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
.map(|output| String::from_utf8_lossy(&output.stdout).to_string())
.unwrap_or_else(|_| "not found".to_string());
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Using system Rust/Cargo: {}",
cargo_version.trim()
));
@@ -1089,7 +1127,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
// Get the command from the 'with' parameters
if let Some(with_params) = &ctx.step.with {
if let Some(command) = with_params.get("command") {
logging::info(&format!("🔄 Found command parameter: {}", command));
wrkflw_logging::info(&format!(
"🔄 Found command parameter: {}",
command
));
// Build the actual command
let mut real_command = format!("cargo {}", command);
@@ -1099,7 +1140,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
if !args.is_empty() {
// Resolve GitHub-style variables in args
let resolved_args = if args.contains("${{") {
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Resolving workflow variables in: {}",
args
));
@@ -1113,7 +1154,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
let re_pattern =
regex::Regex::new(r"\$\{\{\s*([^}]+)\s*\}\}")
.unwrap_or_else(|_| {
logging::error(
wrkflw_logging::error(
"Failed to create regex pattern",
);
regex::Regex::new(r"\$\{\{.*?\}\}").unwrap()
@@ -1121,7 +1162,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
let resolved =
re_pattern.replace_all(&resolved, "").to_string();
logging::info(&format!("🔄 Resolved to: {}", resolved));
wrkflw_logging::info(&format!(
"🔄 Resolved to: {}",
resolved
));
resolved.trim().to_string()
} else {
@@ -1137,7 +1181,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
}
}
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Running actual command: {}",
real_command
));
@@ -1219,13 +1263,13 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
.cloned()
.unwrap_or_else(|| "not set".to_string());
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"WRKFLW_HIDE_ACTION_MESSAGES value: {}",
hide_action_value
));
let hide_messages = hide_action_value == "true";
logging::debug(&format!("Should hide messages: {}", hide_messages));
wrkflw_logging::debug(&format!("Should hide messages: {}", hide_messages));
// Only log a message to the console if we're showing action messages
if !hide_messages {
@@ -1242,7 +1286,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
// Common GitHub action pattern: has a 'command' parameter
if let Some(cmd) = with_params.get("command") {
if ctx.verbose {
logging::info(&format!("🔄 Found command parameter: {}", cmd));
wrkflw_logging::info(&format!(
"🔄 Found command parameter: {}",
cmd
));
}
// Convert to real command based on action type patterns
@@ -1282,7 +1329,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
if !args.is_empty() {
// Resolve GitHub-style variables in args
let resolved_args = if args.contains("${{") {
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Resolving workflow variables in: {}",
args
));
@@ -1295,7 +1342,7 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
let re_pattern =
regex::Regex::new(r"\$\{\{\s*([^}]+)\s*\}\}")
.unwrap_or_else(|_| {
logging::error(
wrkflw_logging::error(
"Failed to create regex pattern",
);
regex::Regex::new(r"\$\{\{.*?\}\}").unwrap()
@@ -1303,7 +1350,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
let resolved =
re_pattern.replace_all(&resolved, "").to_string();
logging::info(&format!("🔄 Resolved to: {}", resolved));
wrkflw_logging::info(&format!(
"🔄 Resolved to: {}",
resolved
));
resolved.trim().to_string()
} else {
@@ -1322,7 +1372,10 @@ async fn execute_step(ctx: StepExecutionContext<'_>) -> Result<StepResult, Execu
if should_run_real_command && !real_command_parts.is_empty() {
// Build a final command string
let command_str = real_command_parts.join(" ");
logging::info(&format!("🔄 Running actual command: {}", command_str));
wrkflw_logging::info(&format!(
"🔄 Running actual command: {}",
command_str
));
// Replace the emulated command with a shell command to execute our command
cmd.clear();
@@ -1709,6 +1762,189 @@ fn get_runner_image(runs_on: &str) -> String {
.to_string()
}
fn get_runner_image_from_opt(runs_on: &Option<Vec<String>>) -> String {
let default = "ubuntu-latest";
let ro = runs_on
.as_ref()
.and_then(|vec| vec.first())
.map(|s| s.as_str())
.unwrap_or(default);
get_runner_image(ro)
}
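This helper exists because `runs_on` is now an `Option<Vec<String>>`: per the `runs-on` fix in this changeset, both the string and array forms allowed by the GitHub Actions spec deserialize into a vector, and the executor uses the first element to select a runner image. A hypothetical workflow fragment exercising both forms (job names and steps are illustrative):

```yaml
jobs:
  build-string:
    # string form
    runs-on: ubuntu-latest
    steps:
      - run: echo "hello"
  build-array:
    # array form: the executor picks the first element to choose a runner image
    runs-on: [self-hosted, ubuntu, small]
    steps:
      - run: echo "hello"
```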
async fn execute_reusable_workflow_job(
ctx: &JobExecutionContext<'_>,
uses: &str,
with: Option<&HashMap<String, String>>,
secrets: Option<&serde_yaml::Value>,
) -> Result<JobResult, ExecutionError> {
wrkflw_logging::info(&format!(
"Executing reusable workflow job '{}' -> {}",
ctx.job_name, uses
));
// Resolve the called workflow file path
enum UsesRef<'a> {
LocalPath(&'a str),
Remote {
owner: String,
repo: String,
path: String,
r#ref: String,
},
}
let uses_ref = if uses.starts_with("./") || uses.starts_with('/') {
UsesRef::LocalPath(uses)
} else {
// Expect format owner/repo/path/to/workflow.yml@ref
let parts: Vec<&str> = uses.split('@').collect();
if parts.len() != 2 {
return Err(ExecutionError::Execution(format!(
"Invalid reusable workflow reference: {}",
uses
)));
}
let left = parts[0];
let r#ref = parts[1].to_string();
let mut segs = left.splitn(3, '/');
let owner = segs.next().unwrap_or("").to_string();
let repo = segs.next().unwrap_or("").to_string();
let path = segs.next().unwrap_or("").to_string();
if owner.is_empty() || repo.is_empty() || path.is_empty() {
return Err(ExecutionError::Execution(format!(
"Invalid reusable workflow reference: {}",
uses
)));
}
UsesRef::Remote {
owner,
repo,
path,
r#ref,
}
};
// Load workflow file
let workflow_path = match uses_ref {
UsesRef::LocalPath(p) => {
// Resolve relative to current directory
let current_dir = std::env::current_dir().map_err(|e| {
ExecutionError::Execution(format!("Failed to get current dir: {}", e))
})?;
let path = current_dir.join(p);
if !path.exists() {
return Err(ExecutionError::Execution(format!(
"Reusable workflow not found at path: {}",
path.display()
)));
}
path
}
UsesRef::Remote {
owner,
repo,
path,
r#ref,
} => {
// Clone minimal repository and checkout ref
let tempdir = tempfile::tempdir().map_err(|e| {
ExecutionError::Execution(format!("Failed to create temp dir: {}", e))
})?;
let repo_url = format!("https://github.com/{}/{}.git", owner, repo);
// git clone
let status = Command::new("git")
.arg("clone")
.arg("--depth")
.arg("1")
.arg("--branch")
.arg(&r#ref)
.arg(&repo_url)
.arg(tempdir.path())
.status()
.map_err(|e| ExecutionError::Execution(format!("Failed to execute git: {}", e)))?;
if !status.success() {
return Err(ExecutionError::Execution(format!(
"Failed to clone {}@{}",
repo_url, r#ref
)));
}
let joined = tempdir.path().join(path);
if !joined.exists() {
return Err(ExecutionError::Execution(format!(
"Reusable workflow file not found in repo: {}",
joined.display()
)));
}
joined
}
};
// Parse called workflow
let called = parse_workflow(&workflow_path)?;
// Create child env context
let mut child_env = ctx.env_context.clone();
if let Some(with_map) = with {
for (k, v) in with_map {
child_env.insert(format!("INPUT_{}", k.to_uppercase()), v.clone());
}
}
if let Some(secrets_val) = secrets {
if let Some(map) = secrets_val.as_mapping() {
for (k, v) in map {
if let (Some(key), Some(value)) = (k.as_str(), v.as_str()) {
child_env.insert(format!("SECRET_{}", key.to_uppercase()), value.to_string());
}
}
}
}
// Execute called workflow
let plan = dependency::resolve_dependencies(&called)?;
let mut all_results = Vec::new();
let mut any_failed = false;
for batch in plan {
let results =
execute_job_batch(&batch, &called, ctx.runtime, &child_env, ctx.verbose).await?;
for r in &results {
if r.status == JobStatus::Failure {
any_failed = true;
}
}
all_results.extend(results);
}
// Summarize into a single JobResult
let mut logs = String::new();
logs.push_str(&format!("Called workflow: {}\n", workflow_path.display()));
for r in &all_results {
logs.push_str(&format!("- {}: {:?}\n", r.name, r.status));
}
// Represent as one summary step for UI
let summary_step = StepResult {
name: format!("Run reusable workflow: {}", uses),
status: if any_failed {
StepStatus::Failure
} else {
StepStatus::Success
},
output: logs.clone(),
};
Ok(JobResult {
name: ctx.job_name.to_string(),
status: if any_failed {
JobStatus::Failure
} else {
JobStatus::Success
},
steps: vec![summary_step],
logs,
})
}
#[allow(dead_code)]
async fn prepare_runner_image(
image: &str,
@@ -1717,7 +1953,7 @@ async fn prepare_runner_image(
) -> Result<(), ExecutionError> {
// Try to pull the image first
if let Err(e) = runtime.pull_image(image).await {
logging::warning(&format!("Failed to pull image {}: {}", image, e));
wrkflw_logging::warning(&format!("Failed to pull image {}: {}", image, e));
}
// Check if this is a language-specific runner
@@ -1730,7 +1966,7 @@ async fn prepare_runner_image(
.map_err(|e| ExecutionError::Runtime(e.to_string()))
{
if verbose {
logging::info(&format!("Using customized image: {}", custom_image));
wrkflw_logging::info(&format!("Using customized image: {}", custom_image));
}
return Ok(());
}
@@ -2008,7 +2244,7 @@ fn evaluate_job_condition(
env_context: &HashMap<String, String>,
workflow: &WorkflowDefinition,
) -> bool {
logging::debug(&format!("Evaluating condition: {}", condition));
wrkflw_logging::debug(&format!("Evaluating condition: {}", condition));
// For now, implement basic pattern matching for common conditions
// TODO: Implement a full GitHub Actions expression evaluator
@@ -2031,14 +2267,14 @@ fn evaluate_job_condition(
if condition.contains("needs.") && condition.contains(".outputs.") {
// For now, simulate that outputs are available but empty
// This means conditions like needs.changes.outputs.source-code == 'true' will be false
logging::debug(
wrkflw_logging::debug(
"Evaluating needs.outputs condition - defaulting to false for local execution",
);
return false;
}
// Default to true for unknown conditions to avoid breaking workflows
logging::warning(&format!(
wrkflw_logging::warning(&format!(
"Unknown condition pattern: '{}' - defaulting to true",
condition
));


@@ -1,8 +1,8 @@
use chrono::Utc;
use matrix::MatrixCombination;
use parser::workflow::WorkflowDefinition;
use serde_yaml::Value;
use std::{collections::HashMap, fs, io, path::Path};
use wrkflw_matrix::MatrixCombination;
use wrkflw_parser::workflow::WorkflowDefinition;
pub fn setup_github_environment_files(workspace_dir: &Path) -> io::Result<()> {
// Create necessary directories


@@ -6,6 +6,7 @@ pub mod dependency;
pub mod docker;
pub mod engine;
pub mod environment;
pub mod podman;
pub mod substitution;
// Re-export public items


@@ -0,0 +1,877 @@
use async_trait::async_trait;
use once_cell::sync::Lazy;
use std::collections::HashMap;
use std::path::Path;
use std::process::Stdio;
use std::sync::Mutex;
use tempfile;
use tokio::process::Command;
use wrkflw_logging;
use wrkflw_runtime::container::{ContainerError, ContainerOutput, ContainerRuntime};
use wrkflw_utils;
use wrkflw_utils::fd;
static RUNNING_CONTAINERS: Lazy<Mutex<Vec<String>>> = Lazy::new(|| Mutex::new(Vec::new()));
// Map to track customized images for a job
#[allow(dead_code)]
static CUSTOMIZED_IMAGES: Lazy<Mutex<HashMap<String, String>>> =
Lazy::new(|| Mutex::new(HashMap::new()));
pub struct PodmanRuntime {
preserve_containers_on_failure: bool,
}
impl PodmanRuntime {
pub fn new() -> Result<Self, ContainerError> {
Self::new_with_config(false)
}
pub fn new_with_config(preserve_containers_on_failure: bool) -> Result<Self, ContainerError> {
// Check if podman command is available
if !is_available() {
return Err(ContainerError::ContainerStart(
"Podman is not available on this system".to_string(),
));
}
Ok(PodmanRuntime {
preserve_containers_on_failure,
})
}
// Add a method to store and retrieve customized images (e.g., with Python installed)
#[allow(dead_code)]
pub fn get_customized_image(base_image: &str, customization: &str) -> Option<String> {
let key = format!("{}:{}", base_image, customization);
match CUSTOMIZED_IMAGES.lock() {
Ok(images) => images.get(&key).cloned(),
Err(e) => {
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
None
}
}
}
#[allow(dead_code)]
pub fn set_customized_image(base_image: &str, customization: &str, new_image: &str) {
let key = format!("{}:{}", base_image, customization);
if let Err(e) = CUSTOMIZED_IMAGES.lock().map(|mut images| {
images.insert(key, new_image.to_string());
}) {
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
}
}
/// Find a customized image key by prefix
#[allow(dead_code)]
pub fn find_customized_image_key(image: &str, prefix: &str) -> Option<String> {
let image_keys = match CUSTOMIZED_IMAGES.lock() {
Ok(keys) => keys,
Err(e) => {
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
return None;
}
};
// Look for any key that starts with the prefix
for (key, _) in image_keys.iter() {
if key.starts_with(prefix) {
return Some(key.clone());
}
}
None
}
/// Get a customized image with language-specific dependencies
pub fn get_language_specific_image(
base_image: &str,
language: &str,
version: Option<&str>,
) -> Option<String> {
let key = match (language, version) {
("python", Some(ver)) => format!("python:{}", ver),
("node", Some(ver)) => format!("node:{}", ver),
("java", Some(ver)) => format!("eclipse-temurin:{}", ver),
("go", Some(ver)) => format!("golang:{}", ver),
("dotnet", Some(ver)) => format!("mcr.microsoft.com/dotnet/sdk:{}", ver),
("rust", Some(ver)) => format!("rust:{}", ver),
(lang, Some(ver)) => format!("{}:{}", lang, ver),
(lang, None) => lang.to_string(),
};
match CUSTOMIZED_IMAGES.lock() {
Ok(images) => images.get(&key).cloned(),
Err(e) => {
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
None
}
}
}
/// Set a customized image with language-specific dependencies
pub fn set_language_specific_image(
base_image: &str,
language: &str,
version: Option<&str>,
new_image: &str,
) {
let key = match (language, version) {
("python", Some(ver)) => format!("python:{}", ver),
("node", Some(ver)) => format!("node:{}", ver),
("java", Some(ver)) => format!("eclipse-temurin:{}", ver),
("go", Some(ver)) => format!("golang:{}", ver),
("dotnet", Some(ver)) => format!("mcr.microsoft.com/dotnet/sdk:{}", ver),
("rust", Some(ver)) => format!("rust:{}", ver),
(lang, Some(ver)) => format!("{}:{}", lang, ver),
(lang, None) => lang.to_string(),
};
if let Err(e) = CUSTOMIZED_IMAGES.lock().map(|mut images| {
images.insert(key, new_image.to_string());
}) {
wrkflw_logging::error(&format!("Failed to acquire lock: {}", e));
}
}
/// Execute a podman command with proper error handling and timeout
async fn execute_podman_command(
&self,
args: &[&str],
input: Option<&str>,
) -> Result<ContainerOutput, ContainerError> {
let timeout_duration = std::time::Duration::from_secs(360); // 6 minutes timeout
let result = tokio::time::timeout(timeout_duration, async {
let mut cmd = Command::new("podman");
cmd.args(args);
if input.is_some() {
cmd.stdin(Stdio::piped());
}
cmd.stdout(Stdio::piped()).stderr(Stdio::piped());
wrkflw_logging::debug(&format!(
"Running Podman command: podman {}",
args.join(" ")
));
let mut child = cmd.spawn().map_err(|e| {
ContainerError::ContainerStart(format!("Failed to spawn podman command: {}", e))
})?;
// Send input if provided
if let Some(input_data) = input {
if let Some(stdin) = child.stdin.take() {
use tokio::io::AsyncWriteExt;
let mut stdin = stdin;
stdin.write_all(input_data.as_bytes()).await.map_err(|e| {
ContainerError::ContainerExecution(format!(
"Failed to write to stdin: {}",
e
))
})?;
stdin.shutdown().await.map_err(|e| {
ContainerError::ContainerExecution(format!("Failed to close stdin: {}", e))
})?;
}
}
let output = child.wait_with_output().await.map_err(|e| {
ContainerError::ContainerExecution(format!("Podman command failed: {}", e))
})?;
Ok(ContainerOutput {
stdout: String::from_utf8_lossy(&output.stdout).to_string(),
stderr: String::from_utf8_lossy(&output.stderr).to_string(),
exit_code: output.status.code().unwrap_or(-1),
})
})
.await;
match result {
Ok(output) => output,
Err(_) => {
wrkflw_logging::error("Podman operation timed out after 360 seconds");
Err(ContainerError::ContainerExecution(
"Operation timed out".to_string(),
))
}
}
}
}
pub fn is_available() -> bool {
// Use a very short timeout for the entire availability check
let overall_timeout = std::time::Duration::from_secs(3);
// Spawn a thread with the timeout to prevent blocking the main thread
let handle = std::thread::spawn(move || {
// Use safe FD redirection utility to suppress Podman error messages
match fd::with_stderr_to_null(|| {
// First, check if podman CLI is available as a quick test
if cfg!(target_os = "linux") || cfg!(target_os = "macos") {
// Try a simple podman version command with a short timeout
let process = std::process::Command::new("podman")
.arg("version")
.arg("--format")
.arg("{{.Version}}")
.stdout(std::process::Stdio::null())
.stderr(std::process::Stdio::null())
.spawn();
match process {
Ok(mut child) => {
// Set a very short timeout for the process
let status = std::thread::scope(|_| {
// Try to wait for a short time
for _ in 0..10 {
match child.try_wait() {
Ok(Some(status)) => return status.success(),
Ok(None) => {
std::thread::sleep(std::time::Duration::from_millis(100))
}
Err(_) => return false,
}
}
// Kill it if it takes too long
let _ = child.kill();
false
});
if !status {
return false;
}
}
Err(_) => {
wrkflw_logging::debug("Podman CLI is not available");
return false;
}
}
}
// Try to run a simple podman command to check if the daemon is responsive
let runtime = match tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
{
Ok(rt) => rt,
Err(e) => {
wrkflw_logging::error(&format!(
"Failed to create runtime for Podman availability check: {}",
e
));
return false;
}
};
runtime.block_on(async {
match tokio::time::timeout(std::time::Duration::from_secs(2), async {
let mut cmd = Command::new("podman");
cmd.args(["info", "--format", "{{.Host.Hostname}}"]);
cmd.stdout(Stdio::null()).stderr(Stdio::null());
match tokio::time::timeout(std::time::Duration::from_secs(1), cmd.output())
.await
{
Ok(Ok(output)) => {
if output.status.success() {
true
} else {
wrkflw_logging::debug("Podman info command failed");
false
}
}
Ok(Err(e)) => {
wrkflw_logging::debug(&format!("Podman info command error: {}", e));
false
}
Err(_) => {
wrkflw_logging::debug("Podman info command timed out after 1 second");
false
}
}
})
.await
{
Ok(result) => result,
Err(_) => {
wrkflw_logging::debug("Podman availability check timed out");
false
}
}
})
}) {
Ok(result) => result,
Err(_) => {
wrkflw_logging::debug(
"Failed to redirect stderr when checking Podman availability",
);
false
}
}
});
// Manual implementation of join with timeout
let start = std::time::Instant::now();
while start.elapsed() < overall_timeout {
if handle.is_finished() {
return match handle.join() {
Ok(result) => result,
Err(_) => {
wrkflw_logging::warning("Podman availability check thread panicked");
false
}
};
}
std::thread::sleep(std::time::Duration::from_millis(50));
}
wrkflw_logging::warning(
"Podman availability check timed out, assuming Podman is not available",
);
false
}
// Add container to tracking
pub fn track_container(id: &str) {
if let Ok(mut containers) = RUNNING_CONTAINERS.lock() {
containers.push(id.to_string());
}
}
// Remove container from tracking
pub fn untrack_container(id: &str) {
if let Ok(mut containers) = RUNNING_CONTAINERS.lock() {
containers.retain(|c| c != id);
}
}
// Clean up all tracked resources
pub async fn cleanup_resources() {
// Use a global timeout for the entire cleanup process
let cleanup_timeout = std::time::Duration::from_secs(5);
match tokio::time::timeout(cleanup_timeout, cleanup_containers()).await {
Ok(result) => {
if let Err(e) = result {
wrkflw_logging::error(&format!("Error during container cleanup: {}", e));
}
}
Err(_) => wrkflw_logging::warning(
"Podman cleanup timed out, some resources may not have been removed",
),
}
}
// Clean up all tracked containers
pub async fn cleanup_containers() -> Result<(), String> {
// Getting the containers to clean up should not take a long time
let containers_to_cleanup =
match tokio::time::timeout(std::time::Duration::from_millis(500), async {
match RUNNING_CONTAINERS.try_lock() {
Ok(containers) => containers.clone(),
Err(_) => {
wrkflw_logging::error("Could not acquire container lock for cleanup");
vec![]
}
}
})
.await
{
Ok(containers) => containers,
Err(_) => {
wrkflw_logging::error("Timeout while trying to get containers for cleanup");
vec![]
}
};
if containers_to_cleanup.is_empty() {
return Ok(());
}
wrkflw_logging::info(&format!(
"Cleaning up {} containers",
containers_to_cleanup.len()
));
// Process each container with a timeout
for container_id in containers_to_cleanup {
// First try to stop the container
let stop_result = tokio::time::timeout(
std::time::Duration::from_millis(1000),
Command::new("podman")
.args(["stop", &container_id])
.stdout(Stdio::null())
.stderr(Stdio::null())
.output(),
)
.await;
match stop_result {
Ok(Ok(output)) => {
if output.status.success() {
wrkflw_logging::debug(&format!("Stopped container: {}", container_id));
} else {
wrkflw_logging::warning(&format!("Error stopping container {}", container_id));
}
}
Ok(Err(e)) => wrkflw_logging::warning(&format!(
"Error stopping container {}: {}",
container_id, e
)),
Err(_) => {
wrkflw_logging::warning(&format!("Timeout stopping container: {}", container_id))
}
}
// Then try to remove it
let remove_result = tokio::time::timeout(
std::time::Duration::from_millis(1000),
Command::new("podman")
.args(["rm", &container_id])
.stdout(Stdio::null())
.stderr(Stdio::null())
.output(),
)
.await;
match remove_result {
Ok(Ok(output)) => {
if output.status.success() {
wrkflw_logging::debug(&format!("Removed container: {}", container_id));
} else {
wrkflw_logging::warning(&format!("Error removing container {}", container_id));
}
}
Ok(Err(e)) => wrkflw_logging::warning(&format!(
"Error removing container {}: {}",
container_id, e
)),
Err(_) => {
wrkflw_logging::warning(&format!("Timeout removing container: {}", container_id))
}
}
// Always untrack the container whether or not we succeeded to avoid future cleanup attempts
untrack_container(&container_id);
}
Ok(())
}
#[async_trait]
impl ContainerRuntime for PodmanRuntime {
async fn run_container(
&self,
image: &str,
cmd: &[&str],
env_vars: &[(&str, &str)],
working_dir: &Path,
volumes: &[(&Path, &Path)],
) -> Result<ContainerOutput, ContainerError> {
// Print detailed debugging info
wrkflw_logging::info(&format!("Podman: Running container with image: {}", image));
let timeout_duration = std::time::Duration::from_secs(360); // 6 minutes timeout
// Run the entire container operation with a timeout
match tokio::time::timeout(
timeout_duration,
self.run_container_inner(image, cmd, env_vars, working_dir, volumes),
)
.await
{
Ok(result) => result,
Err(_) => {
wrkflw_logging::error("Podman operation timed out after 360 seconds");
Err(ContainerError::ContainerExecution(
"Operation timed out".to_string(),
))
}
}
}
async fn pull_image(&self, image: &str) -> Result<(), ContainerError> {
// Add a timeout for pull operations
let timeout_duration = std::time::Duration::from_secs(30);
match tokio::time::timeout(timeout_duration, self.pull_image_inner(image)).await {
Ok(result) => result,
Err(_) => {
wrkflw_logging::warning(&format!(
"Pull of image {} timed out, continuing with existing image",
image
));
// Return success to allow continuing with existing image
Ok(())
}
}
}
async fn build_image(&self, dockerfile: &Path, tag: &str) -> Result<(), ContainerError> {
// Add a timeout for build operations
let timeout_duration = std::time::Duration::from_secs(120); // 2 minutes timeout for builds
match tokio::time::timeout(timeout_duration, self.build_image_inner(dockerfile, tag)).await
{
Ok(result) => result,
Err(_) => {
wrkflw_logging::error(&format!(
"Building image {} timed out after 120 seconds",
tag
));
Err(ContainerError::ImageBuild(
"Operation timed out".to_string(),
))
}
}
}
async fn prepare_language_environment(
&self,
language: &str,
version: Option<&str>,
additional_packages: Option<Vec<String>>,
) -> Result<String, ContainerError> {
// Check if we already have a customized image for this language and version
let key = format!("{}-{}", language, version.unwrap_or("latest"));
if let Some(customized_image) = Self::get_language_specific_image("", language, version) {
return Ok(customized_image);
}
// Create a temporary Dockerfile for customization
let temp_dir = tempfile::tempdir().map_err(|e| {
ContainerError::ContainerStart(format!("Failed to create temp directory: {}", e))
})?;
let dockerfile_path = temp_dir.path().join("Dockerfile");
let mut dockerfile_content = String::new();
// Add language-specific setup based on the language
match language {
"python" => {
let base_image =
version.map_or("python:3.11-slim".to_string(), |v| format!("python:{}", v));
dockerfile_content.push_str(&format!("FROM {}\n\n", base_image));
dockerfile_content.push_str(
"RUN apt-get update && apt-get install -y --no-install-recommends \\\n",
);
dockerfile_content.push_str(" build-essential \\\n");
dockerfile_content.push_str(" && rm -rf /var/lib/apt/lists/*\n");
if let Some(packages) = additional_packages {
for package in packages {
dockerfile_content.push_str(&format!("RUN pip install {}\n", package));
}
}
}
"node" => {
let base_image =
version.map_or("node:20-slim".to_string(), |v| format!("node:{}", v));
dockerfile_content.push_str(&format!("FROM {}\n\n", base_image));
dockerfile_content.push_str(
"RUN apt-get update && apt-get install -y --no-install-recommends \\\n",
);
dockerfile_content.push_str(" build-essential \\\n");
dockerfile_content.push_str(" && rm -rf /var/lib/apt/lists/*\n");
if let Some(packages) = additional_packages {
for package in packages {
dockerfile_content.push_str(&format!("RUN npm install -g {}\n", package));
}
}
}
"java" => {
let base_image = version.map_or("eclipse-temurin:17-jdk".to_string(), |v| {
format!("eclipse-temurin:{}", v)
});
dockerfile_content.push_str(&format!("FROM {}\n\n", base_image));
dockerfile_content.push_str(
"RUN apt-get update && apt-get install -y --no-install-recommends \\\n",
);
dockerfile_content.push_str(" maven \\\n");
dockerfile_content.push_str(" && rm -rf /var/lib/apt/lists/*\n");
}
"go" => {
let base_image =
version.map_or("golang:1.21-slim".to_string(), |v| format!("golang:{}", v));
dockerfile_content.push_str(&format!("FROM {}\n\n", base_image));
dockerfile_content.push_str(
"RUN apt-get update && apt-get install -y --no-install-recommends \\\n",
);
dockerfile_content.push_str(" git \\\n");
dockerfile_content.push_str(" && rm -rf /var/lib/apt/lists/*\n");
if let Some(packages) = additional_packages {
for package in packages {
dockerfile_content.push_str(&format!("RUN go install {}\n", package));
}
}
}
"dotnet" => {
let base_image = version
.map_or("mcr.microsoft.com/dotnet/sdk:7.0".to_string(), |v| {
format!("mcr.microsoft.com/dotnet/sdk:{}", v)
});
dockerfile_content.push_str(&format!("FROM {}\n\n", base_image));
if let Some(packages) = additional_packages {
for package in packages {
dockerfile_content
.push_str(&format!("RUN dotnet tool install -g {}\n", package));
}
}
}
"rust" => {
let base_image =
version.map_or("rust:latest".to_string(), |v| format!("rust:{}", v));
dockerfile_content.push_str(&format!("FROM {}\n\n", base_image));
dockerfile_content.push_str(
"RUN apt-get update && apt-get install -y --no-install-recommends \\\n",
);
dockerfile_content.push_str(" build-essential \\\n");
dockerfile_content.push_str(" && rm -rf /var/lib/apt/lists/*\n");
if let Some(packages) = additional_packages {
for package in packages {
dockerfile_content.push_str(&format!("RUN cargo install {}\n", package));
}
}
}
_ => {
return Err(ContainerError::ContainerStart(format!(
"Unsupported language: {}",
language
)));
}
}
// Write the Dockerfile
std::fs::write(&dockerfile_path, dockerfile_content).map_err(|e| {
ContainerError::ContainerStart(format!("Failed to write Dockerfile: {}", e))
})?;
// Build the customized image
let image_tag = format!("wrkflw-{}-{}", language, version.unwrap_or("latest"));
self.build_image(&dockerfile_path, &image_tag).await?;
// Store the customized image
Self::set_language_specific_image("", language, version, &image_tag);
Ok(image_tag)
}
}
// Implementation of internal methods
impl PodmanRuntime {
async fn run_container_inner(
&self,
image: &str,
cmd: &[&str],
env_vars: &[(&str, &str)],
working_dir: &Path,
volumes: &[(&Path, &Path)],
) -> Result<ContainerOutput, ContainerError> {
wrkflw_logging::debug(&format!("Running command in Podman: {:?}", cmd));
wrkflw_logging::debug(&format!("Environment: {:?}", env_vars));
wrkflw_logging::debug(&format!("Working directory: {}", working_dir.display()));
// Generate a unique container name
let container_name = format!("wrkflw-{}", uuid::Uuid::new_v4());
// Build the podman run command and store temporary strings
let working_dir_str = working_dir.to_string_lossy().to_string();
let mut env_strings = Vec::new();
let mut volume_strings = Vec::new();
// Prepare environment variable strings
for (key, value) in env_vars {
env_strings.push(format!("{}={}", key, value));
}
// Prepare volume mount strings
for (host_path, container_path) in volumes {
volume_strings.push(format!(
"{}:{}",
host_path.to_string_lossy(),
container_path.to_string_lossy()
));
}
let mut args = vec!["run", "--name", &container_name, "-w", &working_dir_str];
// Only use --rm if we don't want to preserve containers on failure
// When preserve_containers_on_failure is true, we skip --rm so failed containers remain
if !self.preserve_containers_on_failure {
args.insert(1, "--rm"); // Insert after "run"
}
// Add environment variables
for env_string in &env_strings {
args.push("-e");
args.push(env_string);
}
// Add volume mounts
for volume_string in &volume_strings {
args.push("-v");
args.push(volume_string);
}
// Add the image
args.push(image);
// Add the command
args.extend(cmd);
// Track the container (even though we use --rm, track it for consistency)
track_container(&container_name);
// Execute the command
let result = self.execute_podman_command(&args, None).await;
// Handle container cleanup based on result and settings
match &result {
Ok(output) => {
if output.exit_code == 0 {
// Success - always clean up successful containers
if self.preserve_containers_on_failure {
// We didn't use --rm, so manually remove successful container
let cleanup_result = tokio::time::timeout(
std::time::Duration::from_millis(1000),
Command::new("podman")
.args(["rm", &container_name])
.stdout(Stdio::null())
.stderr(Stdio::null())
.output(),
)
.await;
match cleanup_result {
Ok(Ok(cleanup_output)) => {
if !cleanup_output.status.success() {
wrkflw_logging::debug(&format!(
"Failed to remove successful container {}",
container_name
));
}
}
_ => wrkflw_logging::debug(&format!(
"Timeout removing successful container {}",
container_name
)),
}
}
// If not preserving, container was auto-removed with --rm
untrack_container(&container_name);
} else {
// Failed container
if self.preserve_containers_on_failure {
// Failed and we want to preserve - don't clean up but untrack from auto-cleanup
wrkflw_logging::info(&format!(
"Preserving failed container {} for debugging (exit code: {}). Use 'podman exec -it {} bash' to inspect.",
container_name, output.exit_code, container_name
));
untrack_container(&container_name);
} else {
// Failed but we don't want to preserve - container was auto-removed with --rm
untrack_container(&container_name);
}
}
}
Err(_) => {
// Command failed to execute properly - clean up if container exists and not preserving
if !self.preserve_containers_on_failure {
// Container was created with --rm, so it should be auto-removed
untrack_container(&container_name);
} else {
// Container was created without --rm, try to clean it up since execution failed
let cleanup_result = tokio::time::timeout(
std::time::Duration::from_millis(1000),
Command::new("podman")
.args(["rm", "-f", &container_name])
.stdout(Stdio::null())
.stderr(Stdio::null())
.output(),
)
.await;
match cleanup_result {
Ok(Ok(_)) => wrkflw_logging::debug(&format!(
"Cleaned up failed execution container {}",
container_name
)),
_ => wrkflw_logging::debug(&format!(
"Failed to clean up execution failure container {}",
container_name
)),
}
untrack_container(&container_name);
}
}
}
match &result {
Ok(output) => {
if output.exit_code != 0 {
wrkflw_logging::info(&format!(
"Podman command failed with exit code: {}",
output.exit_code
));
wrkflw_logging::debug(&format!("Failed command: {:?}", cmd));
wrkflw_logging::debug(&format!("Working directory: {}", working_dir.display()));
wrkflw_logging::debug(&format!("STDERR: {}", output.stderr));
}
}
Err(e) => {
wrkflw_logging::error(&format!("Podman execution error: {}", e));
}
}
result
}
async fn pull_image_inner(&self, image: &str) -> Result<(), ContainerError> {
let args = vec!["pull", image];
let output = self.execute_podman_command(&args, None).await?;
if output.exit_code != 0 {
return Err(ContainerError::ImagePull(format!(
"Failed to pull image {}: {}",
image, output.stderr
)));
}
Ok(())
}
async fn build_image_inner(&self, dockerfile: &Path, tag: &str) -> Result<(), ContainerError> {
let context_dir = dockerfile.parent().unwrap_or(Path::new("."));
let dockerfile_str = dockerfile.to_string_lossy().to_string();
let context_dir_str = context_dir.to_string_lossy().to_string();
let args = vec!["build", "-f", &dockerfile_str, "-t", tag, &context_dir_str];
let output = self.execute_podman_command(&args, None).await?;
if output.exit_code != 0 {
return Err(ContainerError::ImageBuild(format!(
"Failed to build image {}: {}",
tag, output.stderr
)));
}
Ok(())
}
}
// Public accessor functions for testing
#[cfg(test)]
pub fn get_tracked_containers() -> Vec<String> {
if let Ok(containers) = RUNNING_CONTAINERS.lock() {
containers.clone()
} else {
vec![]
}
}


@@ -1,13 +1,18 @@
[package]
name = "github"
name = "wrkflw-github"
version.workspace = true
edition.workspace = true
description = "github functionality for wrkflw"
description = "GitHub API integration for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Add other crate dependencies as needed
models = { path = "../models" }
# Internal crates
wrkflw-models = { path = "../models", version = "0.6.0" }
# External dependencies from workspace
serde.workspace = true

crates/github/README.md (new file, 23 lines)

@@ -0,0 +1,23 @@
## wrkflw-github
GitHub integration helpers used by `wrkflw` to list/trigger workflows.
- **List workflows** in `.github/workflows`
- **Trigger workflow_dispatch** events over the GitHub API
### Example
```rust
use wrkflw_github::{get_repo_info, trigger_workflow};
# tokio_test::block_on(async {
let info = get_repo_info()?;
println!("{}/{} (default branch: {})", info.owner, info.repo, info.default_branch);
// Requires GITHUB_TOKEN in env
trigger_workflow("ci", Some("main"), None).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
Notes: set `GITHUB_TOKEN` with the `workflow` scope; only public repos are supported out-of-the-box.


@@ -1,13 +1,18 @@
[package]
name = "gitlab"
name = "wrkflw-gitlab"
version.workspace = true
edition.workspace = true
description = "gitlab functionality for wrkflw"
description = "GitLab API integration for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
wrkflw-models = { path = "../models", version = "0.6.0" }
# External dependencies
lazy_static.workspace = true

crates/gitlab/README.md (new file, 23 lines)

@@ -0,0 +1,23 @@
## wrkflw-gitlab
GitLab integration helpers used by `wrkflw` to trigger pipelines.
- Reads repo info from local git remote
- Triggers pipelines via GitLab API
### Example
```rust
use wrkflw_gitlab::{get_repo_info, trigger_pipeline};
# tokio_test::block_on(async {
let info = get_repo_info()?;
println!("{}/{} (default branch: {})", info.namespace, info.project, info.default_branch);
// Requires GITLAB_TOKEN in env (api scope)
trigger_pipeline(Some("main"), None).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
Notes: looks for `.gitlab-ci.yml` in the repo root when listing pipelines.


@@ -1,13 +1,18 @@
[package]
name = "logging"
name = "wrkflw-logging"
version.workspace = true
edition.workspace = true
description = "logging functionality for wrkflw"
description = "Logging functionality for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
wrkflw-models = { path = "../models", version = "0.6.0" }
# External dependencies
chrono.workspace = true

crates/logging/README.md (new file, 22 lines)

@@ -0,0 +1,22 @@
## wrkflw-logging
Lightweight in-memory logging with simple levels for TUI/CLI output.
- Thread-safe, timestamped messages
- Level filtering (Debug/Info/Warning/Error)
- Pluggable into UI for live log views
### Example
```rust
use wrkflw_logging::{info, warning, error, LogLevel, set_log_level, get_logs};
set_log_level(LogLevel::Info);
info("starting");
warning("be careful");
error("boom");
for line in get_logs() {
println!("{}", line);
}
```


@@ -1,13 +1,18 @@
[package]
name = "matrix"
name = "wrkflw-matrix"
version.workspace = true
edition.workspace = true
description = "matrix functionality for wrkflw"
description = "Matrix job parallelization for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
wrkflw-models = { path = "../models", version = "0.6.0" }
# External dependencies
indexmap.workspace = true

crates/matrix/README.md (new file, 20 lines)

@@ -0,0 +1,20 @@
## wrkflw-matrix
Matrix expansion utilities used to compute all job combinations and format labels.
- Supports `include`, `exclude`, `max-parallel`, and `fail-fast`
- Provides display helpers for UI/CLI
### Example
```rust
use wrkflw_matrix::{MatrixConfig, expand_matrix};
use serde_yaml::Value;
use std::collections::HashMap;
let mut cfg = MatrixConfig::default();
cfg.parameters.insert("os".into(), Value::from(vec!["ubuntu", "alpine"]));
let combos = expand_matrix(&cfg).expect("expand");
assert!(!combos.is_empty());
```


@@ -1,9 +1,14 @@
[package]
name = "models"
name = "wrkflw-models"
version.workspace = true
edition.workspace = true
description = "Data models for wrkflw"
description = "Data models and structures for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
serde.workspace = true

crates/models/README.md (new file, 16 lines)

@@ -0,0 +1,16 @@
## wrkflw-models
Common data structures shared across crates.
- `ValidationResult` for structural/semantic checks
- GitLab pipeline models (serde types)
### Example
```rust
use wrkflw_models::ValidationResult;
let mut res = ValidationResult::new();
res.add_issue("missing jobs".into());
assert!(!res.is_valid);
```


@@ -1,14 +1,19 @@
[package]
name = "parser"
name = "wrkflw-parser"
version.workspace = true
edition.workspace = true
description = "Parser functionality for wrkflw"
description = "Workflow parsing functionality for wrkflw execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
matrix = { path = "../matrix" }
wrkflw-models = { path = "../models", version = "0.6.0" }
wrkflw-matrix = { path = "../matrix", version = "0.6.0" }
# External dependencies
jsonschema.workspace = true

crates/parser/README.md Normal file

@@ -0,0 +1,13 @@
## wrkflw-parser
Parsers and schema helpers for GitHub/GitLab workflow files.
- GitHub Actions workflow parsing and JSON Schema validation
- GitLab CI parsing helpers
### Example
```rust
// High-level crates (`wrkflw` and `wrkflw-executor`) wrap parser usage.
// Use those unless you are extending parsing behavior directly.
```

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,11 +1,11 @@
use crate::schema::{SchemaType, SchemaValidator};
use crate::workflow;
use models::gitlab::Pipeline;
use models::ValidationResult;
use std::collections::HashMap;
use std::fs;
use std::path::Path;
use thiserror::Error;
use wrkflw_models::gitlab::Pipeline;
use wrkflw_models::ValidationResult;
#[derive(Error, Debug)]
pub enum GitlabParserError {
@@ -130,7 +130,7 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
// Create a new job
let mut job = workflow::Job {
runs_on: "ubuntu-latest".to_string(), // Default runner
runs_on: Some(vec!["ubuntu-latest".to_string()]), // Default runner
needs: None,
steps: Vec::new(),
env: HashMap::new(),
@@ -139,6 +139,9 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
if_condition: None,
outputs: None,
permissions: None,
uses: None,
with: None,
secrets: None,
};
// Add job-specific environment variables
@@ -204,8 +207,8 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
for (i, service) in services.iter().enumerate() {
let service_name = format!("service-{}", i);
let service_image = match service {
models::gitlab::Service::Simple(name) => name.clone(),
models::gitlab::Service::Detailed { name, .. } => name.clone(),
wrkflw_models::gitlab::Service::Simple(name) => name.clone(),
wrkflw_models::gitlab::Service::Detailed { name, .. } => name.clone(),
};
let service = workflow::Service {
@@ -230,13 +233,13 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
// use std::path::PathBuf; // unused
use tempfile::NamedTempFile;
#[test]
fn test_parse_simple_pipeline() {
// Create a temporary file with a simple GitLab CI/CD pipeline
let mut file = NamedTempFile::new().unwrap();
let file = NamedTempFile::new().unwrap();
let content = r#"
stages:
- build


@@ -3,8 +3,8 @@ use serde_json::Value;
use std::fs;
use std::path::Path;
const GITHUB_WORKFLOW_SCHEMA: &str = include_str!("../../../schemas/github-workflow.json");
const GITLAB_CI_SCHEMA: &str = include_str!("../../../schemas/gitlab-ci.json");
const GITHUB_WORKFLOW_SCHEMA: &str = include_str!("github-workflow.json");
const GITLAB_CI_SCHEMA: &str = include_str!("gitlab-ci.json");
#[derive(Debug, Clone, Copy)]
pub enum SchemaType {


@@ -1,8 +1,8 @@
use matrix::MatrixConfig;
use serde::{Deserialize, Deserializer, Serialize};
use std::collections::HashMap;
use std::fs;
use std::path::Path;
use wrkflw_matrix::MatrixConfig;
use super::schema::SchemaValidator;
@@ -26,6 +26,26 @@ where
}
}
// Custom deserializer for runs-on field that handles both string and array formats
fn deserialize_runs_on<'de, D>(deserializer: D) -> Result<Option<Vec<String>>, D::Error>
where
D: Deserializer<'de>,
{
#[derive(Deserialize)]
#[serde(untagged)]
enum StringOrVec {
String(String),
Vec(Vec<String>),
}
let value = Option::<StringOrVec>::deserialize(deserializer)?;
match value {
Some(StringOrVec::String(s)) => Ok(Some(vec![s])),
Some(StringOrVec::Vec(v)) => Ok(Some(v)),
None => Ok(None),
}
}
#[derive(Debug, Deserialize, Serialize)]
pub struct WorkflowDefinition {
pub name: String,
@@ -38,10 +58,11 @@ pub struct WorkflowDefinition {
#[derive(Debug, Deserialize, Serialize)]
pub struct Job {
#[serde(rename = "runs-on")]
pub runs_on: String,
#[serde(rename = "runs-on", default, deserialize_with = "deserialize_runs_on")]
pub runs_on: Option<Vec<String>>,
#[serde(default, deserialize_with = "deserialize_needs")]
pub needs: Option<Vec<String>>,
#[serde(default)]
pub steps: Vec<Step>,
#[serde(default)]
pub env: HashMap<String, String>,
@@ -55,6 +76,13 @@ pub struct Job {
pub outputs: Option<HashMap<String, String>>,
#[serde(default)]
pub permissions: Option<HashMap<String, String>>,
// Reusable workflow (job-level 'uses') support
#[serde(default)]
pub uses: Option<String>,
#[serde(default)]
pub with: Option<HashMap<String, String>>,
#[serde(default)]
pub secrets: Option<serde_yaml::Value>,
}
#[derive(Debug, Deserialize, Serialize)]

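The custom `deserialize_runs_on` above means both GitHub-style forms of `runs-on` normalize to the same `Option<Vec<String>>` — the string form becomes a one-element vector, and the array form passes through. A minimal illustration (job names are hypothetical):

```yaml
jobs:
  # String form -> Some(vec!["ubuntu-latest"])
  build:
    runs-on: ubuntu-latest
    steps: []
  # Array form -> Some(vec!["self-hosted", "ubuntu", "small"])
  deploy:
    runs-on: [self-hosted, ubuntu, small]
    steps: []
```

The executor then picks the first element of the vector when selecting a runner, which keeps existing string-based workflows behaving exactly as before.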

@@ -1,14 +1,19 @@
[package]
name = "runtime"
name = "wrkflw-runtime"
version.workspace = true
edition.workspace = true
description = "Runtime environment for wrkflw"
description = "Runtime execution environment for wrkflw workflow engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
logging = { path = "../logging", version = "0.4.0" }
wrkflw-models = { path = "../models", version = "0.6.0" }
wrkflw-logging = { path = "../logging", version = "0.6.0" }
# External dependencies
async-trait.workspace = true
@@ -18,5 +23,5 @@ serde_yaml.workspace = true
tempfile = "3.9"
tokio.workspace = true
futures = "0.3"
utils = { path = "../utils", version = "0.4.0" }
wrkflw-utils = { path = "../utils", version = "0.6.0" }
which = "4.4"

crates/runtime/README.md Normal file

@@ -0,0 +1,13 @@
## wrkflw-runtime
Runtime abstractions for executing steps in containers or emulation.
- Container management primitives used by the executor
- Emulation mode helpers (run on host without containers)
### Example
```rust
// This crate is primarily consumed by `wrkflw-executor`.
// Prefer using the executor API instead of calling runtime directly.
```


@@ -1,6 +1,5 @@
use crate::container::{ContainerError, ContainerOutput, ContainerRuntime};
use async_trait::async_trait;
use logging;
use once_cell::sync::Lazy;
use std::collections::HashMap;
use std::fs;
@@ -9,6 +8,7 @@ use std::process::Command;
use std::sync::Mutex;
use tempfile::TempDir;
use which;
use wrkflw_logging;
// Global collection of resources to clean up
static EMULATION_WORKSPACES: Lazy<Mutex<Vec<PathBuf>>> = Lazy::new(|| Mutex::new(Vec::new()));
@@ -162,9 +162,9 @@ impl ContainerRuntime for EmulationRuntime {
}
// Log more detailed debugging information
logging::info(&format!("Executing command in container: {}", command_str));
logging::info(&format!("Working directory: {}", working_dir.display()));
logging::info(&format!("Command length: {}", command.len()));
wrkflw_logging::info(&format!("Executing command in container: {}", command_str));
wrkflw_logging::info(&format!("Working directory: {}", working_dir.display()));
wrkflw_logging::info(&format!("Command length: {}", command.len()));
if command.is_empty() {
return Err(ContainerError::ContainerExecution(
@@ -174,13 +174,13 @@ impl ContainerRuntime for EmulationRuntime {
// Print each command part separately for debugging
for (i, part) in command.iter().enumerate() {
logging::info(&format!("Command part {}: '{}'", i, part));
wrkflw_logging::info(&format!("Command part {}: '{}'", i, part));
}
// Log environment variables
logging::info("Environment variables:");
wrkflw_logging::info("Environment variables:");
for (key, value) in env_vars {
logging::info(&format!(" {}={}", key, value));
wrkflw_logging::info(&format!(" {}={}", key, value));
}
// Find actual working directory - determine if we should use the current directory instead
@@ -197,7 +197,7 @@ impl ContainerRuntime for EmulationRuntime {
// If found, use that as the working directory
if let Some(path) = workspace_path {
if path.exists() {
logging::info(&format!(
wrkflw_logging::info(&format!(
"Using environment-defined workspace: {}",
path.display()
));
@@ -206,7 +206,7 @@ impl ContainerRuntime for EmulationRuntime {
// Fallback to current directory
let current_dir =
std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Using current directory: {}",
current_dir.display()
));
@@ -215,7 +215,7 @@ impl ContainerRuntime for EmulationRuntime {
} else {
// Fallback to current directory
let current_dir = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Using current directory: {}",
current_dir.display()
));
@@ -225,7 +225,7 @@ impl ContainerRuntime for EmulationRuntime {
working_dir.to_path_buf()
};
logging::info(&format!(
wrkflw_logging::info(&format!(
"Using actual working directory: {}",
actual_working_dir.display()
));
@@ -233,8 +233,8 @@ impl ContainerRuntime for EmulationRuntime {
// Check if path contains the command (for shell script execution)
let command_path = which::which(command[0]);
match &command_path {
Ok(path) => logging::info(&format!("Found command at: {}", path.display())),
Err(e) => logging::error(&format!(
Ok(path) => wrkflw_logging::info(&format!("Found command at: {}", path.display())),
Err(e) => wrkflw_logging::error(&format!(
"Command not found in PATH: {} - Error: {}",
command[0], e
)),
@@ -246,7 +246,7 @@ impl ContainerRuntime for EmulationRuntime {
|| command_str.starts_with("mkdir ")
|| command_str.starts_with("mv ")
{
logging::info("Executing as shell command");
wrkflw_logging::info("Executing as shell command");
// Execute as a shell command
let mut cmd = Command::new("sh");
cmd.arg("-c");
@@ -264,7 +264,7 @@ impl ContainerRuntime for EmulationRuntime {
let output = String::from_utf8_lossy(&output_result.stdout).to_string();
let error = String::from_utf8_lossy(&output_result.stderr).to_string();
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Shell command completed with exit code: {}",
exit_code
));
@@ -314,7 +314,7 @@ impl ContainerRuntime for EmulationRuntime {
// Always use the current directory for cargo/rust commands rather than the temporary directory
let current_dir = std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Using project directory for Rust command: {}",
current_dir.display()
));
@@ -326,7 +326,7 @@ impl ContainerRuntime for EmulationRuntime {
if *key == "CARGO_HOME" && value.contains("${CI_PROJECT_DIR}") {
let cargo_home =
value.replace("${CI_PROJECT_DIR}", &current_dir.to_string_lossy());
logging::info(&format!("Setting CARGO_HOME to: {}", cargo_home));
wrkflw_logging::info(&format!("Setting CARGO_HOME to: {}", cargo_home));
cmd.env(key, cargo_home);
} else {
cmd.env(key, value);
@@ -338,7 +338,7 @@ impl ContainerRuntime for EmulationRuntime {
cmd.args(&parts[1..]);
}
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Executing Rust command: {} in {}",
command_str,
current_dir.display()
@@ -350,7 +350,7 @@ impl ContainerRuntime for EmulationRuntime {
let output = String::from_utf8_lossy(&output_result.stdout).to_string();
let error = String::from_utf8_lossy(&output_result.stderr).to_string();
logging::debug(&format!("Command exit code: {}", exit_code));
wrkflw_logging::debug(&format!("Command exit code: {}", exit_code));
if exit_code != 0 {
let mut error_details = format!(
@@ -405,7 +405,7 @@ impl ContainerRuntime for EmulationRuntime {
let output = String::from_utf8_lossy(&output_result.stdout).to_string();
let error = String::from_utf8_lossy(&output_result.stderr).to_string();
logging::debug(&format!("Command completed with exit code: {}", exit_code));
wrkflw_logging::debug(&format!("Command completed with exit code: {}", exit_code));
if exit_code != 0 {
let mut error_details = format!(
@@ -443,12 +443,12 @@ impl ContainerRuntime for EmulationRuntime {
}
async fn pull_image(&self, image: &str) -> Result<(), ContainerError> {
logging::info(&format!("🔄 Emulation: Pretending to pull image {}", image));
wrkflw_logging::info(&format!("🔄 Emulation: Pretending to pull image {}", image));
Ok(())
}
async fn build_image(&self, dockerfile: &Path, tag: &str) -> Result<(), ContainerError> {
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Emulation: Pretending to build image {} from {}",
tag,
dockerfile.display()
@@ -543,14 +543,14 @@ pub async fn handle_special_action(action: &str) -> Result<(), ContainerError> {
"latest"
};
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Processing action: {} @ {}",
action_name, action_version
));
// Handle specific known actions with special requirements
if action.starts_with("cachix/install-nix-action") {
logging::info("🔄 Emulating cachix/install-nix-action");
wrkflw_logging::info("🔄 Emulating cachix/install-nix-action");
// In emulation mode, check if nix is installed
let nix_installed = Command::new("which")
@@ -560,56 +560,56 @@ pub async fn handle_special_action(action: &str) -> Result<(), ContainerError> {
.unwrap_or(false);
if !nix_installed {
logging::info("🔄 Emulation: Nix is required but not installed.");
logging::info(
wrkflw_logging::info("🔄 Emulation: Nix is required but not installed.");
wrkflw_logging::info(
"🔄 To use this workflow, please install Nix: https://nixos.org/download.html",
);
logging::info("🔄 Continuing emulation, but nix commands will fail.");
wrkflw_logging::info("🔄 Continuing emulation, but nix commands will fail.");
} else {
logging::info("🔄 Emulation: Using system-installed Nix");
wrkflw_logging::info("🔄 Emulation: Using system-installed Nix");
}
} else if action.starts_with("actions-rs/cargo@") {
// For actions-rs/cargo action, ensure Rust is available
logging::info(&format!("🔄 Detected Rust cargo action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Rust cargo action: {}", action));
// Verify Rust/cargo is installed
check_command_available("cargo", "Rust/Cargo", "https://rustup.rs/");
} else if action.starts_with("actions-rs/toolchain@") {
// For actions-rs/toolchain action, check for Rust installation
logging::info(&format!("🔄 Detected Rust toolchain action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Rust toolchain action: {}", action));
check_command_available("rustc", "Rust", "https://rustup.rs/");
} else if action.starts_with("actions-rs/fmt@") {
// For actions-rs/fmt action, check if rustfmt is available
logging::info(&format!("🔄 Detected Rust formatter action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Rust formatter action: {}", action));
check_command_available("rustfmt", "rustfmt", "rustup component add rustfmt");
} else if action.starts_with("actions/setup-node@") {
// Node.js setup action
logging::info(&format!("🔄 Detected Node.js setup action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Node.js setup action: {}", action));
check_command_available("node", "Node.js", "https://nodejs.org/");
} else if action.starts_with("actions/setup-python@") {
// Python setup action
logging::info(&format!("🔄 Detected Python setup action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Python setup action: {}", action));
check_command_available("python", "Python", "https://www.python.org/downloads/");
} else if action.starts_with("actions/setup-java@") {
// Java setup action
logging::info(&format!("🔄 Detected Java setup action: {}", action));
wrkflw_logging::info(&format!("🔄 Detected Java setup action: {}", action));
check_command_available("java", "Java", "https://adoptium.net/");
} else if action.starts_with("actions/checkout@") {
// Git checkout action - this is handled implicitly by our workspace setup
logging::info("🔄 Detected checkout action - workspace files are already prepared");
wrkflw_logging::info("🔄 Detected checkout action - workspace files are already prepared");
} else if action.starts_with("actions/cache@") {
// Cache action - can't really emulate caching effectively
logging::info(
wrkflw_logging::info(
"🔄 Detected cache action - caching is not fully supported in emulation mode",
);
} else {
// Generic action we don't have special handling for
logging::info(&format!(
wrkflw_logging::info(&format!(
"🔄 Action '{}' has no special handling in emulation mode",
action_name
));
@@ -628,12 +628,12 @@ fn check_command_available(command: &str, name: &str, install_url: &str) {
.unwrap_or(false);
if !is_available {
logging::warning(&format!("{} is required but not found on the system", name));
logging::info(&format!(
wrkflw_logging::warning(&format!("{} is required but not found on the system", name));
wrkflw_logging::info(&format!(
"To use this action, please install {}: {}",
name, install_url
));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Continuing emulation, but {} commands will fail",
name
));
@@ -642,7 +642,7 @@ fn check_command_available(command: &str, name: &str, install_url: &str) {
if let Ok(output) = Command::new(command).arg("--version").output() {
if output.status.success() {
let version = String::from_utf8_lossy(&output.stdout);
logging::info(&format!("🔄 Using system {}: {}", name, version.trim()));
wrkflw_logging::info(&format!("🔄 Using system {}: {}", name, version.trim()));
}
}
}
@@ -708,7 +708,7 @@ async fn cleanup_processes() {
};
for pid in processes_to_cleanup {
logging::info(&format!("Cleaning up emulated process: {}", pid));
wrkflw_logging::info(&format!("Cleaning up emulated process: {}", pid));
#[cfg(unix)]
{
@@ -747,7 +747,7 @@ async fn cleanup_workspaces() {
};
for workspace_path in workspaces_to_cleanup {
logging::info(&format!(
wrkflw_logging::info(&format!(
"Cleaning up emulation workspace: {}",
workspace_path.display()
));
@@ -755,8 +755,8 @@ async fn cleanup_workspaces() {
// Only attempt to remove if it exists
if workspace_path.exists() {
match fs::remove_dir_all(&workspace_path) {
Ok(_) => logging::info("Successfully removed workspace directory"),
Err(e) => logging::error(&format!("Error removing workspace: {}", e)),
Ok(_) => wrkflw_logging::info("Successfully removed workspace directory"),
Err(e) => wrkflw_logging::error(&format!("Error removing workspace: {}", e)),
}
}


@@ -1,18 +1,23 @@
[package]
name = "ui"
name = "wrkflw-ui"
version.workspace = true
edition.workspace = true
description = "user interface functionality for wrkflw"
description = "Terminal user interface for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
evaluator = { path = "../evaluator" }
executor = { path = "../executor" }
logging = { path = "../logging" }
utils = { path = "../utils" }
github = { path = "../github" }
wrkflw-models = { path = "../models", version = "0.6.0" }
wrkflw-evaluator = { path = "../evaluator", version = "0.6.0" }
wrkflw-executor = { path = "../executor", version = "0.6.0" }
wrkflw-logging = { path = "../logging", version = "0.6.0" }
wrkflw-utils = { path = "../utils", version = "0.6.0" }
wrkflw-github = { path = "../github", version = "0.6.0" }
# External dependencies
chrono.workspace = true

crates/ui/README.md Normal file

@@ -0,0 +1,23 @@
## wrkflw-ui
Terminal user interface for browsing workflows, running them, and viewing logs.
- Tabs: Workflows, Execution, Logs, Help
- Hotkeys: `1-4`, `Tab`, `Enter`, `r`, `R`, `t`, `v`, `e`, `q`, etc.
- Integrates with `wrkflw-executor` and `wrkflw-logging`
### Example
```rust
use std::path::PathBuf;
use wrkflw_executor::RuntimeType;
use wrkflw_ui::run_wrkflw_tui;
# tokio_test::block_on(async {
let path = PathBuf::from(".github/workflows");
run_wrkflw_tui(Some(&path), RuntimeType::Docker, true, false).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
Most users should run the `wrkflw` binary and select TUI mode: `wrkflw tui`.


@@ -11,12 +11,12 @@ use crossterm::{
execute,
terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen},
};
use executor::RuntimeType;
use ratatui::{backend::CrosstermBackend, Terminal};
use std::io::{self, stdout};
use std::path::PathBuf;
use std::sync::mpsc;
use std::time::{Duration, Instant};
use wrkflw_executor::RuntimeType;
pub use state::App;
@@ -50,7 +50,7 @@ pub async fn run_wrkflw_tui(
if app.validation_mode {
app.logs.push("Starting in validation mode".to_string());
logging::info("Starting in validation mode");
wrkflw_logging::info("Starting in validation mode");
}
// Load workflows
@@ -108,13 +108,13 @@ pub async fn run_wrkflw_tui(
Ok(_) => Ok(()),
Err(e) => {
// If the TUI fails to initialize or crashes, fall back to CLI mode
logging::error(&format!("Failed to start UI: {}", e));
wrkflw_logging::error(&format!("Failed to start UI: {}", e));
// Only for 'tui' command should we fall back to CLI mode for files
// For other commands, return the error
if let Some(path) = path {
if path.is_file() {
logging::error("Falling back to CLI mode...");
wrkflw_logging::error("Falling back to CLI mode...");
crate::handlers::workflow::execute_workflow_cli(path, runtime_type, verbose)
.await
} else if path.is_dir() {
@@ -154,6 +154,15 @@ fn run_tui_event_loop(
if last_tick.elapsed() >= tick_rate {
app.tick();
app.update_running_workflow_progress();
// Check for log processing updates (includes system log change detection)
app.check_log_processing_updates();
// Request log processing if needed
if app.logs_need_update {
app.request_log_processing_update();
}
last_tick = Instant::now();
}
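The commit message describes the log-processing change as a non-blocking request/response pattern over mpsc channels. A minimal sketch of that shape (names are hypothetical, not the actual `LogProcessor` API): the UI thread sends raw logs to a background worker and later polls for filtered results with `try_recv`, so a slow filter pass never stalls rendering.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Sketch of the request/response pattern: one channel carries raw logs to
// the worker, a second carries processed results back to the UI thread.
struct LogProcessor {
    req_tx: mpsc::Sender<Vec<String>>,
    res_rx: mpsc::Receiver<Vec<String>>,
}

impl LogProcessor {
    fn new(filter: &'static str) -> Self {
        let (req_tx, req_rx) = mpsc::channel::<Vec<String>>();
        let (res_tx, res_rx) = mpsc::channel();
        thread::spawn(move || {
            // The worker blocks here; the UI thread never does.
            while let Ok(logs) = req_rx.recv() {
                let out: Vec<String> =
                    logs.into_iter().filter(|l| l.contains(filter)).collect();
                if res_tx.send(out).is_err() {
                    break; // UI side dropped its receiver; shut down.
                }
            }
        });
        LogProcessor { req_tx, res_rx }
    }

    /// Hand raw logs to the worker; send() on an unbounded channel does not block.
    fn request(&self, logs: Vec<String>) {
        let _ = self.req_tx.send(logs);
    }

    /// Poll for finished results without blocking (called once per UI tick).
    fn try_results(&self) -> Option<Vec<String>> {
        self.res_rx.try_recv().ok()
    }
}

fn main() {
    let p = LogProcessor::new("ERROR");
    p.request(vec!["ERROR boom".to_string(), "INFO ok".to_string()]);
    // Simulate the UI tick loop: poll until the worker responds.
    let mut got = None;
    for _ in 0..200 {
        if let Some(r) = p.try_results() {
            got = Some(r);
            break;
        }
        thread::sleep(Duration::from_millis(5));
    }
    assert_eq!(got.unwrap(), vec!["ERROR boom".to_string()]);
    println!("log processor sketch ok");
}
```

The real implementation additionally debounces requests (50ms per the commit message) so rapid keystrokes in the search box coalesce into one processing pass.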
@@ -273,7 +282,7 @@ fn run_tui_event_loop(
"[{}] DEBUG: Shift+r detected - this should be uppercase R",
timestamp
));
logging::info(
wrkflw_logging::info(
"Shift+r detected as lowercase - this should be uppercase R",
);
@@ -329,7 +338,7 @@ fn run_tui_event_loop(
"[{}] DEBUG: Reset key 'Shift+R' pressed",
timestamp
));
logging::info("Reset key 'Shift+R' pressed");
wrkflw_logging::info("Reset key 'Shift+R' pressed");
if !app.running {
// Reset workflow status
@@ -367,7 +376,7 @@ fn run_tui_event_loop(
"Workflow '{}' is already running",
workflow.name
));
logging::warning(&format!(
wrkflw_logging::warning(&format!(
"Workflow '{}' is already running",
workflow.name
));
@@ -408,7 +417,7 @@ fn run_tui_event_loop(
));
}
logging::warning(&format!(
wrkflw_logging::warning(&format!(
"Cannot trigger workflow in {} state",
status_text
));
@@ -416,20 +425,22 @@ fn run_tui_event_loop(
}
} else {
app.logs.push("No workflow selected to trigger".to_string());
logging::warning("No workflow selected to trigger");
wrkflw_logging::warning("No workflow selected to trigger");
}
} else if app.running {
app.logs.push(
"Cannot trigger workflow while another operation is in progress"
.to_string(),
);
logging::warning(
wrkflw_logging::warning(
"Cannot trigger workflow while another operation is in progress",
);
} else if app.selected_tab != 0 {
app.logs
.push("Switch to Workflows tab to trigger a workflow".to_string());
logging::warning("Switch to Workflows tab to trigger a workflow");
wrkflw_logging::warning(
"Switch to Workflows tab to trigger a workflow",
);
// For better UX, we could also automatically switch to the Workflows tab here
app.switch_tab(0);
}


@@ -1,14 +1,15 @@
// App state for the UI
use crate::log_processor::{LogProcessingRequest, LogProcessor, ProcessedLogEntry};
use crate::models::{
ExecutionResultMsg, JobExecution, LogFilterLevel, StepExecution, Workflow, WorkflowExecution,
WorkflowStatus,
};
use chrono::Local;
use crossterm::event::KeyCode;
use executor::{JobStatus, RuntimeType, StepStatus};
use ratatui::widgets::{ListState, TableState};
use std::sync::mpsc;
use std::time::{Duration, Instant};
use wrkflw_executor::{JobStatus, RuntimeType, StepStatus};
/// Application state
pub struct App {
@@ -40,6 +41,12 @@ pub struct App {
pub log_filter_level: Option<LogFilterLevel>, // Current log level filter
pub log_search_matches: Vec<usize>, // Indices of logs that match the search
pub log_search_match_idx: usize, // Current match index for navigation
// Background log processing
pub log_processor: LogProcessor,
pub processed_logs: Vec<ProcessedLogEntry>,
pub logs_need_update: bool, // Flag to trigger log processing
pub last_system_logs_count: usize, // Track system log changes
}
impl App {
@@ -60,7 +67,7 @@ impl App {
let mut step_table_state = TableState::default();
step_table_state.select(Some(0));
// Check Docker availability if Docker runtime is selected
// Check container runtime availability if container runtime is selected
let mut initial_logs = Vec::new();
let runtime_type = match runtime_type {
RuntimeType::Docker => {
@@ -69,8 +76,10 @@ impl App {
// Use a very short timeout to prevent blocking the UI
let result = std::thread::scope(|s| {
let handle = s.spawn(|| {
utils::fd::with_stderr_to_null(executor::docker::is_available)
.unwrap_or(false)
wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::docker::is_available,
)
.unwrap_or(false)
});
// Set a short timeout for the thread
@@ -85,7 +94,7 @@ impl App {
}
// If we reach here, the check took too long
logging::warning(
wrkflw_logging::warning(
"Docker availability check timed out, falling back to emulation mode",
);
false
@@ -94,7 +103,7 @@ impl App {
}) {
Ok(result) => result,
Err(_) => {
logging::warning("Docker availability check failed with panic, falling back to emulation mode");
wrkflw_logging::warning("Docker availability check failed with panic, falling back to emulation mode");
false
}
};
@@ -104,15 +113,67 @@ impl App {
"Docker is not available or unresponsive. Using emulation mode instead."
.to_string(),
);
logging::warning(
wrkflw_logging::warning(
"Docker is not available or unresponsive. Using emulation mode instead.",
);
RuntimeType::Emulation
} else {
logging::info("Docker is available, using Docker runtime");
wrkflw_logging::info("Docker is available, using Docker runtime");
RuntimeType::Docker
}
}
RuntimeType::Podman => {
// Use a timeout for the Podman availability check to prevent hanging
let is_podman_available = match std::panic::catch_unwind(|| {
// Use a very short timeout to prevent blocking the UI
let result = std::thread::scope(|s| {
let handle = s.spawn(|| {
wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::podman::is_available,
)
.unwrap_or(false)
});
// Set a short timeout for the thread
let start = std::time::Instant::now();
let timeout = std::time::Duration::from_secs(1);
while start.elapsed() < timeout {
if handle.is_finished() {
return handle.join().unwrap_or(false);
}
std::thread::sleep(std::time::Duration::from_millis(10));
}
// If we reach here, the check took too long
wrkflw_logging::warning(
"Podman availability check timed out, falling back to emulation mode",
);
false
});
result
}) {
Ok(result) => result,
Err(_) => {
wrkflw_logging::warning("Podman availability check failed with panic, falling back to emulation mode");
false
}
};
if !is_podman_available {
initial_logs.push(
"Podman is not available or unresponsive. Using emulation mode instead."
.to_string(),
);
wrkflw_logging::warning(
"Podman is not available or unresponsive. Using emulation mode instead.",
);
RuntimeType::Emulation
} else {
wrkflw_logging::info("Podman is available, using Podman runtime");
RuntimeType::Podman
}
}
RuntimeType::Emulation => RuntimeType::Emulation,
};
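The Docker and Podman branches above share one pattern: run the availability check on a helper thread, poll `is_finished()` against a one-second deadline, and fall back to emulation if the probe hangs. That can be distilled into a small standalone helper — a sketch, not wrkflw API, under the assumption that a hung probe may simply be abandoned. It uses a detached `thread::spawn` rather than `thread::scope`, since a scoped thread would still be joined when the scope exits and a hung check would re-block there.

```rust
use std::thread;
use std::time::{Duration, Instant};

/// Run `check` on a helper thread; if it hasn't finished within `timeout`,
/// abandon it and return `fallback`. (Hypothetical helper for illustration.)
fn probe_with_timeout<F>(check: F, timeout: Duration, fallback: bool) -> bool
where
    F: FnOnce() -> bool + Send + 'static,
{
    let handle = thread::spawn(check);
    let start = Instant::now();
    // Poll instead of join() so a hung probe cannot block the caller.
    while start.elapsed() < timeout {
        if handle.is_finished() {
            return handle.join().unwrap_or(fallback);
        }
        thread::sleep(Duration::from_millis(10));
    }
    fallback // probe thread is left detached
}

fn main() {
    // A fast probe completes well inside the window.
    assert!(probe_with_timeout(|| true, Duration::from_secs(1), false));
    // A probe that sleeps past the deadline yields the fallback.
    let slow = || {
        thread::sleep(Duration::from_millis(300));
        true
    };
    assert!(!probe_with_timeout(slow, Duration::from_millis(50), false));
    println!("probe pattern ok");
}
```

Extracting a helper like this would also remove the near-duplicate Docker/Podman blocks in `App::new`, which differ only in the probe function and the log messages.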
@@ -145,6 +206,12 @@ impl App {
log_filter_level: Some(LogFilterLevel::All),
log_search_matches: Vec::new(),
log_search_match_idx: 0,
// Background log processing
log_processor: LogProcessor::new(),
processed_logs: Vec::new(),
logs_need_update: true,
last_system_logs_count: 0,
}
}
@@ -159,7 +226,8 @@ impl App {
pub fn toggle_emulation_mode(&mut self) {
self.runtime_type = match self.runtime_type {
RuntimeType::Docker => RuntimeType::Emulation,
RuntimeType::Docker => RuntimeType::Podman,
RuntimeType::Podman => RuntimeType::Emulation,
RuntimeType::Emulation => RuntimeType::Docker,
};
self.logs
@@ -176,12 +244,13 @@ impl App {
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs
.push(format!("[{}] Switched to {} mode", timestamp, mode));
logging::info(&format!("Switched to {} mode", mode));
wrkflw_logging::info(&format!("Switched to {} mode", mode));
}
pub fn runtime_type_name(&self) -> &str {
match self.runtime_type {
RuntimeType::Docker => "Docker",
RuntimeType::Podman => "Podman",
RuntimeType::Emulation => "Emulation",
}
}
@@ -373,10 +442,9 @@ impl App {
if let Some(idx) = self.workflow_list_state.selected() {
if idx < self.workflows.len() && !self.execution_queue.contains(&idx) {
self.execution_queue.push(idx);
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs.push(format!(
"[{}] Added '{}' to execution queue. Press 'Enter' to start.",
timestamp, self.workflows[idx].name
self.add_timestamped_log(&format!(
"Added '{}' to execution queue. Press 'Enter' to start.",
self.workflows[idx].name
));
}
}
@@ -393,7 +461,7 @@ impl App {
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs
.push(format!("[{}] Starting workflow execution...", timestamp));
logging::info("Starting workflow execution...");
wrkflw_logging::info("Starting workflow execution...");
}
}
@@ -401,7 +469,7 @@ impl App {
pub fn process_execution_result(
&mut self,
workflow_idx: usize,
result: Result<(Vec<executor::JobResult>, ()), String>,
result: Result<(Vec<wrkflw_executor::JobResult>, ()), String>,
) {
if workflow_idx >= self.workflows.len() {
let timestamp = Local::now().format("%H:%M:%S").to_string();
@@ -409,7 +477,7 @@ impl App {
"[{}] Error: Invalid workflow index received",
timestamp
));
logging::error("Invalid workflow index received in process_execution_result");
wrkflw_logging::error("Invalid workflow index received in process_execution_result");
return;
}
@@ -438,15 +506,15 @@ impl App {
.push(format!("[{}] Operation completed successfully.", timestamp));
execution_details.progress = 1.0;
// Convert executor::JobResult to our JobExecution struct
// Convert wrkflw_executor::JobResult to our JobExecution struct
execution_details.jobs = jobs
.iter()
.map(|job_result| JobExecution {
name: job_result.name.clone(),
status: match job_result.status {
executor::JobStatus::Success => JobStatus::Success,
executor::JobStatus::Failure => JobStatus::Failure,
executor::JobStatus::Skipped => JobStatus::Skipped,
wrkflw_executor::JobStatus::Success => JobStatus::Success,
wrkflw_executor::JobStatus::Failure => JobStatus::Failure,
wrkflw_executor::JobStatus::Skipped => JobStatus::Skipped,
},
steps: job_result
.steps
@@ -454,9 +522,9 @@ impl App {
.map(|step_result| StepExecution {
name: step_result.name.clone(),
status: match step_result.status {
executor::StepStatus::Success => StepStatus::Success,
executor::StepStatus::Failure => StepStatus::Failure,
executor::StepStatus::Skipped => StepStatus::Skipped,
wrkflw_executor::StepStatus::Success => StepStatus::Success,
wrkflw_executor::StepStatus::Failure => StepStatus::Failure,
wrkflw_executor::StepStatus::Skipped => StepStatus::Skipped,
},
output: step_result.output.clone(),
})
@@ -495,7 +563,7 @@ impl App {
"[{}] Workflow '{}' completed successfully!",
timestamp, workflow.name
));
logging::info(&format!(
wrkflw_logging::info(&format!(
"[{}] Workflow '{}' completed successfully!",
timestamp, workflow.name
));
@@ -507,7 +575,7 @@ impl App {
"[{}] Workflow '{}' failed: {}",
timestamp, workflow.name, e
));
logging::error(&format!(
wrkflw_logging::error(&format!(
"[{}] Workflow '{}' failed: {}",
timestamp, workflow.name, e
));
@@ -533,7 +601,7 @@ impl App {
self.current_execution = Some(next);
self.logs
.push(format!("Executing workflow: {}", self.workflows[next].name));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Executing workflow: {}",
self.workflows[next].name
));
@@ -579,10 +647,11 @@ impl App {
self.log_search_active = false;
self.log_search_query.clear();
self.log_search_matches.clear();
self.mark_logs_for_update();
}
KeyCode::Backspace => {
self.log_search_query.pop();
self.update_log_search_matches();
self.mark_logs_for_update();
}
KeyCode::Enter => {
self.log_search_active = false;
@@ -590,7 +659,7 @@ impl App {
}
KeyCode::Char(c) => {
self.log_search_query.push(c);
self.update_log_search_matches();
self.mark_logs_for_update();
}
_ => {}
}
@@ -602,8 +671,8 @@ impl App {
if !self.log_search_active {
// Don't clear the query, this allows toggling the search UI while keeping the filter
} else {
// When activating search, update matches
self.update_log_search_matches();
// When activating search, trigger update
self.mark_logs_for_update();
}
}
@@ -614,8 +683,8 @@ impl App {
Some(level) => Some(level.next()),
};
// Update search matches when filter changes
self.update_log_search_matches();
// Trigger log processing update when filter changes
self.mark_logs_for_update();
}
// Clear log search and filter
@@ -624,6 +693,7 @@ impl App {
self.log_filter_level = None;
self.log_search_matches.clear();
self.log_search_match_idx = 0;
self.mark_logs_for_update();
}
// Update matches based on current search and filter
@@ -636,7 +706,7 @@ impl App {
for log in &self.logs {
all_logs.push(log.clone());
}
for log in logging::get_logs() {
for log in wrkflw_logging::get_logs() {
all_logs.push(log.clone());
}
@@ -728,7 +798,7 @@ impl App {
// Scroll logs down
pub fn scroll_logs_down(&mut self) {
// Get total log count including system logs
let total_logs = self.logs.len() + logging::get_logs().len();
let total_logs = self.logs.len() + wrkflw_logging::get_logs().len();
if total_logs > 0 {
self.log_scroll = (self.log_scroll + 1).min(total_logs - 1);
}
@@ -782,7 +852,9 @@ impl App {
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs
.push(format!("[{}] Error: Invalid workflow selection", timestamp));
logging::error("Invalid workflow selection in trigger_selected_workflow");
wrkflw_logging::error(
"Invalid workflow selection in trigger_selected_workflow",
);
return;
}
@@ -792,7 +864,7 @@ impl App {
"[{}] Triggering workflow: {}",
timestamp, workflow.name
));
logging::info(&format!("Triggering workflow: {}", workflow.name));
wrkflw_logging::info(&format!("Triggering workflow: {}", workflow.name));
// Clone necessary values for the async task
let workflow_name = workflow.name.clone();
@@ -825,19 +897,19 @@ impl App {
// Send the result back to the main thread
if let Err(e) = tx_clone.send((selected_idx, result)) {
logging::error(&format!("Error sending trigger result: {}", e));
wrkflw_logging::error(&format!("Error sending trigger result: {}", e));
}
});
} else {
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs
.push(format!("[{}] No workflow selected to trigger", timestamp));
logging::warning("No workflow selected to trigger");
wrkflw_logging::warning("No workflow selected to trigger");
}
} else {
self.logs
.push("No workflow selected to trigger".to_string());
logging::warning("No workflow selected to trigger");
wrkflw_logging::warning("No workflow selected to trigger");
}
}
@@ -850,7 +922,7 @@ impl App {
"[{}] Debug: No workflow selected for reset",
timestamp
));
logging::warning("No workflow selected for reset");
wrkflw_logging::warning("No workflow selected for reset");
return;
}
@@ -887,7 +959,7 @@ impl App {
"[{}] Reset workflow '{}' from {} state to NotStarted - status is now {:?}",
timestamp, workflow.name, old_status, workflow.status
));
logging::info(&format!(
wrkflw_logging::info(&format!(
"Reset workflow '{}' from {} state to NotStarted - status is now {:?}",
workflow.name, old_status, workflow.status
));
@@ -897,4 +969,82 @@ impl App {
}
}
}
/// Request log processing update from background thread
pub fn request_log_processing_update(&mut self) {
let request = LogProcessingRequest {
search_query: self.log_search_query.clone(),
filter_level: self.log_filter_level.clone(),
app_logs: self.logs.clone(),
app_logs_count: self.logs.len(),
system_logs_count: wrkflw_logging::get_logs().len(),
};
if self.log_processor.request_update(request).is_err() {
// Log processor channel disconnected, recreate it
self.log_processor = LogProcessor::new();
self.logs_need_update = true;
}
}
/// Check for and apply log processing updates
pub fn check_log_processing_updates(&mut self) {
// Check if system logs have changed
let current_system_logs_count = wrkflw_logging::get_logs().len();
if current_system_logs_count != self.last_system_logs_count {
self.last_system_logs_count = current_system_logs_count;
self.mark_logs_for_update();
}
if let Some(response) = self.log_processor.try_get_update() {
self.processed_logs = response.processed_logs;
self.log_search_matches = response.search_matches;
// Update scroll position to first match if we have search results
if !self.log_search_matches.is_empty() && !self.log_search_query.is_empty() {
self.log_search_match_idx = 0;
if let Some(&idx) = self.log_search_matches.first() {
self.log_scroll = idx;
}
}
self.logs_need_update = false;
}
}
/// Trigger log processing when search/filter changes
pub fn mark_logs_for_update(&mut self) {
self.logs_need_update = true;
self.request_log_processing_update();
}
/// Get combined app and system logs for background processing
pub fn get_combined_logs(&self) -> Vec<String> {
let mut all_logs = Vec::new();
// Add app logs
for log in &self.logs {
all_logs.push(log.clone());
}
// Add system logs
for log in wrkflw_logging::get_logs() {
all_logs.push(log.clone());
}
all_logs
}
/// Add a log entry and trigger log processing update
pub fn add_log(&mut self, message: String) {
self.logs.push(message);
self.mark_logs_for_update();
}
/// Add a formatted log entry with timestamp and trigger log processing update
pub fn add_timestamped_log(&mut self, message: &str) {
let timestamp = Local::now().format("%H:%M:%S").to_string();
let formatted_message = format!("[{}] {}", timestamp, message);
self.add_log(formatted_message);
}
}
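The new `LogProcessor` plumbing above follows a non-blocking request/response pattern over `mpsc` channels: the UI thread sends a processing request and later polls for a finished result, so rendering never blocks on filtering. A minimal, self-contained sketch of that pattern (the names and the plain substring filter here are illustrative, not the crate's actual API):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical worker mirroring the request/response shape used by
// LogProcessor: receive (logs, query), send back the filtered subset.
fn spawn_filter_worker(
    request_rx: mpsc::Receiver<(Vec<String>, String)>,
    response_tx: mpsc::Sender<Vec<String>>,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        // The worker exits when the request channel disconnects.
        while let Ok((logs, query)) = request_rx.recv() {
            let needle = query.to_lowercase();
            let filtered: Vec<String> = logs
                .into_iter()
                .filter(|line| line.to_lowercase().contains(&needle))
                .collect();
            if response_tx.send(filtered).is_err() {
                break; // UI side went away; stop the worker.
            }
        }
    })
}
```

On the UI side, a non-blocking `try_recv` (as in `try_get_update`) keeps the render loop responsive: if no result is ready yet, the previous frame's cached logs are drawn instead.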


@@ -2,12 +2,12 @@
use crate::app::App;
use crate::models::{ExecutionResultMsg, WorkflowExecution, WorkflowStatus};
use chrono::Local;
use evaluator::evaluate_workflow_file;
use executor::{self, JobStatus, RuntimeType, StepStatus};
use std::io;
use std::path::{Path, PathBuf};
use std::sync::mpsc;
use std::thread;
use wrkflw_evaluator::evaluate_workflow_file;
use wrkflw_executor::{self, JobStatus, RuntimeType, StepStatus};
// Validate a workflow or directory containing workflows
pub fn validate_workflow(path: &Path, verbose: bool) -> io::Result<()> {
@@ -20,7 +20,7 @@ pub fn validate_workflow(path: &Path, verbose: bool) -> io::Result<()> {
let entry = entry?;
let entry_path = entry.path();
if entry_path.is_file() && utils::is_workflow_file(&entry_path) {
if entry_path.is_file() && wrkflw_utils::is_workflow_file(&entry_path) {
workflows.push(entry_path);
}
}
@@ -102,17 +102,26 @@ pub async fn execute_workflow_cli(
}
}
// Check Docker availability if Docker runtime is selected
// Check container runtime availability if container runtime is selected
let runtime_type = match runtime_type {
RuntimeType::Docker => {
if !executor::docker::is_available() {
if !wrkflw_executor::docker::is_available() {
println!("⚠️ Docker is not available. Using emulation mode instead.");
logging::warning("Docker is not available. Using emulation mode instead.");
wrkflw_logging::warning("Docker is not available. Using emulation mode instead.");
RuntimeType::Emulation
} else {
RuntimeType::Docker
}
}
RuntimeType::Podman => {
if !wrkflw_executor::podman::is_available() {
println!("⚠️ Podman is not available. Using emulation mode instead.");
wrkflw_logging::warning("Podman is not available. Using emulation mode instead.");
RuntimeType::Emulation
} else {
RuntimeType::Podman
}
}
RuntimeType::Emulation => RuntimeType::Emulation,
};
@@ -120,20 +129,20 @@ pub async fn execute_workflow_cli(
println!("Runtime mode: {:?}", runtime_type);
// Log the start of the execution in debug mode with more details
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Starting workflow execution: path={}, runtime={:?}, verbose={}",
path.display(),
runtime_type,
verbose
));
let config = executor::ExecutionConfig {
let config = wrkflw_executor::ExecutionConfig {
runtime_type,
verbose,
preserve_containers_on_failure: false, // Default for this path
};
match executor::execute_workflow(path, config).await {
match wrkflw_executor::execute_workflow(path, config).await {
Ok(result) => {
println!("\nWorkflow execution results:");
@@ -157,7 +166,7 @@ pub async fn execute_workflow_cli(
println!("-------------------------");
// Log the job details for debug purposes
logging::debug(&format!("Job: {}, Status: {:?}", job.name, job.status));
wrkflw_logging::debug(&format!("Job: {}, Status: {:?}", job.name, job.status));
for step in job.steps.iter() {
match step.status {
@@ -193,7 +202,7 @@ pub async fn execute_workflow_cli(
}
// Show command/run details in debug mode
if logging::get_log_level() <= logging::LogLevel::Debug {
if wrkflw_logging::get_log_level() <= wrkflw_logging::LogLevel::Debug {
if let Some(cmd_output) = step
.output
.lines()
@@ -233,7 +242,7 @@ pub async fn execute_workflow_cli(
}
// Always log the step details for debug purposes
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Step: {}, Status: {:?}, Output length: {} lines",
step.name,
step.status,
@@ -241,10 +250,10 @@ pub async fn execute_workflow_cli(
));
// In debug mode, log all step output
if logging::get_log_level() == logging::LogLevel::Debug
if wrkflw_logging::get_log_level() == wrkflw_logging::LogLevel::Debug
&& !step.output.trim().is_empty()
{
logging::debug(&format!(
wrkflw_logging::debug(&format!(
"Step output for '{}': \n{}",
step.name, step.output
));
@@ -256,7 +265,7 @@ pub async fn execute_workflow_cli(
println!("\n❌ Workflow completed with failures");
// In the case of failure, we'll also inform the user about the debug option
// if they're not already using it
if logging::get_log_level() > logging::LogLevel::Debug {
if wrkflw_logging::get_log_level() > wrkflw_logging::LogLevel::Debug {
println!(" Run with --debug for more detailed output");
}
} else {
@@ -267,7 +276,7 @@ pub async fn execute_workflow_cli(
}
Err(e) => {
println!("❌ Failed to execute workflow: {}", e);
logging::error(&format!("Failed to execute workflow: {}", e));
wrkflw_logging::error(&format!("Failed to execute workflow: {}", e));
Err(io::Error::other(e))
}
}
@@ -277,7 +286,7 @@ pub async fn execute_workflow_cli(
pub async fn execute_curl_trigger(
workflow_name: &str,
branch: Option<&str>,
) -> Result<(Vec<executor::JobResult>, ()), String> {
) -> Result<(Vec<wrkflw_executor::JobResult>, ()), String> {
// Get GitHub token
let token = std::env::var("GITHUB_TOKEN").map_err(|_| {
"GitHub token not found. Please set GITHUB_TOKEN environment variable".to_string()
@@ -285,13 +294,13 @@ pub async fn execute_curl_trigger(
// Debug log to check if GITHUB_TOKEN is set
match std::env::var("GITHUB_TOKEN") {
Ok(token) => logging::info(&format!("GITHUB_TOKEN is set: {}", &token[..5])), // Log first 5 characters for security
Err(_) => logging::error("GITHUB_TOKEN is not set"),
Ok(token) => wrkflw_logging::info(&format!("GITHUB_TOKEN is set: {}", &token[..5])), // Log first 5 characters for security
Err(_) => wrkflw_logging::error("GITHUB_TOKEN is not set"),
}
// Get repository information
let repo_info =
github::get_repo_info().map_err(|e| format!("Failed to get repository info: {}", e))?;
let repo_info = wrkflw_github::get_repo_info()
.map_err(|e| format!("Failed to get repository info: {}", e))?;
// Determine branch to use
let branch_ref = branch.unwrap_or(&repo_info.default_branch);
@@ -306,7 +315,7 @@ pub async fn execute_curl_trigger(
workflow_name
};
logging::info(&format!("Using workflow name: {}", workflow_name));
wrkflw_logging::info(&format!("Using workflow name: {}", workflow_name));
// Construct JSON payload
let payload = serde_json::json!({
@@ -319,7 +328,7 @@ pub async fn execute_curl_trigger(
repo_info.owner, repo_info.repo, workflow_name
);
logging::info(&format!("Triggering workflow at URL: {}", url));
wrkflw_logging::info(&format!("Triggering workflow at URL: {}", url));
// Create a reqwest client
let client = reqwest::Client::new();
@@ -353,12 +362,12 @@ pub async fn execute_curl_trigger(
);
// Create a job result structure
let job_result = executor::JobResult {
let job_result = wrkflw_executor::JobResult {
name: "GitHub Trigger".to_string(),
status: executor::JobStatus::Success,
steps: vec![executor::StepResult {
status: wrkflw_executor::JobStatus::Success,
steps: vec![wrkflw_executor::StepResult {
name: "Remote Trigger".to_string(),
status: executor::StepStatus::Success,
status: wrkflw_executor::StepStatus::Success,
output: success_msg,
}],
logs: "Workflow triggered remotely on GitHub".to_string(),
@@ -382,41 +391,69 @@ pub fn start_next_workflow_execution(
if verbose {
app.logs
.push("Verbose mode: Step outputs will be displayed in full".to_string());
logging::info("Verbose mode: Step outputs will be displayed in full");
wrkflw_logging::info("Verbose mode: Step outputs will be displayed in full");
} else {
app.logs.push(
"Standard mode: Only step status will be shown (use --verbose for full output)"
.to_string(),
);
logging::info(
wrkflw_logging::info(
"Standard mode: Only step status will be shown (use --verbose for full output)",
);
}
// Check Docker availability again if Docker runtime is selected
// Check container runtime availability again if container runtime is selected
let runtime_type = match app.runtime_type {
RuntimeType::Docker => {
// Use safe FD redirection to check Docker availability
let is_docker_available =
match utils::fd::with_stderr_to_null(executor::docker::is_available) {
Ok(result) => result,
Err(_) => {
logging::debug(
"Failed to redirect stderr when checking Docker availability.",
);
false
}
};
let is_docker_available = match wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::docker::is_available,
) {
Ok(result) => result,
Err(_) => {
wrkflw_logging::debug(
"Failed to redirect stderr when checking Docker availability.",
);
false
}
};
if !is_docker_available {
app.logs
.push("Docker is not available. Using emulation mode instead.".to_string());
logging::warning("Docker is not available. Using emulation mode instead.");
wrkflw_logging::warning(
"Docker is not available. Using emulation mode instead.",
);
RuntimeType::Emulation
} else {
RuntimeType::Docker
}
}
RuntimeType::Podman => {
// Use safe FD redirection to check Podman availability
let is_podman_available = match wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::podman::is_available,
) {
Ok(result) => result,
Err(_) => {
wrkflw_logging::debug(
"Failed to redirect stderr when checking Podman availability.",
);
false
}
};
if !is_podman_available {
app.logs
.push("Podman is not available. Using emulation mode instead.".to_string());
wrkflw_logging::warning(
"Podman is not available. Using emulation mode instead.",
);
RuntimeType::Emulation
} else {
RuntimeType::Podman
}
}
RuntimeType::Emulation => RuntimeType::Emulation,
};
@@ -456,21 +493,21 @@ pub fn start_next_workflow_execution(
Ok(validation_result) => {
// Create execution result based on validation
let status = if validation_result.is_valid {
executor::JobStatus::Success
wrkflw_executor::JobStatus::Success
} else {
executor::JobStatus::Failure
wrkflw_executor::JobStatus::Failure
};
// Create a synthetic job result for validation
let jobs = vec![executor::JobResult {
let jobs = vec![wrkflw_executor::JobResult {
name: "Validation".to_string(),
status,
steps: vec![executor::StepResult {
steps: vec![wrkflw_executor::StepResult {
name: "Validator".to_string(),
status: if validation_result.is_valid {
executor::StepStatus::Success
wrkflw_executor::StepStatus::Success
} else {
executor::StepStatus::Failure
wrkflw_executor::StepStatus::Failure
},
output: validation_result.issues.join("\n"),
}],
@@ -490,15 +527,15 @@ pub fn start_next_workflow_execution(
}
} else {
// Use safe FD redirection for execution
let config = executor::ExecutionConfig {
let config = wrkflw_executor::ExecutionConfig {
runtime_type,
verbose,
preserve_containers_on_failure,
};
let execution_result = utils::fd::with_stderr_to_null(|| {
let execution_result = wrkflw_utils::fd::with_stderr_to_null(|| {
futures::executor::block_on(async {
executor::execute_workflow(&workflow_path, config).await
wrkflw_executor::execute_workflow(&workflow_path, config).await
})
})
.map_err(|e| format!("Failed to redirect stderr during execution: {}", e))?;
@@ -515,7 +552,7 @@ pub fn start_next_workflow_execution(
// Only send if we get a valid result
if let Err(e) = tx_clone_inner.send((next_idx, result)) {
logging::error(&format!("Error sending execution result: {}", e));
wrkflw_logging::error(&format!("Error sending execution result: {}", e));
}
});
} else {
@@ -523,6 +560,6 @@ pub fn start_next_workflow_execution(
let timestamp = Local::now().format("%H:%M:%S").to_string();
app.logs
.push(format!("[{}] All workflows completed execution", timestamp));
logging::info("All workflows completed execution");
wrkflw_logging::info("All workflows completed execution");
}
}
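Both runtime branches above implement the same fallback: if the requested container runtime is unavailable, drop to emulation. Distilled into a pure function (a sketch with availability passed in as booleans; the real code probes `docker::is_available()` / `podman::is_available()` behind stderr redirection):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum RuntimeType {
    Docker,
    Podman,
    Emulation,
}

// Fall back to emulation when the requested runtime is not available.
fn resolve_runtime(requested: RuntimeType, docker_ok: bool, podman_ok: bool) -> RuntimeType {
    match requested {
        RuntimeType::Docker if !docker_ok => RuntimeType::Emulation,
        RuntimeType::Podman if !podman_ok => RuntimeType::Emulation,
        other => other,
    }
}
```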


@@ -12,6 +12,7 @@
pub mod app;
pub mod components;
pub mod handlers;
pub mod log_processor;
pub mod models;
pub mod utils;
pub mod views;


@@ -0,0 +1,305 @@
// Background log processor for asynchronous log filtering and formatting
use crate::models::LogFilterLevel;
use ratatui::{
style::{Color, Style},
text::{Line, Span},
widgets::{Cell, Row},
};
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};
/// Processed log entry ready for rendering
#[derive(Debug, Clone)]
pub struct ProcessedLogEntry {
pub timestamp: String,
pub log_type: String,
pub log_style: Style,
pub content_spans: Vec<Span<'static>>,
}
impl ProcessedLogEntry {
/// Convert to a table row for rendering
pub fn to_row(&self) -> Row<'static> {
Row::new(vec![
Cell::from(self.timestamp.clone()),
Cell::from(self.log_type.clone()).style(self.log_style),
Cell::from(Line::from(self.content_spans.clone())),
])
}
}
/// Request to update log processing parameters
#[derive(Debug, Clone)]
pub struct LogProcessingRequest {
pub search_query: String,
pub filter_level: Option<LogFilterLevel>,
pub app_logs: Vec<String>, // Complete app logs
pub app_logs_count: usize, // To detect changes in app logs
pub system_logs_count: usize, // To detect changes in system logs
}
/// Response with processed logs
#[derive(Debug, Clone)]
pub struct LogProcessingResponse {
pub processed_logs: Vec<ProcessedLogEntry>,
pub total_log_count: usize,
pub filtered_count: usize,
pub search_matches: Vec<usize>, // Indices of logs that match search
}
/// Background log processor
pub struct LogProcessor {
request_tx: mpsc::Sender<LogProcessingRequest>,
response_rx: mpsc::Receiver<LogProcessingResponse>,
_worker_handle: thread::JoinHandle<()>,
}
impl LogProcessor {
/// Create a new log processor with a background worker thread
pub fn new() -> Self {
let (request_tx, request_rx) = mpsc::channel::<LogProcessingRequest>();
let (response_tx, response_rx) = mpsc::channel::<LogProcessingResponse>();
let worker_handle = thread::spawn(move || {
Self::worker_loop(request_rx, response_tx);
});
Self {
request_tx,
response_rx,
_worker_handle: worker_handle,
}
}
/// Send a processing request (non-blocking)
pub fn request_update(
&self,
request: LogProcessingRequest,
) -> Result<(), mpsc::SendError<LogProcessingRequest>> {
self.request_tx.send(request)
}
/// Try to get the latest processed logs (non-blocking)
pub fn try_get_update(&self) -> Option<LogProcessingResponse> {
self.response_rx.try_recv().ok()
}
/// Background worker loop
fn worker_loop(
request_rx: mpsc::Receiver<LogProcessingRequest>,
response_tx: mpsc::Sender<LogProcessingResponse>,
) {
let mut last_request: Option<LogProcessingRequest> = None;
let mut last_processed_time = Instant::now();
let mut cached_logs: Vec<String> = Vec::new();
let mut cached_app_logs_count = 0;
let mut cached_system_logs_count = 0;
loop {
// Check for new requests with a timeout to allow periodic processing
let request = match request_rx.recv_timeout(Duration::from_millis(100)) {
Ok(req) => Some(req),
Err(mpsc::RecvTimeoutError::Timeout) => None,
Err(mpsc::RecvTimeoutError::Disconnected) => break,
};
// Update request if we received one
if let Some(req) = request {
last_request = Some(req);
}
// Process if we have a request and enough time has passed since last processing
if let Some(ref req) = last_request {
let should_process = last_processed_time.elapsed() > Duration::from_millis(50)
&& (cached_app_logs_count != req.app_logs_count
|| cached_system_logs_count != req.system_logs_count
|| cached_logs.is_empty());
if should_process {
// Refresh log cache if log counts changed
if cached_app_logs_count != req.app_logs_count
|| cached_system_logs_count != req.system_logs_count
|| cached_logs.is_empty()
{
cached_logs = Self::get_combined_logs(&req.app_logs);
cached_app_logs_count = req.app_logs_count;
cached_system_logs_count = req.system_logs_count;
}
let response = Self::process_logs(&cached_logs, req);
if response_tx.send(response).is_err() {
break; // Receiver disconnected
}
last_processed_time = Instant::now();
}
}
}
}
/// Get combined app and system logs
fn get_combined_logs(app_logs: &[String]) -> Vec<String> {
let mut all_logs = Vec::new();
// Add app logs
for log in app_logs {
all_logs.push(log.clone());
}
// Add system logs
for log in wrkflw_logging::get_logs() {
all_logs.push(log.clone());
}
all_logs
}
/// Process logs according to search and filter criteria
fn process_logs(all_logs: &[String], request: &LogProcessingRequest) -> LogProcessingResponse {
// Filter logs based on search query and filter level
let mut filtered_logs = Vec::new();
let mut search_matches = Vec::new();
for (idx, log) in all_logs.iter().enumerate() {
let passes_filter = match &request.filter_level {
None => true,
Some(level) => level.matches(log),
};
let matches_search = if request.search_query.is_empty() {
true
} else {
log.to_lowercase()
.contains(&request.search_query.to_lowercase())
};
if passes_filter && matches_search {
filtered_logs.push((idx, log));
if matches_search && !request.search_query.is_empty() {
search_matches.push(filtered_logs.len() - 1);
}
}
}
// Process filtered logs into display format
let processed_logs: Vec<ProcessedLogEntry> = filtered_logs
.iter()
.map(|(_, log_line)| Self::process_log_entry(log_line, &request.search_query))
.collect();
LogProcessingResponse {
processed_logs,
total_log_count: all_logs.len(),
filtered_count: filtered_logs.len(),
search_matches,
}
}
/// Process a single log entry into display format
fn process_log_entry(log_line: &str, search_query: &str) -> ProcessedLogEntry {
// Extract timestamp from log format [HH:MM:SS]
let timestamp = if log_line.starts_with('[') && log_line.contains(']') {
let end = log_line.find(']').unwrap_or(0);
if end > 1 {
log_line[1..end].to_string()
} else {
"??:??:??".to_string()
}
} else {
"??:??:??".to_string()
};
// Determine log type and style
let (log_type, log_style) =
if log_line.contains("Error") || log_line.contains("error") || log_line.contains("❌")
{
("ERROR", Style::default().fg(Color::Red))
} else if log_line.contains("Warning")
|| log_line.contains("warning")
|| log_line.contains("⚠️")
{
("WARN", Style::default().fg(Color::Yellow))
} else if log_line.contains("Success")
|| log_line.contains("success")
|| log_line.contains("✅")
{
("SUCCESS", Style::default().fg(Color::Green))
} else if log_line.contains("Running")
|| log_line.contains("running")
|| log_line.contains("🔄")
{
("INFO", Style::default().fg(Color::Cyan))
} else if log_line.contains("Triggering") || log_line.contains("triggered") {
("TRIG", Style::default().fg(Color::Magenta))
} else {
("INFO", Style::default().fg(Color::Gray))
};
// Extract content after timestamp
let content = if log_line.starts_with('[') && log_line.contains(']') {
let start = log_line.find(']').unwrap_or(0) + 1;
log_line[start..].trim()
} else {
log_line
};
// Create content spans with search highlighting
let content_spans = if !search_query.is_empty() {
Self::highlight_search_matches(content, search_query)
} else {
vec![Span::raw(content.to_string())]
};
ProcessedLogEntry {
timestamp,
log_type: log_type.to_string(),
log_style,
content_spans,
}
}
/// Highlight search matches in content
fn highlight_search_matches(content: &str, search_query: &str) -> Vec<Span<'static>> {
let mut spans = Vec::new();
let lowercase_content = content.to_lowercase();
let lowercase_query = search_query.to_lowercase();
if lowercase_content.contains(&lowercase_query) {
let mut last_idx = 0;
while let Some(idx) = lowercase_content[last_idx..].find(&lowercase_query) {
let real_idx = last_idx + idx;
// Add text before match
if real_idx > last_idx {
spans.push(Span::raw(content[last_idx..real_idx].to_string()));
}
// Add matched text with highlight
let match_end = real_idx + search_query.len();
spans.push(Span::styled(
content[real_idx..match_end].to_string(),
Style::default().bg(Color::Yellow).fg(Color::Black),
));
last_idx = match_end;
}
// Add remaining text after last match
if last_idx < content.len() {
spans.push(Span::raw(content[last_idx..].to_string()));
}
} else {
spans.push(Span::raw(content.to_string()));
}
spans
}
}
impl Default for LogProcessor {
fn default() -> Self {
Self::new()
}
}
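`highlight_search_matches` splits each line into alternating plain and highlighted segments by scanning lowercase copies of the content and query. The same splitting, stripped of ratatui types and returning `(text, is_match)` pairs (an illustrative helper; byte indexing assumes the lowercase mapping preserves offsets, which holds for ASCII, as in the original):

```rust
// Case-insensitive segmentation of `content` around occurrences of `query`.
fn split_matches(content: &str, query: &str) -> Vec<(String, bool)> {
    let lc = content.to_lowercase();
    let lq = query.to_lowercase();
    let mut segments = Vec::new();
    if lq.is_empty() || !lc.contains(&lq) {
        // No match (or empty query): the whole line is plain text.
        segments.push((content.to_string(), false));
        return segments;
    }
    let mut last = 0;
    while let Some(offset) = lc[last..].find(&lq) {
        let start = last + offset;
        if start > last {
            // Unmatched text before this occurrence.
            segments.push((content[last..start].to_string(), false));
        }
        let end = start + query.len();
        segments.push((content[start..end].to_string(), true));
        last = end;
    }
    if last < content.len() {
        // Trailing text after the final match.
        segments.push((content[last..].to_string(), false));
    }
    segments
}
```

Note the empty-query guard: without it, `find("")` would match at every position and the scan would never advance.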


@@ -1,10 +1,10 @@
// UI Models for wrkflw
use chrono::Local;
use executor::{JobStatus, StepStatus};
use std::path::PathBuf;
use wrkflw_executor::{JobStatus, StepStatus};
/// Type alias for the complex execution result type
pub type ExecutionResultMsg = (usize, Result<(Vec<executor::JobResult>, ()), String>);
pub type ExecutionResultMsg = (usize, Result<(Vec<wrkflw_executor::JobResult>, ()), String>);
/// Represents an individual workflow file
pub struct Workflow {
@@ -50,6 +50,7 @@ pub struct StepExecution {
}
/// Log filter levels
#[derive(Debug, Clone, PartialEq)]
pub enum LogFilterLevel {
Info,
Warning,


@@ -1,7 +1,7 @@
// UI utilities
use crate::models::{Workflow, WorkflowStatus};
use std::path::{Path, PathBuf};
use utils::is_workflow_file;
use wrkflw_utils::is_workflow_file;
/// Find and load all workflow files in a directory
pub fn load_workflows(dir_path: &Path) -> Vec<Workflow> {


@@ -145,15 +145,17 @@ pub fn render_execution_tab(
.iter()
.map(|job| {
let status_symbol = match job.status {
executor::JobStatus::Success => "✅",
executor::JobStatus::Failure => "❌",
executor::JobStatus::Skipped => "⏭️",
wrkflw_executor::JobStatus::Success => "✅",
wrkflw_executor::JobStatus::Failure => "❌",
wrkflw_executor::JobStatus::Skipped => "⏭️",
};
let status_style = match job.status {
executor::JobStatus::Success => Style::default().fg(Color::Green),
executor::JobStatus::Failure => Style::default().fg(Color::Red),
executor::JobStatus::Skipped => Style::default().fg(Color::Gray),
wrkflw_executor::JobStatus::Success => {
Style::default().fg(Color::Green)
}
wrkflw_executor::JobStatus::Failure => Style::default().fg(Color::Red),
wrkflw_executor::JobStatus::Skipped => Style::default().fg(Color::Gray),
};
// Count completed and total steps
@@ -162,8 +164,8 @@ pub fn render_execution_tab(
.steps
.iter()
.filter(|s| {
s.status == executor::StepStatus::Success
|| s.status == executor::StepStatus::Failure
s.status == wrkflw_executor::StepStatus::Success
|| s.status == wrkflw_executor::StepStatus::Failure
})
.count();


@@ -46,15 +46,15 @@ pub fn render_job_detail_view(
// Job title section
let status_text = match job.status {
executor::JobStatus::Success => "Success",
executor::JobStatus::Failure => "Failed",
executor::JobStatus::Skipped => "Skipped",
wrkflw_executor::JobStatus::Success => "Success",
wrkflw_executor::JobStatus::Failure => "Failed",
wrkflw_executor::JobStatus::Skipped => "Skipped",
};
let status_style = match job.status {
executor::JobStatus::Success => Style::default().fg(Color::Green),
executor::JobStatus::Failure => Style::default().fg(Color::Red),
executor::JobStatus::Skipped => Style::default().fg(Color::Yellow),
wrkflw_executor::JobStatus::Success => Style::default().fg(Color::Green),
wrkflw_executor::JobStatus::Failure => Style::default().fg(Color::Red),
wrkflw_executor::JobStatus::Skipped => Style::default().fg(Color::Yellow),
};
let job_title = Paragraph::new(vec![
@@ -101,15 +101,19 @@ pub fn render_job_detail_view(
let rows = job.steps.iter().map(|step| {
let status_symbol = match step.status {
executor::StepStatus::Success => "✅",
executor::StepStatus::Failure => "❌",
executor::StepStatus::Skipped => "⏭️",
wrkflw_executor::StepStatus::Success => "✅",
wrkflw_executor::StepStatus::Failure => "❌",
wrkflw_executor::StepStatus::Skipped => "⏭️",
};
let status_style = match step.status {
executor::StepStatus::Success => Style::default().fg(Color::Green),
executor::StepStatus::Failure => Style::default().fg(Color::Red),
executor::StepStatus::Skipped => Style::default().fg(Color::Gray),
wrkflw_executor::StepStatus::Success => {
Style::default().fg(Color::Green)
}
wrkflw_executor::StepStatus::Failure => Style::default().fg(Color::Red),
wrkflw_executor::StepStatus::Skipped => {
Style::default().fg(Color::Gray)
}
};
Row::new(vec![
@@ -147,15 +151,21 @@ pub fn render_job_detail_view(
// Show step output with proper styling
let status_text = match step.status {
executor::StepStatus::Success => "Success",
executor::StepStatus::Failure => "Failed",
executor::StepStatus::Skipped => "Skipped",
wrkflw_executor::StepStatus::Success => "Success",
wrkflw_executor::StepStatus::Failure => "Failed",
wrkflw_executor::StepStatus::Skipped => "Skipped",
};
let status_style = match step.status {
executor::StepStatus::Success => Style::default().fg(Color::Green),
executor::StepStatus::Failure => Style::default().fg(Color::Red),
executor::StepStatus::Skipped => Style::default().fg(Color::Yellow),
wrkflw_executor::StepStatus::Success => {
Style::default().fg(Color::Green)
}
wrkflw_executor::StepStatus::Failure => {
Style::default().fg(Color::Red)
}
wrkflw_executor::StepStatus::Skipped => {
Style::default().fg(Color::Yellow)
}
};
let mut output_text = step.output.clone();


@@ -140,45 +140,8 @@ pub fn render_logs_tab(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App, a
f.render_widget(search_block, chunks[1]);
}
// Combine application logs with system logs
let mut all_logs = Vec::new();
// Now all logs should have timestamps in the format [HH:MM:SS]
// Process app logs
for log in &app.logs {
all_logs.push(log.clone());
}
// Process system logs
for log in logging::get_logs() {
all_logs.push(log.clone());
}
// Filter logs based on search query and filter level
let filtered_logs = if !app.log_search_query.is_empty() || app.log_filter_level.is_some() {
all_logs
.iter()
.filter(|log| {
let passes_filter = match &app.log_filter_level {
None => true,
Some(level) => level.matches(log),
};
let matches_search = if app.log_search_query.is_empty() {
true
} else {
log.to_lowercase()
.contains(&app.log_search_query.to_lowercase())
};
passes_filter && matches_search
})
.cloned()
.collect::<Vec<String>>()
} else {
all_logs.clone() // Clone to avoid moving all_logs
};
// Use processed logs from background thread instead of processing on every frame
let filtered_logs = &app.processed_logs;
// Create a table for logs for better organization
let header_cells = ["Time", "Type", "Message"]
@@ -189,109 +152,10 @@ pub fn render_logs_tab(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App, a
.style(Style::default().add_modifier(Modifier::BOLD))
.height(1);
let rows = filtered_logs.iter().map(|log_line| {
// Parse log line to extract timestamp, type and message
// Extract timestamp from log format [HH:MM:SS]
let timestamp = if log_line.starts_with('[') && log_line.contains(']') {
let end = log_line.find(']').unwrap_or(0);
if end > 1 {
log_line[1..end].to_string()
} else {
"??:??:??".to_string() // Show placeholder for malformed logs
}
} else {
"??:??:??".to_string() // Show placeholder for malformed logs
};
let (log_type, log_style, _) =
if log_line.contains("Error") || log_line.contains("error") || log_line.contains("❌")
{
("ERROR", Style::default().fg(Color::Red), log_line.as_str())
} else if log_line.contains("Warning")
|| log_line.contains("warning")
|| log_line.contains("⚠️")
{
(
"WARN",
Style::default().fg(Color::Yellow),
log_line.as_str(),
)
} else if log_line.contains("Success")
|| log_line.contains("success")
|| log_line.contains("✅")
{
(
"SUCCESS",
Style::default().fg(Color::Green),
log_line.as_str(),
)
} else if log_line.contains("Running")
|| log_line.contains("running")
|| log_line.contains("🔄")
{
("INFO", Style::default().fg(Color::Cyan), log_line.as_str())
} else if log_line.contains("Triggering") || log_line.contains("triggered") {
(
"TRIG",
Style::default().fg(Color::Magenta),
log_line.as_str(),
)
} else {
("INFO", Style::default().fg(Color::Gray), log_line.as_str())
};
// Extract content after timestamp
let content = if log_line.starts_with('[') && log_line.contains(']') {
let start = log_line.find(']').unwrap_or(0) + 1;
log_line[start..].trim()
} else {
log_line.as_str()
};
// Highlight search matches in content if search is active
let mut content_spans = Vec::new();
if !app.log_search_query.is_empty() {
let lowercase_content = content.to_lowercase();
let lowercase_query = app.log_search_query.to_lowercase();
if lowercase_content.contains(&lowercase_query) {
let mut last_idx = 0;
while let Some(idx) = lowercase_content[last_idx..].find(&lowercase_query) {
let real_idx = last_idx + idx;
// Add text before match
if real_idx > last_idx {
content_spans.push(Span::raw(content[last_idx..real_idx].to_string()));
}
// Add matched text with highlight
let match_end = real_idx + app.log_search_query.len();
content_spans.push(Span::styled(
content[real_idx..match_end].to_string(),
Style::default().bg(Color::Yellow).fg(Color::Black),
));
last_idx = match_end;
}
// Add remaining text after last match
if last_idx < content.len() {
content_spans.push(Span::raw(content[last_idx..].to_string()));
}
} else {
content_spans.push(Span::raw(content));
}
} else {
content_spans.push(Span::raw(content));
}
Row::new(vec![
Cell::from(timestamp),
Cell::from(log_type).style(log_style),
Cell::from(Line::from(content_spans)),
])
});
// Convert processed logs to table rows - this is now very fast since logs are pre-processed
let rows = filtered_logs
.iter()
.map(|processed_log| processed_log.to_row());
let content_idx = if show_search_bar { 2 } else { 1 };


@@ -1,6 +1,5 @@
// Status bar rendering
use crate::app::App;
use executor::RuntimeType;
use ratatui::{
backend::CrosstermBackend,
layout::{Alignment, Rect},
@@ -10,6 +9,7 @@ use ratatui::{
Frame,
};
use std::io;
use wrkflw_executor::RuntimeType;
// Render the status bar
pub fn render_status_bar(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App, area: Rect) {
@@ -40,38 +40,77 @@ pub fn render_status_bar(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App,
Style::default()
.bg(match app.runtime_type {
RuntimeType::Docker => Color::Blue,
RuntimeType::Podman => Color::Cyan,
RuntimeType::Emulation => Color::Magenta,
})
.fg(Color::White),
));
// Add Docker status if relevant
if app.runtime_type == RuntimeType::Docker {
// Check Docker silently using safe FD redirection
let is_docker_available =
match utils::fd::with_stderr_to_null(executor::docker::is_available) {
// Add container runtime status if relevant
match app.runtime_type {
RuntimeType::Docker => {
// Check Docker silently using safe FD redirection
let is_docker_available = match wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::docker::is_available,
) {
Ok(result) => result,
Err(_) => {
logging::debug("Failed to redirect stderr when checking Docker availability.");
wrkflw_logging::debug(
"Failed to redirect stderr when checking Docker availability.",
);
false
}
};
status_items.push(Span::raw(" "));
status_items.push(Span::styled(
if is_docker_available {
" Docker: Connected "
} else {
" Docker: Not Available "
},
Style::default()
.bg(if is_docker_available {
Color::Green
status_items.push(Span::raw(" "));
status_items.push(Span::styled(
if is_docker_available {
" Docker: Connected "
} else {
Color::Red
})
.fg(Color::White),
));
" Docker: Not Available "
},
Style::default()
.bg(if is_docker_available {
Color::Green
} else {
Color::Red
})
.fg(Color::White),
));
}
RuntimeType::Podman => {
// Check Podman silently using safe FD redirection
let is_podman_available = match wrkflw_utils::fd::with_stderr_to_null(
wrkflw_executor::podman::is_available,
) {
Ok(result) => result,
Err(_) => {
wrkflw_logging::debug(
"Failed to redirect stderr when checking Podman availability.",
);
false
}
};
status_items.push(Span::raw(" "));
status_items.push(Span::styled(
if is_podman_available {
" Podman: Connected "
} else {
" Podman: Not Available "
},
Style::default()
.bg(if is_podman_available {
Color::Green
} else {
Color::Red
})
.fg(Color::White),
));
}
RuntimeType::Emulation => {
// No need to check anything for emulation mode
}
}
// Add validation/execution mode
@@ -122,7 +161,7 @@ pub fn render_status_bar(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App,
}
2 => {
// For logs tab, show scrolling instructions
let log_count = app.logs.len() + logging::get_logs().len();
let log_count = app.logs.len() + wrkflw_logging::get_logs().len();
if log_count > 0 {
// Convert to a static string for consistent return type
let scroll_text = format!(


@@ -1,13 +1,18 @@
[package]
name = "utils"
name = "wrkflw-utils"
version.workspace = true
edition.workspace = true
description = "utility functions for wrkflw"
description = "Utility functions for wrkflw workflow execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
wrkflw-models = { path = "../models", version = "0.6.0" }
# External dependencies
serde.workspace = true

crates/utils/README.md Normal file

@@ -0,0 +1,21 @@
## wrkflw-utils
Shared helpers used across crates.
- Workflow file detection (`.github/workflows/*.yml`, `.gitlab-ci.yml`)
- File-descriptor redirection utilities for silencing noisy subprocess output
### Example
```rust
use std::path::Path;
use wrkflw_utils::{is_workflow_file, fd::with_stderr_to_null};
assert!(is_workflow_file(Path::new(".github/workflows/ci.yml")));
let value = with_stderr_to_null(|| {
eprintln!("this is hidden");
42
}).unwrap();
assert_eq!(value, 42);
```


@@ -1,14 +1,19 @@
[package]
name = "validators"
name = "wrkflw-validators"
version.workspace = true
edition.workspace = true
description = "validation functionality for wrkflw"
description = "Workflow validation functionality for wrkflw execution engine"
license.workspace = true
documentation.workspace = true
homepage.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
[dependencies]
# Internal crates
models = { path = "../models" }
matrix = { path = "../matrix" }
wrkflw-models = { path = "../models", version = "0.6.0" }
wrkflw-matrix = { path = "../matrix", version = "0.6.0" }
# External dependencies
serde.workspace = true


@@ -0,0 +1,29 @@
## wrkflw-validators
Validation utilities for workflows and steps.
- Validates GitHub Actions sections: jobs, steps, actions references, triggers
- GitLab pipeline validation helpers
- Matrix-specific validation
### Example
```rust
use serde_yaml::Value;
use wrkflw_models::ValidationResult;
use wrkflw_validators::{validate_jobs, validate_triggers};
let yaml: Value = serde_yaml::from_str(r#"name: demo
on: [workflow_dispatch]
jobs: { build: { runs-on: ubuntu-latest, steps: [] } }
"#).unwrap();
let mut res = ValidationResult::new();
if let Some(on) = yaml.get("on") {
validate_triggers(on, &mut res);
}
if let Some(jobs) = yaml.get("jobs") {
validate_jobs(jobs, &mut res);
}
assert!(res.is_valid);
```


@@ -1,4 +1,4 @@
use models::ValidationResult;
use wrkflw_models::ValidationResult;
pub fn validate_action_reference(
action_ref: &str,


@@ -1,6 +1,6 @@
use models::gitlab::{Job, Pipeline};
use models::ValidationResult;
use std::collections::HashMap;
use wrkflw_models::gitlab::{Job, Pipeline};
use wrkflw_models::ValidationResult;
/// Validate a GitLab CI/CD pipeline
pub fn validate_gitlab_pipeline(pipeline: &Pipeline) -> ValidationResult {
@@ -65,7 +65,7 @@ fn validate_jobs(jobs: &HashMap<String, Job>, result: &mut ValidationResult) {
// Check retry configuration
if let Some(retry) = &job.retry {
match retry {
models::gitlab::Retry::MaxAttempts(attempts) => {
wrkflw_models::gitlab::Retry::MaxAttempts(attempts) => {
if *attempts > 10 {
result.add_issue(format!(
"Job '{}' has excessive retry count: {}. Consider reducing to avoid resource waste",
@@ -73,7 +73,7 @@ fn validate_jobs(jobs: &HashMap<String, Job>, result: &mut ValidationResult) {
));
}
}
models::gitlab::Retry::Detailed { max, when: _ } => {
wrkflw_models::gitlab::Retry::Detailed { max, when: _ } => {
if *max > 10 {
result.add_issue(format!(
"Job '{}' has excessive retry count: {}. Consider reducing to avoid resource waste",


@@ -1,6 +1,6 @@
use crate::{validate_matrix, validate_steps};
use models::ValidationResult;
use serde_yaml::Value;
use wrkflw_models::ValidationResult;
pub fn validate_jobs(jobs: &Value, result: &mut ValidationResult) {
if let Value::Mapping(jobs_map) = jobs {


@@ -1,5 +1,5 @@
use models::ValidationResult;
use serde_yaml::Value;
use wrkflw_models::ValidationResult;
pub fn validate_matrix(matrix: &Value, result: &mut ValidationResult) {
// Check if matrix is a mapping


@@ -1,7 +1,7 @@
use crate::validate_action_reference;
use models::ValidationResult;
use serde_yaml::Value;
use std::collections::HashSet;
use wrkflw_models::ValidationResult;
pub fn validate_steps(steps: &[Value], job_name: &str, result: &mut ValidationResult) {
let mut step_ids: HashSet<String> = HashSet::new();


@@ -1,5 +1,5 @@
use models::ValidationResult;
use serde_yaml::Value;
use wrkflw_models::ValidationResult;
pub fn validate_triggers(on: &Value, result: &mut ValidationResult) {
let valid_events = vec![


@@ -12,18 +12,18 @@ license.workspace = true
[dependencies]
# Workspace crates
models = { path = "../models" }
executor = { path = "../executor" }
github = { path = "../github" }
gitlab = { path = "../gitlab" }
logging = { path = "../logging" }
matrix = { path = "../matrix" }
parser = { path = "../parser" }
runtime = { path = "../runtime" }
ui = { path = "../ui" }
utils = { path = "../utils" }
validators = { path = "../validators" }
evaluator = { path = "../evaluator" }
wrkflw-models = { path = "../models", version = "0.6.0" }
wrkflw-executor = { path = "../executor", version = "0.6.0" }
wrkflw-github = { path = "../github", version = "0.6.0" }
wrkflw-gitlab = { path = "../gitlab", version = "0.6.0" }
wrkflw-logging = { path = "../logging", version = "0.6.0" }
wrkflw-matrix = { path = "../matrix", version = "0.6.0" }
wrkflw-parser = { path = "../parser", version = "0.6.0" }
wrkflw-runtime = { path = "../runtime", version = "0.6.0" }
wrkflw-ui = { path = "../ui", version = "0.6.0" }
wrkflw-utils = { path = "../utils", version = "0.6.0" }
wrkflw-validators = { path = "../validators", version = "0.6.0" }
wrkflw-evaluator = { path = "../evaluator", version = "0.6.0" }
# External dependencies
clap.workspace = true

crates/wrkflw/README.md Normal file

@@ -0,0 +1,108 @@
## WRKFLW (CLI and Library)
This crate provides the `wrkflw` command-line interface and a thin library surface that ties together all WRKFLW subcrates. It lets you validate and execute GitHub Actions workflows and GitLab CI pipelines locally, with a built-in TUI for an interactive experience.
- **Validate**: Lints structure and common mistakes in workflow/pipeline files
- **Run**: Executes jobs locally using Docker, Podman, or emulation (no containers)
- **TUI**: Interactive terminal UI for browsing workflows, running, and viewing logs
- **Trigger**: Manually trigger remote runs on GitHub/GitLab
### Installation
```bash
cargo install wrkflw
```
### Quick start
```bash
# Launch the TUI (auto-loads .github/workflows)
wrkflw
# Validate all workflows in the default directory
wrkflw validate
# Validate a specific file or directory
wrkflw validate .github/workflows/ci.yml
wrkflw validate path/to/workflows
# Run a workflow (Docker by default)
wrkflw run .github/workflows/ci.yml
# Use Podman or emulation instead of Docker
wrkflw run --runtime podman .github/workflows/ci.yml
wrkflw run --runtime emulation .github/workflows/ci.yml
# Open the TUI explicitly
wrkflw tui
wrkflw tui --runtime podman
```
### Commands
- **validate**: Validate a workflow/pipeline file or directory
- GitHub (default): `.github/workflows/*.yml`
- GitLab: `.gitlab-ci.yml` or files ending with `gitlab-ci.yml`
- Exit code behavior (by default): `1` when validation failures are detected
- Flags: `--gitlab`, `--exit-code`, `--no-exit-code`, `--verbose`
- **run**: Execute a workflow or pipeline locally
- Runtimes: `docker` (default), `podman`, `emulation`
- Flags: `--runtime`, `--preserve-containers-on-failure`, `--gitlab`, `--verbose`
- **tui**: Interactive terminal interface
- Browse workflows, execute, and inspect logs and job details
- **trigger**: Trigger a GitHub workflow (requires `GITHUB_TOKEN`)
- **trigger-gitlab**: Trigger a GitLab pipeline (requires `GITLAB_TOKEN`)
- **list**: Show detected workflows and pipelines in the repo
### Environment variables
- **GITHUB_TOKEN**: Required for `trigger` when calling GitHub
- **GITLAB_TOKEN**: Required for `trigger-gitlab` (api scope)
### Exit codes
- `validate`: `0` if all pass; `1` if any fail (unless `--no-exit-code`)
- `run`: `0` on success, `1` if execution fails
### Library usage
This crate re-exports subcrates for convenience if you want to embed functionality:
```rust
use std::path::Path;
use wrkflw::executor::{execute_workflow, ExecutionConfig, RuntimeType};
# tokio_test::block_on(async {
let cfg = ExecutionConfig {
runtime_type: RuntimeType::Docker,
verbose: true,
preserve_containers_on_failure: false,
};
let result = execute_workflow(Path::new(".github/workflows/ci.yml"), cfg).await?;
println!("status: {:?}", result.summary_status);
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
You can also run the TUI programmatically:
```rust
use std::path::PathBuf;
use wrkflw::executor::RuntimeType;
use wrkflw::ui::run_wrkflw_tui;
# tokio_test::block_on(async {
let path = PathBuf::from(".github/workflows");
run_wrkflw_tui(Some(&path), RuntimeType::Docker, true, false).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
### Notes
- See the repository root README for feature details, limitations, and a full walkthrough.
- Service containers and advanced Actions features are best supported in Docker/Podman modes.
- Emulation mode skips containerized steps and runs commands on the host.


@@ -1,12 +1,12 @@
pub use evaluator;
pub use executor;
pub use github;
pub use gitlab;
pub use logging;
pub use matrix;
pub use models;
pub use parser;
pub use runtime;
pub use ui;
pub use utils;
pub use validators;
pub use wrkflw_evaluator as evaluator;
pub use wrkflw_executor as executor;
pub use wrkflw_github as github;
pub use wrkflw_gitlab as gitlab;
pub use wrkflw_logging as logging;
pub use wrkflw_matrix as matrix;
pub use wrkflw_models as models;
pub use wrkflw_parser as parser;
pub use wrkflw_runtime as runtime;
pub use wrkflw_ui as ui;
pub use wrkflw_utils as utils;
pub use wrkflw_validators as validators;


@@ -1,15 +1,35 @@
use bollard::Docker;
use clap::{Parser, Subcommand};
use clap::{Parser, Subcommand, ValueEnum};
use std::collections::HashMap;
use std::path::Path;
use std::path::PathBuf;
#[derive(Debug, Clone, ValueEnum)]
enum RuntimeChoice {
/// Use Docker containers for isolation
Docker,
/// Use Podman containers for isolation
Podman,
/// Use process emulation mode (no containers)
Emulation,
}
impl From<RuntimeChoice> for wrkflw_executor::RuntimeType {
fn from(choice: RuntimeChoice) -> Self {
match choice {
RuntimeChoice::Docker => wrkflw_executor::RuntimeType::Docker,
RuntimeChoice::Podman => wrkflw_executor::RuntimeType::Podman,
RuntimeChoice::Emulation => wrkflw_executor::RuntimeType::Emulation,
}
}
}
#[derive(Debug, Parser)]
#[command(
name = "wrkflw",
about = "GitHub & GitLab CI/CD validator and executor",
version,
long_about = "A CI/CD validator and executor that runs workflows locally.\n\nExamples:\n wrkflw validate # Validate all workflows in .github/workflows\n wrkflw run .github/workflows/build.yml # Run a specific workflow\n wrkflw run .gitlab-ci.yml # Run a GitLab CI pipeline\n wrkflw --verbose run .github/workflows/build.yml # Run with more output\n wrkflw --debug run .github/workflows/build.yml # Run with detailed debug information\n wrkflw run --emulate .github/workflows/build.yml # Use emulation mode instead of Docker\n wrkflw run --preserve-containers-on-failure .github/workflows/build.yml # Keep failed containers for debugging"
long_about = "A CI/CD validator and executor that runs workflows locally.\n\nExamples:\n wrkflw validate # Validate all workflows in .github/workflows\n wrkflw run .github/workflows/build.yml # Run a specific workflow\n wrkflw run .gitlab-ci.yml # Run a GitLab CI pipeline\n wrkflw --verbose run .github/workflows/build.yml # Run with more output\n wrkflw --debug run .github/workflows/build.yml # Run with detailed debug information\n wrkflw run --runtime emulation .github/workflows/build.yml # Use emulation mode instead of containers\n wrkflw run --runtime podman .github/workflows/build.yml # Use Podman instead of Docker\n wrkflw run --preserve-containers-on-failure .github/workflows/build.yml # Keep failed containers for debugging"
)]
struct Wrkflw {
#[command(subcommand)]
@@ -49,9 +69,9 @@ enum Commands {
/// Path to workflow/pipeline file to execute
path: PathBuf,
/// Use emulation mode instead of Docker
#[arg(short, long)]
emulate: bool,
/// Container runtime to use (docker, podman, emulation)
#[arg(short, long, value_enum, default_value = "docker")]
runtime: RuntimeChoice,
/// Show 'Would execute GitHub action' messages in emulation mode
#[arg(long, default_value_t = false)]
@@ -71,9 +91,9 @@ enum Commands {
/// Path to workflow file or directory (defaults to .github/workflows)
path: Option<PathBuf>,
/// Use emulation mode instead of Docker
#[arg(short, long)]
emulate: bool,
/// Container runtime to use (docker, podman, emulation)
#[arg(short, long, value_enum, default_value = "docker")]
runtime: RuntimeChoice,
/// Show 'Would execute GitHub action' messages in emulation mode
#[arg(long, default_value_t = false)]
@@ -123,7 +143,7 @@ fn parse_key_val(s: &str) -> Result<(String, String), String> {
}
// Make this function public for testing? Or move to a utils/cleanup mod?
// Or call executor::cleanup and runtime::cleanup directly?
// Or call wrkflw_executor::cleanup and wrkflw_runtime::cleanup directly?
// Let's try calling them directly for now.
async fn cleanup_on_exit() {
// Clean up Docker resources if available, but don't let it block indefinitely
@@ -131,35 +151,35 @@ async fn cleanup_on_exit() {
match Docker::connect_with_local_defaults() {
Ok(docker) => {
// Assuming cleanup_resources exists in executor crate
executor::cleanup_resources(&docker).await;
wrkflw_executor::cleanup_resources(&docker).await;
}
Err(_) => {
// Docker not available
logging::info("Docker not available, skipping Docker cleanup");
wrkflw_logging::info("Docker not available, skipping Docker cleanup");
}
}
})
.await
{
Ok(_) => logging::debug("Docker cleanup completed successfully"),
Err(_) => {
logging::warning("Docker cleanup timed out after 3 seconds, continuing with shutdown")
}
Ok(_) => wrkflw_logging::debug("Docker cleanup completed successfully"),
Err(_) => wrkflw_logging::warning(
"Docker cleanup timed out after 3 seconds, continuing with shutdown",
),
}
// Always clean up emulation resources
match tokio::time::timeout(
std::time::Duration::from_secs(2),
// Assuming cleanup_resources exists in runtime::emulation module
runtime::emulation::cleanup_resources(),
// Assuming cleanup_resources exists in wrkflw_runtime::emulation module
wrkflw_runtime::emulation::cleanup_resources(),
)
.await
{
Ok(_) => logging::debug("Emulation cleanup completed successfully"),
Err(_) => logging::warning("Emulation cleanup timed out, continuing with shutdown"),
Ok(_) => wrkflw_logging::debug("Emulation cleanup completed successfully"),
Err(_) => wrkflw_logging::warning("Emulation cleanup timed out, continuing with shutdown"),
}
logging::info("Resource cleanup completed");
wrkflw_logging::info("Resource cleanup completed");
}
async fn handle_signals() {
@@ -187,7 +207,7 @@ async fn handle_signals() {
"Cleanup taking too long (over {} seconds), forcing exit...",
hard_exit_time.as_secs()
);
logging::error("Forced exit due to cleanup timeout");
wrkflw_logging::error("Forced exit due to cleanup timeout");
std::process::exit(1);
});
@@ -252,13 +272,13 @@ async fn main() {
// Set log level based on command line flags
if debug {
logging::set_log_level(logging::LogLevel::Debug);
logging::debug("Debug mode enabled - showing detailed logs");
wrkflw_logging::set_log_level(wrkflw_logging::LogLevel::Debug);
wrkflw_logging::debug("Debug mode enabled - showing detailed logs");
} else if verbose {
logging::set_log_level(logging::LogLevel::Info);
logging::info("Verbose mode enabled");
wrkflw_logging::set_log_level(wrkflw_logging::LogLevel::Info);
wrkflw_logging::info("Verbose mode enabled");
} else {
logging::set_log_level(logging::LogLevel::Warning);
wrkflw_logging::set_log_level(wrkflw_logging::LogLevel::Warning);
}
// Setup a Ctrl+C handler that runs in the background
@@ -334,18 +354,14 @@ async fn main() {
}
Some(Commands::Run {
path,
emulate,
runtime,
show_action_messages: _,
preserve_containers_on_failure,
gitlab,
}) => {
// Create execution configuration
let config = executor::ExecutionConfig {
runtime_type: if *emulate {
executor::RuntimeType::Emulation
} else {
executor::RuntimeType::Docker
},
let config = wrkflw_executor::ExecutionConfig {
runtime_type: runtime.clone().into(),
verbose,
preserve_containers_on_failure: *preserve_containers_on_failure,
};
@@ -358,10 +374,10 @@ async fn main() {
"GitHub workflow"
};
logging::info(&format!("Running {} at: {}", workflow_type, path.display()));
wrkflw_logging::info(&format!("Running {} at: {}", workflow_type, path.display()));
// Execute the workflow
let result = executor::execute_workflow(path, config)
let result = wrkflw_executor::execute_workflow(path, config)
.await
.unwrap_or_else(|e| {
eprintln!("Error executing workflow: {}", e);
@@ -403,15 +419,15 @@ async fn main() {
println!(
" {} {} ({})",
match job.status {
executor::JobStatus::Success => "✅",
executor::JobStatus::Failure => "❌",
executor::JobStatus::Skipped => "⏭️",
wrkflw_executor::JobStatus::Success => "✅",
wrkflw_executor::JobStatus::Failure => "❌",
wrkflw_executor::JobStatus::Skipped => "⏭️",
},
job.name,
match job.status {
executor::JobStatus::Success => "success",
executor::JobStatus::Failure => "failure",
executor::JobStatus::Skipped => "skipped",
wrkflw_executor::JobStatus::Success => "success",
wrkflw_executor::JobStatus::Failure => "failure",
wrkflw_executor::JobStatus::Skipped => "skipped",
}
);
@@ -419,15 +435,15 @@ async fn main() {
println!(" Steps:");
for step in job.steps {
let step_status = match step.status {
executor::StepStatus::Success => "✅",
executor::StepStatus::Failure => "❌",
executor::StepStatus::Skipped => "⏭️",
wrkflw_executor::StepStatus::Success => "✅",
wrkflw_executor::StepStatus::Failure => "❌",
wrkflw_executor::StepStatus::Skipped => "⏭️",
};
println!(" {} {}", step_status, step.name);
// If step failed and we're not in verbose mode, show condensed error info
if step.status == executor::StepStatus::Failure && !verbose {
if step.status == wrkflw_executor::StepStatus::Failure && !verbose {
// Extract error information from step output
let error_lines = step
.output
@@ -466,26 +482,22 @@ async fn main() {
.map(|v| v.iter().cloned().collect::<HashMap<String, String>>());
// Trigger the pipeline
if let Err(e) = gitlab::trigger_pipeline(branch.as_deref(), variables).await {
if let Err(e) = wrkflw_gitlab::trigger_pipeline(branch.as_deref(), variables).await {
eprintln!("Error triggering GitLab pipeline: {}", e);
std::process::exit(1);
}
}
Some(Commands::Tui {
path,
emulate,
runtime,
show_action_messages: _,
preserve_containers_on_failure,
}) => {
// Set runtime type based on the emulate flag
let runtime_type = if *emulate {
executor::RuntimeType::Emulation
} else {
executor::RuntimeType::Docker
};
// Set runtime type based on the runtime choice
let runtime_type = runtime.clone().into();
// Call the TUI implementation from the ui crate
if let Err(e) = ui::run_wrkflw_tui(
if let Err(e) = wrkflw_ui::run_wrkflw_tui(
path.as_ref(),
runtime_type,
verbose,
@@ -508,7 +520,9 @@ async fn main() {
.map(|i| i.iter().cloned().collect::<HashMap<String, String>>());
// Trigger the workflow
if let Err(e) = github::trigger_workflow(workflow, branch.as_deref(), inputs).await {
if let Err(e) =
wrkflw_github::trigger_workflow(workflow, branch.as_deref(), inputs).await
{
eprintln!("Error triggering GitHub workflow: {}", e);
std::process::exit(1);
}
@@ -518,10 +532,10 @@ async fn main() {
}
None => {
// Launch TUI by default when no command is provided
let runtime_type = executor::RuntimeType::Docker;
let runtime_type = wrkflw_executor::RuntimeType::Docker;
// Call the TUI implementation from the ui crate with default path
if let Err(e) = ui::run_wrkflw_tui(None, runtime_type, verbose, false).await {
if let Err(e) = wrkflw_ui::run_wrkflw_tui(None, runtime_type, verbose, false).await {
eprintln!("Error running TUI: {}", e);
std::process::exit(1);
}
@@ -535,13 +549,13 @@ fn validate_github_workflow(path: &Path, verbose: bool) -> bool {
print!("Validating GitHub workflow file: {}... ", path.display());
// Use the ui crate's validate_workflow function
match ui::validate_workflow(path, verbose) {
match wrkflw_ui::validate_workflow(path, verbose) {
Ok(_) => {
// The detailed validation output is already printed by the function
// We need to check if there were validation issues
// Since ui::validate_workflow doesn't return the validation result directly,
// Since wrkflw_ui::validate_workflow doesn't return the validation result directly,
// we need to call the evaluator directly to get the result
match evaluator::evaluate_workflow_file(path, verbose) {
match wrkflw_evaluator::evaluate_workflow_file(path, verbose) {
Ok(result) => !result.is_valid,
Err(_) => true, // Parse errors count as validation failure
}
@@ -559,12 +573,12 @@ fn validate_gitlab_pipeline(path: &Path, verbose: bool) -> bool {
print!("Validating GitLab CI pipeline file: {}... ", path.display());
// Parse and validate the pipeline file
match parser::gitlab::parse_pipeline(path) {
match wrkflw_parser::gitlab::parse_pipeline(path) {
Ok(pipeline) => {
println!("✅ Valid syntax");
// Additional structural validation
let validation_result = validators::validate_gitlab_pipeline(&pipeline);
let validation_result = wrkflw_validators::validate_gitlab_pipeline(&pipeline);
if !validation_result.is_valid {
println!("⚠️ Validation issues:");

publish_crates.sh Executable file

@@ -0,0 +1,71 @@
#!/bin/bash
# Simple script to publish all wrkflw crates to crates.io in dependency order
set -e
DRY_RUN=${1:-""}
if [[ "$DRY_RUN" == "--dry-run" ]]; then
echo "🧪 DRY RUN: Testing wrkflw crates publication"
else
echo "🚀 Publishing wrkflw crates to crates.io"
fi
# Check if we're logged in to crates.io
if [ ! -f ~/.cargo/credentials.toml ] && [ ! -f ~/.cargo/credentials ]; then
echo "❌ Not logged in to crates.io. Please run: cargo login <your-token>"
exit 1
fi
# Publication order (respecting dependencies)
CRATES=(
"models"
"logging"
"utils"
"matrix"
"validators"
"github"
"gitlab"
"parser"
"runtime"
"evaluator"
"executor"
"ui"
"wrkflw"
)
echo "📦 Publishing crates in dependency order..."
for crate in "${CRATES[@]}"; do
if [[ "$DRY_RUN" == "--dry-run" ]]; then
echo "Testing $crate..."
cd "crates/$crate"
cargo publish --dry-run --allow-dirty
echo "$crate dry-run successful"
else
echo "Publishing $crate..."
cd "crates/$crate"
cargo publish --allow-dirty
echo "✅ Published $crate"
fi
cd - > /dev/null
# Small delay to avoid rate limiting (except for the last crate and in dry-run)
if [[ "$crate" != "wrkflw" ]] && [[ "$DRY_RUN" != "--dry-run" ]]; then
echo " Waiting 10 seconds to avoid rate limits..."
sleep 10
fi
done
if [[ "$DRY_RUN" == "--dry-run" ]]; then
echo "🎉 All crates passed dry-run tests!"
echo ""
echo "To actually publish, run:"
echo " ./publish_crates.sh"
else
echo "🎉 All crates published successfully!"
echo ""
echo "Users can now install wrkflw with:"
echo " cargo install wrkflw"
fi


@@ -1,6 +1,6 @@
# Testing Strategy
This directory contains integration tests for the `wrkflw` project. We follow the Rust testing best practices by organizing tests as follows:
This directory contains all tests and test-related files for the `wrkflw` project. We follow the Rust testing best practices by organizing tests as follows:
## Test Organization
@@ -11,6 +11,17 @@ This directory contains integration tests for the `wrkflw` project. We follow th
- **End-to-End Tests**: Also located in this `tests/` directory
- `cleanup_test.rs` - Tests for cleanup functionality with Docker resources
## Test Directory Structure
- **`fixtures/`**: Test data and configuration files
- `gitlab-ci/` - GitLab CI configuration files for testing
- **`workflows/`**: GitHub Actions workflow files for testing
- Various YAML files for testing workflow validation and execution
- **`scripts/`**: Test automation scripts
- `test-podman-basic.sh` - Basic Podman integration test script
- `test-preserve-containers.sh` - Container preservation testing script
- **`TESTING_PODMAN.md`**: Comprehensive Podman testing documentation
## Running Tests
To run all tests:

tests/TESTING_PODMAN.md Normal file

@@ -0,0 +1,487 @@
# Testing Podman Support in WRKFLW
This document provides comprehensive testing steps to verify that Podman support is working correctly in wrkflw.
## Prerequisites
### 1. Install Podman
Choose the installation method for your operating system:
#### macOS (using Homebrew)
```bash
brew install podman
```
#### Ubuntu/Debian
```bash
sudo apt-get update
sudo apt-get install podman
```
#### RHEL/CentOS/Fedora
```bash
# Fedora
sudo dnf install podman
# RHEL/CentOS 8+
sudo dnf install podman
```
#### Windows
```bash
# Using Chocolatey
choco install podman-desktop
# Or download from https://podman.io/getting-started/installation
```
### 2. Initialize Podman (macOS/Windows only)
```bash
podman machine init
podman machine start
```
### 3. Verify Podman Installation
```bash
podman version
podman info
```
Expected output should show Podman version and system information without errors.
### 4. Build WRKFLW with Podman Support
```bash
cd /path/to/wrkflw
cargo build --release
```
## Test Plan
### Test 1: CLI Runtime Selection
#### 1.1 Test Default Runtime (Docker)
```bash
# Should default to Docker
./target/release/wrkflw run --help | grep -A 5 "runtime"
```
Expected: Should show `--runtime` option with default value `docker`.
#### 1.2 Test Podman Runtime Selection
```bash
# Should accept podman as runtime
./target/release/wrkflw run --runtime podman tests/workflows/example.yml
```
Expected: Should run without CLI argument errors.
#### 1.3 Test Emulation Runtime Selection
```bash
# Should accept emulation as runtime
./target/release/wrkflw run --runtime emulation tests/workflows/example.yml
```
Expected: Should run without CLI argument errors.
#### 1.4 Test Invalid Runtime Selection
```bash
# Should reject invalid runtime
./target/release/wrkflw run --runtime invalid tests/workflows/example.yml
```
Expected: Should show error about invalid runtime choice.
### Test 2: Podman Availability Detection
#### 2.1 Test with Podman Available
```bash
# Ensure Podman is running
podman info > /dev/null && echo "Podman is available"
# Test wrkflw detection
./target/release/wrkflw run --runtime podman --verbose tests/workflows/example.yml
```
Expected: Should show "Podman is available, using Podman runtime" in logs.
#### 2.2 Test with Podman Unavailable
```bash
# Temporarily make podman unavailable
sudo mv /usr/local/bin/podman /usr/local/bin/podman.bak 2>/dev/null || echo "Podman not in /usr/local/bin"
# Test fallback to emulation
./target/release/wrkflw run --runtime podman --verbose tests/workflows/example.yml
# Restore podman
sudo mv /usr/local/bin/podman.bak /usr/local/bin/podman 2>/dev/null || echo "Nothing to restore"
```
Expected: Should show "Podman is not available. Using emulation mode instead."
### Test 3: Container Execution with Podman
#### 3.1 Create a Simple Test Workflow
Create `test-podman-workflow.yml`:
```yaml
name: Test Podman Workflow
on: [workflow_dispatch]

jobs:
  test-podman:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    steps:
      - name: Test basic commands
        run: |
          echo "Testing Podman container execution"
          whoami
          pwd
          ls -la
          echo "Container test completed successfully"
      - name: Test environment variables
        env:
          TEST_VAR: "podman-test"
        run: |
          echo "Testing environment variables"
          echo "TEST_VAR: $TEST_VAR"
          echo "GITHUB_WORKSPACE: $GITHUB_WORKSPACE"
          echo "RUNNER_OS: $RUNNER_OS"
      - name: Test volume mounting
        run: |
          echo "Testing volume mounting"
          echo "test-file-content" > test-file.txt
          cat test-file.txt
          ls -la test-file.txt
```
#### 3.2 Test Podman Container Execution
```bash
./target/release/wrkflw run --runtime podman --verbose test-podman-workflow.yml
```
Expected: Should execute all steps successfully using Podman containers.
#### 3.3 Compare with Docker Execution
```bash
# Test same workflow with Docker
./target/release/wrkflw run --runtime docker --verbose test-podman-workflow.yml
# Test same workflow with emulation
./target/release/wrkflw run --runtime emulation --verbose test-podman-workflow.yml
```
Expected: All three runtimes should produce similar results (emulation may have limitations).
### Test 4: TUI Interface Testing
#### 4.1 Test TUI Runtime Selection
```bash
./target/release/wrkflw tui tests/workflows/
```
**Test Steps:**
1. Launch TUI
2. Press `e` key to cycle through runtimes
3. Verify status bar shows: Docker → Podman → Emulation → Docker
4. Check that Podman status shows "Connected" or "Not Available"
5. Select a workflow and run it with Podman runtime
#### 4.2 Test TUI with Specific Runtime
```bash
# Start TUI with Podman runtime
./target/release/wrkflw tui --runtime podman tests/workflows/
# Start TUI with emulation runtime
./target/release/wrkflw tui --runtime emulation tests/workflows/
```
Expected: TUI should start with the specified runtime active.
### Test 5: Container Preservation Testing
**Note**: Container preservation is fully supported with Podman and works correctly.
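The cleanup rule that the tests below verify can be stated as a small predicate. This is a Python sketch of the observed behavior, not wrkflw's actual implementation:

```python
def should_remove_container(job_failed: bool, preserve_on_failure: bool) -> bool:
    """Decide whether a wrkflw container is cleaned up after a run.

    Successful containers are always removed; a failed container survives
    only when --preserve-containers-on-failure was passed.
    """
    return not (job_failed and preserve_on_failure)

# Truth table matching Tests 5.1 and 5.2:
assert should_remove_container(False, False) is True   # success, no flag
assert should_remove_container(False, True) is True    # success, flag set
assert should_remove_container(True, False) is True    # failure, no flag
assert should_remove_container(True, True) is False    # failure, flag set -> preserved
```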
#### 5.1 Test Container Cleanup (Default)
```bash
# Run a workflow that will fail
echo 'name: Failing Test
on: [workflow_dispatch]

jobs:
  fail:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    steps:
      - run: exit 1' > test-fail-workflow.yml
./target/release/wrkflw run --runtime podman test-fail-workflow.yml
# Check if containers were cleaned up
podman ps -a --filter "name=wrkflw-"
```
Expected: No wrkflw containers should remain.
#### 5.2 Test Container Preservation on Failure
```bash
./target/release/wrkflw run --runtime podman --preserve-containers-on-failure test-fail-workflow.yml
# Check if failed container was preserved
podman ps -a --filter "name=wrkflw-"
```
Expected: Should show preserved container. Note the container ID for inspection.
#### 5.3 Test Container Inspection
```bash
# Get container ID from previous step
CONTAINER_ID=$(podman ps -a --filter "name=wrkflw-" --format "{{.ID}}" | head -1)
# Inspect the preserved container
podman exec -it $CONTAINER_ID bash
# Inside container: explore the environment, check files, etc.
# Exit with: exit
# Clean up manually
podman rm $CONTAINER_ID
```
### Test 6: Image Operations Testing
#### 6.1 Test Image Pulling
```bash
# Create workflow that uses a specific image
echo 'name: Image Pull Test
on: [workflow_dispatch]

jobs:
  test:
    runs-on: ubuntu-latest
    container: node:18-alpine
    steps:
      - run: node --version' > test-image-pull.yml
./target/release/wrkflw run --runtime podman --verbose test-image-pull.yml
```
Expected: Should pull node:18-alpine image and execute successfully.
#### 6.2 Test Custom Image Building
```bash
# Create a workflow that builds a custom image (if supported)
# This tests the build_image functionality
mkdir -p test-build
echo 'FROM ubuntu:20.04
RUN apt-get update && apt-get install -y curl
CMD ["echo", "Custom image test"]' > test-build/Dockerfile
echo 'name: Image Build Test
on: [workflow_dispatch]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Build and test custom image
        run: |
          echo "Testing custom image scenarios"
          curl --version' > test-custom-image.yml
# Note: This test depends on language environment preparation
./target/release/wrkflw run --runtime podman --verbose test-custom-image.yml
```
### Test 7: Error Handling and Edge Cases
#### 7.1 Test Invalid Container Image
```bash
echo 'name: Invalid Image Test
on: [workflow_dispatch]

jobs:
  test:
    runs-on: ubuntu-latest
    container: nonexistent-image:invalid-tag
    steps:
      - run: echo "This should fail"' > test-invalid-image.yml
./target/release/wrkflw run --runtime podman test-invalid-image.yml
```
Expected: Should handle image pull failure gracefully with clear error message.
#### 7.2 Test Network Connectivity
```bash
echo 'name: Network Test
on: [workflow_dispatch]

jobs:
  test:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    steps:
      - name: Test network access
        run: |
          apt-get update
          apt-get install -y curl
          curl -s https://httpbin.org/get
      - name: Test DNS resolution
        run: nslookup google.com' > test-network.yml
./target/release/wrkflw run --runtime podman --verbose test-network.yml
```
Expected: Should have network access and complete successfully.
#### 7.3 Test Resource Intensive Workflow
```bash
echo 'name: Resource Test
on: [workflow_dispatch]

jobs:
  test:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    steps:
      - name: Memory test
        run: |
          echo "Testing memory usage"
          free -h
          dd if=/dev/zero of=/tmp/test bs=1M count=100
          ls -lh /tmp/test
          rm /tmp/test
      - name: CPU test
        run: |
          echo "Testing CPU usage"
          yes > /dev/null &
          PID=$!
          sleep 2
          kill $PID
          echo "CPU test completed"' > test-resources.yml
./target/release/wrkflw run --runtime podman --verbose test-resources.yml
```
### Test 8: Comparison Testing
#### 8.1 Create Comprehensive Test Workflow
```bash
echo 'name: Comprehensive Runtime Comparison
on: [workflow_dispatch]

env:
  GLOBAL_VAR: "global-value"

jobs:
  test-all-features:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    env:
      JOB_VAR: "job-value"
    steps:
      - name: Environment test
        env:
          STEP_VAR: "step-value"
        run: |
          echo "=== Environment Variables ==="
          echo "GLOBAL_VAR: $GLOBAL_VAR"
          echo "JOB_VAR: $JOB_VAR"
          echo "STEP_VAR: $STEP_VAR"
          echo "GITHUB_WORKSPACE: $GITHUB_WORKSPACE"
          echo "GITHUB_REPOSITORY: $GITHUB_REPOSITORY"
          echo "RUNNER_OS: $RUNNER_OS"
      - name: File system test
        run: |
          echo "=== File System Test ==="
          pwd
          ls -la
          whoami
          id
          df -h
      - name: Network test
        run: |
          echo "=== Network Test ==="
          apt-get update -q
          apt-get install -y curl iputils-ping
          ping -c 3 8.8.8.8
          curl -s https://httpbin.org/ip
      - name: Process test
        run: |
          echo "=== Process Test ==="
          ps aux
          top -b -n 1 | head -10
      - name: Package installation test
        run: |
          echo "=== Package Test ==="
          apt-get install -y python3 python3-pip
          python3 --version
          pip3 --version' > comprehensive-test.yml
```
#### 8.2 Run Comprehensive Test with All Runtimes
```bash
echo "Testing with Docker:"
./target/release/wrkflw run --runtime docker --verbose comprehensive-test.yml > docker-test.log 2>&1
DOCKER_EXIT=$?
echo "Testing with Podman:"
./target/release/wrkflw run --runtime podman --verbose comprehensive-test.yml > podman-test.log 2>&1
PODMAN_EXIT=$?
echo "Testing with Emulation:"
./target/release/wrkflw run --runtime emulation --verbose comprehensive-test.yml > emulation-test.log 2>&1
EMULATION_EXIT=$?

# Compare results ($? only holds the status of the most recent command,
# so each exit code is captured immediately after its run)
echo "=== Comparing Results ==="
echo "Docker exit code: $DOCKER_EXIT"
echo "Podman exit code: $PODMAN_EXIT"
echo "Emulation exit code: $EMULATION_EXIT"

# Optional: Compare log outputs
diff docker-test.log podman-test.log | head -20
```
## Expected Results Summary
### ✅ **Should Work:**
- CLI accepts `--runtime podman` without errors
- TUI cycles through Docker → Podman → Emulation with 'e' key
- Status bar shows Podman availability correctly
- Container execution works identically to Docker
- Container cleanup respects preservation settings
- Image pulling and basic image operations work
- Environment variables are passed correctly
- Volume mounting works for workspace access
- Network connectivity is available in containers
- Error handling is graceful and informative
### ⚠️ **Limitations to Expect:**
- Some advanced Docker-specific features may not work identically
- Performance characteristics may differ from Docker
- Podman-specific configuration might be needed for complex scenarios
- Error messages may differ between Docker and Podman
### 🚨 **Should Fail Gracefully:**
- Invalid runtime selection should show clear error
- Missing Podman should fall back to emulation with warning
- Invalid container images should show helpful error messages
- Network issues should be reported clearly
## Cleanup
After testing, clean up test files:
```bash
rm -f test-podman-workflow.yml test-fail-workflow.yml test-image-pull.yml
rm -f test-custom-image.yml test-invalid-image.yml test-network.yml
rm -f test-resources.yml comprehensive-test.yml
rm -f docker-test.log podman-test.log emulation-test.log
rm -rf test-build/
podman system prune -f # Clean up unused containers and images
```
## Troubleshooting
### Common Issues:
1. **"Podman not available"**
- Verify Podman installation: `podman version`
- Check Podman service: `podman machine list` (macOS/Windows)
2. **Permission errors**
- Podman should work rootless by default
- Check user namespaces: `podman unshare cat /proc/self/uid_map`
3. **Network issues**
- Test basic connectivity: `podman run --rm ubuntu:20.04 ping -c 1 8.8.8.8`
4. **Container startup failures**
- Check Podman logs: `podman logs <container-id>`
- Verify image availability: `podman images`
This comprehensive testing plan should verify that Podman support is working correctly and help identify any issues that need to be addressed.


@@ -0,0 +1,120 @@
use std::fs;
use tempfile::tempdir;
use wrkflw::executor::engine::{execute_workflow, ExecutionConfig, RuntimeType};

fn write_file(path: &std::path::Path, content: &str) {
    fs::write(path, content).expect("failed to write file");
}

#[tokio::test]
async fn test_local_reusable_workflow_execution_success() {
    // Create temp workspace
    let dir = tempdir().unwrap();
    let called_path = dir.path().join("called.yml");
    let caller_path = dir.path().join("caller.yml");

    // Minimal called workflow with one successful job
    let called = r#"
name: Called
on: workflow_dispatch
jobs:
  inner:
    runs-on: ubuntu-latest
    steps:
      - run: echo "hello from called"
"#;
    write_file(&called_path, called);

    // Caller workflow that uses the called workflow via absolute local path
    let caller = format!(
        r#"
name: Caller
on: workflow_dispatch
jobs:
  call:
    uses: {}
    with:
      foo: bar
    secrets:
      token: testsecret
"#,
        called_path.display()
    );
    write_file(&caller_path, &caller);

    // Execute caller workflow with emulation runtime
    let cfg = ExecutionConfig {
        runtime_type: RuntimeType::Emulation,
        verbose: false,
        preserve_containers_on_failure: false,
    };
    let result = execute_workflow(&caller_path, cfg)
        .await
        .expect("workflow execution failed");

    // Expect a single caller job summarized
    assert_eq!(result.jobs.len(), 1, "expected one caller job result");
    let job = &result.jobs[0];
    assert_eq!(job.name, "call");
    assert_eq!(format!("{:?}", job.status), "Success");

    // Summary step should include reference to called workflow and inner job status
    assert!(
        job.logs.contains("Called workflow:"),
        "expected summary logs to include called workflow path"
    );
    assert!(
        job.logs.contains("- inner: Success"),
        "expected inner job success in summary"
    );
}

#[tokio::test]
async fn test_local_reusable_workflow_execution_failure_propagates() {
    // Create temp workspace
    let dir = tempdir().unwrap();
    let called_path = dir.path().join("called.yml");
    let caller_path = dir.path().join("caller.yml");

    // Called workflow with failing job
    let called = r#"
name: Called
on: workflow_dispatch
jobs:
  inner:
    runs-on: ubuntu-latest
    steps:
      - run: false
"#;
    write_file(&called_path, called);

    // Caller workflow
    let caller = format!(
        r#"
name: Caller
on: workflow_dispatch
jobs:
  call:
    uses: {}
"#,
        called_path.display()
    );
    write_file(&caller_path, &caller);

    // Execute caller workflow
    let cfg = ExecutionConfig {
        runtime_type: RuntimeType::Emulation,
        verbose: false,
        preserve_containers_on_failure: false,
    };
    let result = execute_workflow(&caller_path, cfg)
        .await
        .expect("workflow execution failed");

    assert_eq!(result.jobs.len(), 1);
    let job = &result.jobs[0];
    assert_eq!(job.name, "call");
    assert_eq!(format!("{:?}", job.status), "Failure");
    assert!(job.logs.contains("- inner: Failure"));
}


@@ -0,0 +1,215 @@
#!/bin/bash
# Basic Podman Support Test Script for WRKFLW
# This script performs quick verification of Podman integration
set -e # Exit on any error
echo "🚀 WRKFLW Podman Support - Basic Test Script"
echo "============================================="
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
    echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}
# Check if wrkflw binary exists
print_status "Checking if wrkflw is built..."
if [ ! -f "./target/release/wrkflw" ]; then
    print_warning "Release binary not found. Building wrkflw..."
    cargo build --release
    if [ $? -eq 0 ]; then
        print_success "Build completed successfully"
    else
        print_error "Build failed"
        exit 1
    fi
else
    print_success "Found wrkflw binary"
fi
# Test 1: Check CLI help shows runtime options
print_status "Test 1: Checking CLI runtime options..."
HELP_OUTPUT=$(./target/release/wrkflw run --help 2>&1)
if echo "$HELP_OUTPUT" | grep -q "runtime.*podman"; then
    print_success "CLI shows Podman runtime option"
else
    print_error "CLI does not show Podman runtime option"
    exit 1
fi
# Test 2: Check invalid runtime rejection
print_status "Test 2: Testing invalid runtime rejection..."
if ./target/release/wrkflw run --runtime invalid tests/workflows/example.yml 2>&1 | grep -q "invalid value"; then
    print_success "Invalid runtime properly rejected"
else
    print_error "Invalid runtime not properly rejected"
    exit 1
fi
# Test 3: Check Podman availability detection
print_status "Test 3: Testing Podman availability detection..."
if command -v podman &> /dev/null; then
    print_success "Podman is installed and available"
    PODMAN_VERSION=$(podman version --format json | python3 -c "import sys, json; print(json.load(sys.stdin)['Client']['Version'])" 2>/dev/null || echo "unknown")
    print_status "Podman version: $PODMAN_VERSION"
    # Test basic podman functionality
    if podman info > /dev/null 2>&1; then
        print_success "Podman daemon is responsive"
        PODMAN_AVAILABLE=true
    else
        print_warning "Podman installed but not responsive (may need podman machine start)"
        PODMAN_AVAILABLE=false
    fi
else
    print_warning "Podman not installed - will test fallback behavior"
    PODMAN_AVAILABLE=false
fi
# Create a simple test workflow
print_status "Creating test workflow..."
cat > test-basic-workflow.yml << 'EOF'
name: Basic Test Workflow
on: [workflow_dispatch]

jobs:
  test:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    steps:
      - name: Basic test
        run: |
          echo "Testing basic container execution"
          echo "Current user: $(whoami)"
          echo "Working directory: $(pwd)"
          echo "Container test completed"
      - name: Environment test
        env:
          TEST_VAR: "test-value"
        run: |
          echo "Environment variable TEST_VAR: $TEST_VAR"
          echo "GitHub workspace: $GITHUB_WORKSPACE"
EOF
# Test 4: Test emulation mode (should always work)
print_status "Test 4: Testing emulation mode..."
if ./target/release/wrkflw run --runtime emulation test-basic-workflow.yml > /dev/null 2>&1; then
    print_success "Emulation mode works correctly"
else
    print_error "Emulation mode failed"
    exit 1
fi
# Test 5: Test Podman mode
print_status "Test 5: Testing Podman mode..."
if [ "$PODMAN_AVAILABLE" = true ]; then
    print_status "Running test workflow with Podman runtime..."
    if ./target/release/wrkflw run --runtime podman --verbose test-basic-workflow.yml > podman-test.log 2>&1; then
        print_success "Podman mode executed successfully"
        # Check if it actually used Podman
        if grep -q "Podman: Running container" podman-test.log; then
            print_success "Confirmed Podman was used for container execution"
        elif grep -q "Podman is not available.*emulation" podman-test.log; then
            print_warning "Podman fell back to emulation mode"
        else
            print_warning "Could not confirm Podman usage in logs"
        fi
    else
        print_error "Podman mode failed to execute"
        echo "Error log:"
        tail -10 podman-test.log
        exit 1
    fi
else
    print_status "Testing Podman fallback behavior..."
    if ./target/release/wrkflw run --runtime podman test-basic-workflow.yml 2>&1 | grep -q "emulation.*instead"; then
        print_success "Podman correctly falls back to emulation when unavailable"
    else
        print_error "Podman fallback behavior not working correctly"
        exit 1
    fi
fi
# Test 6: Test Docker mode (if available)
print_status "Test 6: Testing Docker mode for comparison..."
if command -v docker &> /dev/null && docker info > /dev/null 2>&1; then
    print_status "Docker is available, testing for comparison..."
    if ./target/release/wrkflw run --runtime docker test-basic-workflow.yml > /dev/null 2>&1; then
        print_success "Docker mode works correctly"
    else
        print_warning "Docker mode failed (this is okay for Podman testing)"
    fi
else
    print_warning "Docker not available - skipping Docker comparison test"
fi
# Test 7: Test TUI compilation (basic check)
print_status "Test 7: Testing TUI startup..."
timeout 5s ./target/release/wrkflw tui --help > /dev/null 2>&1 || true
print_success "TUI help command works"
# Test 8: Runtime switching in TUI (simulate)
print_status "Test 8: Checking TUI runtime parameter..."
if ./target/release/wrkflw tui --runtime podman --help > /dev/null 2>&1; then
    print_success "TUI accepts runtime parameter"
else
    print_error "TUI does not accept runtime parameter"
    exit 1
fi
# Cleanup
print_status "Cleaning up test files..."
rm -f test-basic-workflow.yml podman-test.log
echo ""
echo "🎉 Basic Podman Support Test Summary:"
echo "======================================"
if [ "$PODMAN_AVAILABLE" = true ]; then
    print_success "✅ Podman is available and working"
    print_success "✅ WRKFLW can execute workflows with Podman"
else
    print_warning "⚠️ Podman not available, but fallback works correctly"
fi
print_success "✅ CLI runtime selection works"
print_success "✅ Error handling works"
print_success "✅ TUI integration works"
print_success "✅ Basic container execution works"
echo ""
print_status "🔍 For comprehensive testing, see: tests/TESTING_PODMAN.md"
print_status "📋 To install Podman: https://podman.io/getting-started/installation"
if [ "$PODMAN_AVAILABLE" = false ]; then
    echo ""
    print_warning "💡 To test full Podman functionality:"
    echo "  1. Install Podman for your system"
    echo "  2. Initialize Podman (if on macOS/Windows): podman machine init && podman machine start"
    echo "  3. Re-run this test script"
fi
echo ""
print_success "🎯 Basic Podman support test completed successfully!"


@@ -0,0 +1,256 @@
#!/bin/bash
# Test script to verify --preserve-containers-on-failure works with Podman
set -e
echo "🧪 Testing --preserve-containers-on-failure with Podman"
echo "======================================================="
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
print_status() { echo -e "${BLUE}[INFO]${NC} $1"; }
print_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
print_warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
print_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Check if Podman is available
if ! command -v podman &> /dev/null; then
    print_error "Podman is not installed. Please install Podman to run this test."
    exit 1
fi
if ! podman info > /dev/null 2>&1; then
    print_error "Podman is not responsive. Please start Podman (e.g., 'podman machine start' on macOS)."
    exit 1
fi
print_success "Podman is available and responsive"
# Create a failing workflow for testing
print_status "Creating test workflows..."
cat > test-success-workflow.yml << 'EOF'
name: Success Test
on: [workflow_dispatch]

jobs:
  success:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    steps:
      - name: Successful step
        run: |
          echo "This step will succeed"
          echo "Exit code will be 0"
          exit 0
EOF

cat > test-failure-workflow.yml << 'EOF'
name: Failure Test
on: [workflow_dispatch]

jobs:
  failure:
    runs-on: ubuntu-latest
    container: ubuntu:20.04
    steps:
      - name: Failing step
        run: exit 1
EOF
# Function to count wrkflw containers
count_wrkflw_containers() {
podman ps -a --filter "name=wrkflw-" --format "{{.Names}}" | wc -l
}
# Function to get wrkflw container names
get_wrkflw_containers() {
podman ps -a --filter "name=wrkflw-" --format "{{.Names}}"
}
# Clean up any existing wrkflw containers
print_status "Cleaning up any existing wrkflw containers..."
EXISTING_CONTAINERS=$(get_wrkflw_containers)
if [ -n "$EXISTING_CONTAINERS" ]; then
    echo "$EXISTING_CONTAINERS" | xargs -r podman rm -f
    print_status "Removed existing containers"
fi
echo ""
print_status "=== Test 1: Success case without preserve flag ==="
BEFORE_COUNT=$(count_wrkflw_containers)
print_status "Containers before: $BEFORE_COUNT"
./target/release/wrkflw run --runtime podman test-success-workflow.yml > /dev/null 2>&1
AFTER_COUNT=$(count_wrkflw_containers)
print_status "Containers after: $AFTER_COUNT"
if [ "$AFTER_COUNT" -eq "$BEFORE_COUNT" ]; then
    print_success "✅ Success case without preserve: containers cleaned up correctly"
else
    print_error "❌ Success case without preserve: containers not cleaned up"
    exit 1
fi
echo ""
print_status "=== Test 2: Success case with preserve flag ==="
BEFORE_COUNT=$(count_wrkflw_containers)
print_status "Containers before: $BEFORE_COUNT"
./target/release/wrkflw run --runtime podman --preserve-containers-on-failure test-success-workflow.yml > /dev/null 2>&1
AFTER_COUNT=$(count_wrkflw_containers)
print_status "Containers after: $AFTER_COUNT"
if [ "$AFTER_COUNT" -eq "$BEFORE_COUNT" ]; then
    print_success "✅ Success case with preserve: successful containers cleaned up correctly"
else
    print_error "❌ Success case with preserve: successful containers not cleaned up"
    exit 1
fi
echo ""
print_status "=== Test 3: Failure case without preserve flag ==="
BEFORE_COUNT=$(count_wrkflw_containers)
print_status "Containers before: $BEFORE_COUNT"
./target/release/wrkflw run --runtime podman test-failure-workflow.yml > /dev/null 2>&1 || true
AFTER_COUNT=$(count_wrkflw_containers)
print_status "Containers after: $AFTER_COUNT"
if [ "$AFTER_COUNT" -eq "$BEFORE_COUNT" ]; then
    print_success "✅ Failure case without preserve: containers cleaned up correctly"
else
    print_error "❌ Failure case without preserve: containers not cleaned up"
    exit 1
fi
echo ""
print_status "=== Test 4: Failure case with preserve flag ==="
BEFORE_COUNT=$(count_wrkflw_containers)
print_status "Containers before: $BEFORE_COUNT"
print_status "Running failing workflow with --preserve-containers-on-failure..."
./target/release/wrkflw run --runtime podman --preserve-containers-on-failure test-failure-workflow.yml > preserve-test.log 2>&1 || true
AFTER_COUNT=$(count_wrkflw_containers)
print_status "Containers after: $AFTER_COUNT"
PRESERVED_CONTAINERS=$(get_wrkflw_containers)
if [ "$AFTER_COUNT" -gt "$BEFORE_COUNT" ]; then
    print_success "✅ Failure case with preserve: failed container preserved"
    print_status "Preserved containers: $PRESERVED_CONTAINERS"
    # Check if the log mentions preservation
    if grep -q "Preserving.*container.*debugging" preserve-test.log; then
        print_success "✅ Preservation message found in logs"
    else
        print_warning "⚠️ Preservation message not found in logs"
    fi
    # Test that we can inspect the preserved container
    CONTAINER_NAME=$(echo "$PRESERVED_CONTAINERS" | head -1)
    if [ -n "$CONTAINER_NAME" ]; then
        print_status "Testing container inspection..."
        if podman exec "$CONTAINER_NAME" echo "Container inspection works" > /dev/null 2>&1; then
            print_success "✅ Can inspect preserved container"
        else
            print_warning "⚠️ Cannot inspect preserved container (container may have exited)"
        fi
        # Clean up the preserved container
        print_status "Cleaning up preserved container for testing..."
        podman rm -f "$CONTAINER_NAME" > /dev/null 2>&1
    fi
else
    print_error "❌ Failure case with preserve: failed container not preserved"
    echo "Log output:"
    cat preserve-test.log
    exit 1
fi
echo ""
print_status "=== Test 5: Multiple failures with preserve flag ==="
BEFORE_COUNT=$(count_wrkflw_containers)
print_status "Containers before: $BEFORE_COUNT"
print_status "Running multiple failing workflows..."
for i in {1..3}; do
    ./target/release/wrkflw run --runtime podman --preserve-containers-on-failure test-failure-workflow.yml > /dev/null 2>&1 || true
done
AFTER_COUNT=$(count_wrkflw_containers)
print_status "Containers after: $AFTER_COUNT"
EXPECTED_COUNT=$((BEFORE_COUNT + 3))
if [ "$AFTER_COUNT" -eq "$EXPECTED_COUNT" ]; then
    print_success "✅ Multiple failures: all failed containers preserved"
else
    print_warning "⚠️ Multiple failures: expected $EXPECTED_COUNT containers, got $AFTER_COUNT"
fi
# Clean up all preserved containers
PRESERVED_CONTAINERS=$(get_wrkflw_containers)
if [ -n "$PRESERVED_CONTAINERS" ]; then
    print_status "Cleaning up all preserved containers..."
    echo "$PRESERVED_CONTAINERS" | xargs -r podman rm -f
fi
echo ""
print_status "=== Test 6: Comparison with Docker (if available) ==="
if command -v docker &> /dev/null && docker info > /dev/null 2>&1; then
    print_status "Docker available, testing for comparison..."
    # Test Docker with preserve flag
    BEFORE_COUNT=$(docker ps -a --filter "name=wrkflw-" --format "{{.Names}}" | wc -l)
    ./target/release/wrkflw run --runtime docker --preserve-containers-on-failure test-failure-workflow.yml > /dev/null 2>&1 || true
    AFTER_COUNT=$(docker ps -a --filter "name=wrkflw-" --format "{{.Names}}" | wc -l)
    if [ "$AFTER_COUNT" -gt "$BEFORE_COUNT" ]; then
        print_success "✅ Docker also preserves containers correctly"
        # Clean up Docker containers
        DOCKER_CONTAINERS=$(docker ps -a --filter "name=wrkflw-" --format "{{.Names}}")
        if [ -n "$DOCKER_CONTAINERS" ]; then
            echo "$DOCKER_CONTAINERS" | xargs -r docker rm -f
        fi
    else
        print_warning "⚠️ Docker preserve behavior differs from Podman"
    fi
else
    print_status "Docker not available, skipping comparison"
fi
# Cleanup test files
print_status "Cleaning up test files..."
rm -f test-success-workflow.yml test-failure-workflow.yml preserve-test.log
echo ""
print_success "🎉 Container preservation test completed successfully!"
echo ""
print_status "📋 Test Summary:"
print_success "✅ Successful containers are cleaned up (with and without preserve flag)"
print_success "✅ Failed containers are cleaned up when preserve flag is NOT used"
print_success "✅ Failed containers are preserved when preserve flag IS used"
print_success "✅ Preserved containers can be inspected"
print_success "✅ Multiple failed containers are handled correctly"
echo ""
print_status "💡 Usage examples:"
echo " # Normal execution (cleanup all containers):"
echo " wrkflw run --runtime podman workflow.yml"
echo ""
echo " # Preserve failed containers for debugging:"
echo " wrkflw run --runtime podman --preserve-containers-on-failure workflow.yml"
echo ""
echo " # Inspect preserved container:"
echo " podman ps -a --filter \"name=wrkflw-\""
echo " podman exec -it <container-name> bash"
echo ""
echo " # Clean up preserved containers:"
echo " podman ps -a --filter \"name=wrkflw-\" --format \"{{.Names}}\" | xargs podman rm -f"


@@ -0,0 +1,18 @@
name: Test Runs-On Array Format

on: [push]

jobs:
  test-array-runs-on:
    timeout-minutes: 15
    runs-on: [self-hosted, ubuntu, small]
    steps:
      - name: Test step
        run: echo "Testing array format for runs-on"

  test-string-runs-on:
    timeout-minutes: 15
    runs-on: ubuntu-latest
    steps:
      - name: Test step
        run: echo "Testing string format for runs-on"
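This workflow exercises both accepted shapes of `runs-on`. The normalization the fix performs (a custom deserializer on the Rust `Job` struct, per the commit message) can be sketched in Python; `normalize_runs_on` is an illustrative helper, not a wrkflw API:

```python
def normalize_runs_on(value):
    """Accept both GitHub Actions forms of runs-on and return a list.

    String form:  runs-on: ubuntu-latest            -> ["ubuntu-latest"]
    Array form:   runs-on: [self-hosted, ubuntu]    -> ["self-hosted", "ubuntu"]
    The executor then selects the runner from the first element.
    """
    if isinstance(value, str):
        return [value]
    if isinstance(value, list) and all(isinstance(v, str) for v in value):
        return value
    raise TypeError("invalid type: expected a string or a sequence of strings")

print(normalize_runs_on("ubuntu-latest"))
print(normalize_runs_on(["self-hosted", "ubuntu", "small"]))
```

Without this normalization, the array form fails to parse with "invalid type: sequence, expected a string", which is exactly the error the fix addresses.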