Compare commits


9 Commits

Author SHA1 Message Date
bahdotsh
d1268d55cf feat: move log stream composition and filtering to background thread
- Resolves #29: UI unresponsiveness in logs tab
- Add LogProcessor with background thread for async log processing
- Implement pre-processed log caching with ProcessedLogEntry
- Replace frame-by-frame log processing with cached results
- Add automatic log change detection for app and system logs
- Optimize rendering from O(n) to O(1) complexity
- Maintain all search, filter, and highlighting functionality
- Fix clippy warning for redundant pattern matching

Performance improvements:
- Log processing moved to separate thread with 50ms debouncing
- UI rendering no longer blocks on log filtering/formatting
- Supports thousands of logs without UI lag
- Non-blocking request/response pattern with mpsc channels
2025-08-13 13:38:17 +05:30
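
A distilled sketch of the non-blocking request/response pattern this commit describes, assuming plain `std::sync::mpsc` channels (simplified; the real `LogProcessor` appears in the log processor diff below):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let (req_tx, req_rx) = mpsc::channel::<String>();
    let (resp_tx, resp_rx) = mpsc::channel::<String>();

    // Worker thread: coalesce bursts of requests and process them off the UI thread.
    thread::spawn(move || {
        let mut pending: Option<String> = None;
        let mut last_processed = Instant::now();
        loop {
            // Wake up periodically even when no request arrives.
            match req_rx.recv_timeout(Duration::from_millis(100)) {
                Ok(req) => pending = Some(req),
                Err(mpsc::RecvTimeoutError::Timeout) => {}
                Err(mpsc::RecvTimeoutError::Disconnected) => break,
            }
            // Debounce: only process once 50ms have passed since the last run.
            if pending.is_some() && last_processed.elapsed() > Duration::from_millis(50) {
                let req = pending.take().unwrap();
                if resp_tx.send(format!("processed: {req}")).is_err() {
                    break; // UI side went away
                }
                last_processed = Instant::now();
            }
        }
    });

    // UI side: send without blocking, then poll for results on a later tick.
    req_tx.send("filter=error".into()).unwrap();
    thread::sleep(Duration::from_millis(200));
    if let Ok(resp) = resp_rx.try_recv() {
        println!("{resp}");
    }
}
```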
Gokul
a146d94c35 Merge pull request #34 from bahdotsh/fix/runs-on-array-support
fix: Support array format for runs-on field in GitHub Actions workflows
2025-08-13 13:24:35 +05:30
bahdotsh
7636195380 fix: Support array format for runs-on field in GitHub Actions workflows
- Add custom deserializer for runs-on field to handle both string and array formats
- Update Job struct to use Vec<String> instead of String for runs-on field
- Modify executor to extract first element from runs-on array for runner selection
- Add test workflow to verify both string and array formats work correctly
- Maintain backwards compatibility with existing string-based workflows

Fixes issue where workflows with runs-on: [self-hosted, ubuntu, small] format
would fail with 'invalid type: sequence, expected a string' error.

This change aligns with GitHub Actions specification which supports:
- String format: runs-on: ubuntu-latest
- Array format: runs-on: [self-hosted, ubuntu, small]
2025-08-13 13:21:58 +05:30
Gokul
98afdb3372 Merge pull request #33 from bahdotsh/docs/add-crate-readmes
docs(readme): add per-crate READMEs and enhance wrkflw crate README
2025-08-12 15:12:44 +05:30
bahdotsh
58de01e69f docs(readme): add per-crate READMEs and enhance wrkflw crate README
2025-08-12 15:09:38 +05:30
Gokul
880cae3899 Merge pull request #32 from bahdotsh/bahdotsh/reusable-workflow-execution
feat: add execution support for reusable workflows
2025-08-12 14:57:49 +05:30
bahdotsh
66e540645d feat(executor,parser,docs): add execution support for reusable workflows (jobs.<id>.uses)
- Parser: make jobs.runs-on optional; add job-level uses/with/secrets for caller jobs
- Executor: resolve and run local/remote called workflows; propagate inputs/secrets; summarize results
- Docs: document feature, usage, and current limits in README
- Tests: add execution tests for local reusable workflows (success/failure)

Limits:
- Does not propagate outputs back to caller
- secrets: inherit not special-cased; use mapping
- Remote private repos not yet supported; public only
- Cycle detection for nested calls unchanged
2025-08-12 14:53:07 +05:30
bahdotsh
79b6389f54 fix: resolve schema file path issues for cargo publish
- Copied schema files into parser crate src directory
- Updated include_str! paths to be relative to source files
- Ensures schemas are bundled with crate during publish
- Resolves packaging and verification issues during publication

Fixes the build error that was preventing crate publication.
2025-08-09 18:14:25 +05:30
bahdotsh
5d55812872 fix: correct schema file paths for cargo publish
- Updated include_str! paths from ../../../ to ../../../../
- This resolves packaging issues during cargo publish
- Fixes schema loading for parser crate publication
2025-08-09 18:12:56 +05:30
28 changed files with 5927 additions and 161 deletions

README.md

@@ -26,6 +26,7 @@ WRKFLW is a powerful command-line tool for validating and executing GitHub Actio
- Composite actions
- Local actions
- **Special Action Handling**: Native handling for commonly used actions like `actions/checkout`
- **Reusable Workflows (Caller Jobs)**: Execute jobs that call reusable workflows via `jobs.<id>.uses` (local path or `owner/repo/path@ref`)
- **Output Capturing**: View logs, step outputs, and execution details
- **Parallel Job Execution**: Runs independent jobs in parallel for faster workflow execution
- **Trigger Workflows Remotely**: Manually trigger workflow runs on GitHub or GitLab
@@ -372,6 +373,7 @@ podman ps -a --filter "name=wrkflw-" --format "{{.Names}}" | xargs podman rm -f
- ✅ Composite actions (all composite actions, including nested and local composite actions, are supported)
- ✅ Local actions (actions referenced with local paths are supported)
- ✅ Special handling for common actions (e.g., `actions/checkout` is natively supported)
- ✅ Reusable workflows (caller): Jobs that use `jobs.<id>.uses` to call local or remote workflows are executed; inputs and secrets are propagated to the called workflow
- ✅ Workflow triggering via `workflow_dispatch` (manual triggering of workflows is supported)
- ✅ GitLab pipeline triggering (manual triggering of GitLab pipelines is supported)
- ✅ Environment files (`GITHUB_OUTPUT`, `GITHUB_ENV`, `GITHUB_PATH`, `GITHUB_STEP_SUMMARY` are fully supported)
@@ -395,6 +397,42 @@ podman ps -a --filter "name=wrkflw-" --format "{{.Names}}" | xargs podman rm -f
- ❌ Job/step timeouts: Custom timeouts for jobs and steps are NOT enforced.
- ❌ Job/step concurrency and cancellation: Features like `concurrency` and job cancellation are NOT supported.
- ❌ Expressions and advanced YAML features: Most common expressions are supported, but some advanced or edge-case expressions may not be fully implemented.
- ⚠️ Reusable workflows (limits):
  - Outputs from called workflows are not propagated back to the caller (`needs.<id>.outputs.*` not supported)
  - `secrets: inherit` is not special-cased; provide a mapping to pass secrets
  - Remote calls clone public repos via HTTPS; private repos require preconfigured access (not yet implemented)
  - Deeply nested reusable calls work but lack cycle detection beyond regular job dependency checks

## Reusable Workflows
WRKFLW supports executing caller jobs that invoke reusable workflows through `jobs.<id>.uses`.
### Syntax
```yaml
jobs:
  call-local:
    uses: ./.github/workflows/shared.yml
  call-remote:
    uses: my-org/my-repo/.github/workflows/shared.yml@v1
    with:
      foo: bar
    secrets:
      token: ${{ secrets.MY_TOKEN }}
```
### Behavior
- Local references are resolved relative to the current working directory.
- Remote references are shallow-cloned at the specified `@ref` into a temporary directory.
- `with:` entries are exposed to the called workflow as environment variables `INPUT_<KEY>`.
- `secrets:` mapping entries are exposed as environment variables `SECRET_<KEY>`.
- The called workflow executes according to its own `jobs`/`needs`; a summary of its job results is reported as a single result for the caller job.
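
For example, a called workflow can observe the propagated values in a `run` step (illustrative sketch; `foo` and `token` are the keys from the syntax example above):

```yaml
# Hypothetical shared.yml
name: Shared
on: workflow_dispatch
jobs:
  inner:
    runs-on: ubuntu-latest
    steps:
      - run: echo "foo=$INPUT_FOO, token set=${SECRET_TOKEN:+yes}"
```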
### Current limitations
- Outputs from called workflows are not surfaced back to the caller.
- `secrets: inherit` is not supported; specify an explicit mapping.
- Private repositories for remote `uses:` are not yet supported.
### Runtime Mode Differences
- **Docker Mode**: Provides the closest match to GitHub's environment, including support for Docker container actions, service containers, and Linux-based jobs. Some advanced container configurations may still require manual setup.

crates/evaluator/README.md (new file)

@@ -0,0 +1,29 @@
## wrkflw-evaluator
Small, focused helper for statically evaluating GitHub Actions workflow files.
- **Purpose**: Fast structural checks (e.g., `name`, `on`, `jobs`) before deeper validation/execution
- **Used by**: `wrkflw` CLI and TUI during validation flows
### Example
```rust
use std::path::Path;
let result = wrkflw_evaluator::evaluate_workflow_file(
    Path::new(".github/workflows/ci.yml"),
    /* verbose */ true,
).expect("evaluation failed");

if result.is_valid {
    println!("Workflow looks structurally sound");
} else {
    for issue in result.issues {
        println!("- {}", issue);
    }
}
```
### Notes
- This crate focuses on structural checks; deeper rules live in `wrkflw-validators`.
- Most consumers should prefer the top-level `wrkflw` CLI for end-to-end UX.

crates/executor/README.md (new file)

@@ -0,0 +1,29 @@
## wrkflw-executor
The execution engine that runs GitHub Actions workflows locally (Docker, Podman, or emulation).
- **Features**:
  - Job graph execution with `needs` ordering and parallelism
  - Docker/Podman container steps and emulation mode
  - Basic environment/context wiring compatible with Actions
- **Used by**: `wrkflw` CLI and TUI
### API sketch
```rust
use wrkflw_executor::{execute_workflow, ExecutionConfig, RuntimeType};
let cfg = ExecutionConfig {
    runtime_type: RuntimeType::Docker,
    verbose: true,
    preserve_containers_on_failure: false,
};
// Path to a workflow YAML
let workflow_path = std::path::Path::new(".github/workflows/ci.yml");
let result = execute_workflow(workflow_path, cfg).await?;
println!("workflow status: {:?}", result.summary_status);
```
Prefer using the `wrkflw` binary for a complete UX across validation, execution, and logs.


@@ -652,6 +652,12 @@ async fn execute_job(ctx: JobExecutionContext<'_>) -> Result<JobResult, Executio
ExecutionError::Execution(format!("Job '{}' not found in workflow", ctx.job_name))
})?;
// Handle reusable workflow jobs (job-level 'uses')
if let Some(uses) = &job.uses {
return execute_reusable_workflow_job(&ctx, uses, job.with.as_ref(), job.secrets.as_ref())
.await;
}
// Clone context and add job-specific variables
let mut job_env = ctx.env_context.clone();
@@ -685,6 +691,9 @@ async fn execute_job(ctx: JobExecutionContext<'_>) -> Result<JobResult, Executio
let mut job_success = true;
// Execute job steps
// Determine runner image (default if not provided)
let runner_image_value = get_runner_image_from_opt(&job.runs_on);
for (idx, step) in job.steps.iter().enumerate() {
let step_result = execute_step(StepExecutionContext {
step,
@@ -693,7 +702,7 @@ async fn execute_job(ctx: JobExecutionContext<'_>) -> Result<JobResult, Executio
working_dir: job_dir.path(),
runtime: ctx.runtime,
workflow: ctx.workflow,
runner_image: &get_runner_image(&job.runs_on),
runner_image: &runner_image_value,
verbose: ctx.verbose,
matrix_combination: &None,
})
@@ -882,6 +891,9 @@ async fn execute_matrix_job(
true
} else {
// Execute each step
// Determine runner image (default if not provided)
let runner_image_value = get_runner_image_from_opt(&job_template.runs_on);
for (idx, step) in job_template.steps.iter().enumerate() {
match execute_step(StepExecutionContext {
step,
@@ -890,7 +902,7 @@ async fn execute_matrix_job(
working_dir: job_dir.path(),
runtime,
workflow,
runner_image: &get_runner_image(&job_template.runs_on),
runner_image: &runner_image_value,
verbose,
matrix_combination: &Some(combination.values.clone()),
})
@@ -1750,6 +1762,189 @@ fn get_runner_image(runs_on: &str) -> String {
.to_string()
}
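/// Pick the first `runs-on` entry, falling back to "ubuntu-latest" when the
/// field is absent, then map it to a runner image via `get_runner_image`.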
fn get_runner_image_from_opt(runs_on: &Option<Vec<String>>) -> String {
let default = "ubuntu-latest";
let ro = runs_on
.as_ref()
.and_then(|vec| vec.first())
.map(|s| s.as_str())
.unwrap_or(default);
get_runner_image(ro)
}
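/// Execute a caller job whose job-level `uses:` references a reusable
/// workflow: resolve the local or remote workflow file, expose `with:` and
/// `secrets:` entries as INPUT_*/SECRET_* environment variables, run the
/// called workflow's jobs, and fold the results into one summary JobResult.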
async fn execute_reusable_workflow_job(
ctx: &JobExecutionContext<'_>,
uses: &str,
with: Option<&HashMap<String, String>>,
secrets: Option<&serde_yaml::Value>,
) -> Result<JobResult, ExecutionError> {
wrkflw_logging::info(&format!(
"Executing reusable workflow job '{}' -> {}",
ctx.job_name, uses
));
// Resolve the called workflow file path
enum UsesRef<'a> {
LocalPath(&'a str),
Remote {
owner: String,
repo: String,
path: String,
r#ref: String,
},
}
let uses_ref = if uses.starts_with("./") || uses.starts_with('/') {
UsesRef::LocalPath(uses)
} else {
// Expect format owner/repo/path/to/workflow.yml@ref
let parts: Vec<&str> = uses.split('@').collect();
if parts.len() != 2 {
return Err(ExecutionError::Execution(format!(
"Invalid reusable workflow reference: {}",
uses
)));
}
let left = parts[0];
let r#ref = parts[1].to_string();
let mut segs = left.splitn(3, '/');
let owner = segs.next().unwrap_or("").to_string();
let repo = segs.next().unwrap_or("").to_string();
let path = segs.next().unwrap_or("").to_string();
if owner.is_empty() || repo.is_empty() || path.is_empty() {
return Err(ExecutionError::Execution(format!(
"Invalid reusable workflow reference: {}",
uses
)));
}
UsesRef::Remote {
owner,
repo,
path,
r#ref,
}
};
// Load workflow file
let workflow_path = match uses_ref {
UsesRef::LocalPath(p) => {
// Resolve relative to current directory
let current_dir = std::env::current_dir().map_err(|e| {
ExecutionError::Execution(format!("Failed to get current dir: {}", e))
})?;
let path = current_dir.join(p);
if !path.exists() {
return Err(ExecutionError::Execution(format!(
"Reusable workflow not found at path: {}",
path.display()
)));
}
path
}
UsesRef::Remote {
owner,
repo,
path,
r#ref,
} => {
// Clone minimal repository and checkout ref
let tempdir = tempfile::tempdir().map_err(|e| {
ExecutionError::Execution(format!("Failed to create temp dir: {}", e))
})?;
let repo_url = format!("https://github.com/{}/{}.git", owner, repo);
// git clone
let status = Command::new("git")
.arg("clone")
.arg("--depth")
.arg("1")
.arg("--branch")
.arg(&r#ref)
.arg(&repo_url)
.arg(tempdir.path())
.status()
.map_err(|e| ExecutionError::Execution(format!("Failed to execute git: {}", e)))?;
if !status.success() {
return Err(ExecutionError::Execution(format!(
"Failed to clone {}@{}",
repo_url, r#ref
)));
}
let joined = tempdir.path().join(path);
if !joined.exists() {
return Err(ExecutionError::Execution(format!(
"Reusable workflow file not found in repo: {}",
joined.display()
)));
}
joined
}
};
// Parse called workflow
let called = parse_workflow(&workflow_path)?;
// Create child env context
let mut child_env = ctx.env_context.clone();
if let Some(with_map) = with {
for (k, v) in with_map {
child_env.insert(format!("INPUT_{}", k.to_uppercase()), v.clone());
}
}
if let Some(secrets_val) = secrets {
if let Some(map) = secrets_val.as_mapping() {
for (k, v) in map {
if let (Some(key), Some(value)) = (k.as_str(), v.as_str()) {
child_env.insert(format!("SECRET_{}", key.to_uppercase()), value.to_string());
}
}
}
}
// Execute called workflow
let plan = dependency::resolve_dependencies(&called)?;
let mut all_results = Vec::new();
let mut any_failed = false;
for batch in plan {
let results =
execute_job_batch(&batch, &called, ctx.runtime, &child_env, ctx.verbose).await?;
for r in &results {
if r.status == JobStatus::Failure {
any_failed = true;
}
}
all_results.extend(results);
}
// Summarize into a single JobResult
let mut logs = String::new();
logs.push_str(&format!("Called workflow: {}\n", workflow_path.display()));
for r in &all_results {
logs.push_str(&format!("- {}: {:?}\n", r.name, r.status));
}
// Represent as one summary step for UI
let summary_step = StepResult {
name: format!("Run reusable workflow: {}", uses),
status: if any_failed {
StepStatus::Failure
} else {
StepStatus::Success
},
output: logs.clone(),
};
Ok(JobResult {
name: ctx.job_name.to_string(),
status: if any_failed {
JobStatus::Failure
} else {
JobStatus::Success
},
steps: vec![summary_step],
logs,
})
}
#[allow(dead_code)]
async fn prepare_runner_image(
image: &str,

crates/github/README.md (new file)

@@ -0,0 +1,23 @@
## wrkflw-github
GitHub integration helpers used by `wrkflw` to list/trigger workflows.
- **List workflows** in `.github/workflows`
- **Trigger workflow_dispatch** events over the GitHub API
### Example
```rust
use wrkflw_github::{get_repo_info, trigger_workflow};
# tokio_test::block_on(async {
let info = get_repo_info()?;
println!("{}/{} (default branch: {})", info.owner, info.repo, info.default_branch);
// Requires GITHUB_TOKEN in env
trigger_workflow("ci", Some("main"), None).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
Notes: set `GITHUB_TOKEN` with the `workflow` scope; only public repos are supported out-of-the-box.

crates/gitlab/README.md (new file)

@@ -0,0 +1,23 @@
## wrkflw-gitlab
GitLab integration helpers used by `wrkflw` to trigger pipelines.
- Reads repo info from local git remote
- Triggers pipelines via GitLab API
### Example
```rust
use wrkflw_gitlab::{get_repo_info, trigger_pipeline};
# tokio_test::block_on(async {
let info = get_repo_info()?;
println!("{}/{} (default branch: {})", info.namespace, info.project, info.default_branch);
// Requires GITLAB_TOKEN in env (api scope)
trigger_pipeline(Some("main"), None).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
Notes: looks for `.gitlab-ci.yml` in the repo root when listing pipelines.

crates/logging/README.md (new file)

@@ -0,0 +1,22 @@
## wrkflw-logging
Lightweight in-memory logging with simple levels for TUI/CLI output.
- Thread-safe, timestamped messages
- Level filtering (Debug/Info/Warning/Error)
- Pluggable into UI for live log views
### Example
```rust
use wrkflw_logging::{info, warning, error, LogLevel, set_log_level, get_logs};
set_log_level(LogLevel::Info);
info("starting");
warning("be careful");
error("boom");
for line in get_logs() {
    println!("{}", line);
}
```

crates/matrix/README.md (new file)

@@ -0,0 +1,20 @@
## wrkflw-matrix
Matrix expansion utilities used to compute all job combinations and format labels.
- Supports `include`, `exclude`, `max-parallel`, and `fail-fast`
- Provides display helpers for UI/CLI
### Example
```rust
use wrkflw_matrix::{MatrixConfig, expand_matrix};
use serde_yaml::Value;
use std::collections::HashMap;
let mut cfg = MatrixConfig::default();
cfg.parameters.insert("os".into(), Value::from(vec!["ubuntu", "alpine"]));
let combos = expand_matrix(&cfg).expect("expand");
assert!(!combos.is_empty());
```

crates/models/README.md (new file)

@@ -0,0 +1,16 @@
## wrkflw-models
Common data structures shared across crates.
- `ValidationResult` for structural/semantic checks
- GitLab pipeline models (serde types)
### Example
```rust
use wrkflw_models::ValidationResult;
let mut res = ValidationResult::new();
res.add_issue("missing jobs".into());
assert!(!res.is_valid);
```

crates/parser/README.md (new file)

@@ -0,0 +1,13 @@
## wrkflw-parser
Parsers and schema helpers for GitHub/GitLab workflow files.
- GitHub Actions workflow parsing and JSON Schema validation
- GitLab CI parsing helpers
### Example
```rust
// High-level crates (`wrkflw` and `wrkflw-executor`) wrap parser usage.
// Use those unless you are extending parsing behavior directly.
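//
// A minimal sketch, assuming the crate exposes `parse_workflow` at its root
// (the executor diff in this compare calls it by that name):
//
// let workflow = wrkflw_parser::parse_workflow(
//     std::path::Path::new(".github/workflows/ci.yml"),
// )?;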
```

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -130,7 +130,7 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
// Create a new job
let mut job = workflow::Job {
runs_on: "ubuntu-latest".to_string(), // Default runner
runs_on: Some(vec!["ubuntu-latest".to_string()]), // Default runner
needs: None,
steps: Vec::new(),
env: HashMap::new(),
@@ -139,6 +139,9 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
if_condition: None,
outputs: None,
permissions: None,
uses: None,
with: None,
secrets: None,
};
// Add job-specific environment variables
@@ -230,13 +233,13 @@ pub fn convert_to_workflow_format(pipeline: &Pipeline) -> workflow::WorkflowDefi
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
// use std::path::PathBuf; // unused
use tempfile::NamedTempFile;
#[test]
fn test_parse_simple_pipeline() {
// Create a temporary file with a simple GitLab CI/CD pipeline
let mut file = NamedTempFile::new().unwrap();
let file = NamedTempFile::new().unwrap();
let content = r#"
stages:
- build


@@ -3,8 +3,8 @@ use serde_json::Value;
use std::fs;
use std::path::Path;
const GITHUB_WORKFLOW_SCHEMA: &str = include_str!("../../../schemas/github-workflow.json");
const GITLAB_CI_SCHEMA: &str = include_str!("../../../schemas/gitlab-ci.json");
const GITHUB_WORKFLOW_SCHEMA: &str = include_str!("github-workflow.json");
const GITLAB_CI_SCHEMA: &str = include_str!("gitlab-ci.json");
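// NOTE: include_str! resolves paths relative to the invoking source file,
// so keeping the schemas beside this file lets `cargo publish` bundle them.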
#[derive(Debug, Clone, Copy)]
pub enum SchemaType {


@@ -26,6 +26,26 @@ where
}
}
// Custom deserializer for runs-on field that handles both string and array formats
fn deserialize_runs_on<'de, D>(deserializer: D) -> Result<Option<Vec<String>>, D::Error>
where
D: Deserializer<'de>,
{
#[derive(Deserialize)]
#[serde(untagged)]
enum StringOrVec {
String(String),
Vec(Vec<String>),
}
let value = Option::<StringOrVec>::deserialize(deserializer)?;
match value {
Some(StringOrVec::String(s)) => Ok(Some(vec![s])),
Some(StringOrVec::Vec(v)) => Ok(Some(v)),
None => Ok(None),
}
}
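// For illustration (not part of this diff), both YAML forms now deserialize
// to the same representation via the untagged enum above:
//   runs-on: ubuntu-latest           => Some(vec!["ubuntu-latest"])
//   runs-on: [self-hosted, ubuntu]   => Some(vec!["self-hosted", "ubuntu"])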
#[derive(Debug, Deserialize, Serialize)]
pub struct WorkflowDefinition {
pub name: String,
@@ -38,10 +58,11 @@ pub struct WorkflowDefinition {
#[derive(Debug, Deserialize, Serialize)]
pub struct Job {
#[serde(rename = "runs-on")]
pub runs_on: String,
#[serde(rename = "runs-on", default, deserialize_with = "deserialize_runs_on")]
pub runs_on: Option<Vec<String>>,
#[serde(default, deserialize_with = "deserialize_needs")]
pub needs: Option<Vec<String>>,
#[serde(default)]
pub steps: Vec<Step>,
#[serde(default)]
pub env: HashMap<String, String>,
@@ -55,6 +76,13 @@ pub struct Job {
pub outputs: Option<HashMap<String, String>>,
#[serde(default)]
pub permissions: Option<HashMap<String, String>>,
// Reusable workflow (job-level 'uses') support
#[serde(default)]
pub uses: Option<String>,
#[serde(default)]
pub with: Option<HashMap<String, String>>,
#[serde(default)]
pub secrets: Option<serde_yaml::Value>,
}
#[derive(Debug, Deserialize, Serialize)]

crates/runtime/README.md (new file)

@@ -0,0 +1,13 @@
## wrkflw-runtime
Runtime abstractions for executing steps in containers or emulation.
- Container management primitives used by the executor
- Emulation mode helpers (run on host without containers)
### Example
```rust
// This crate is primarily consumed by `wrkflw-executor`.
// Prefer using the executor API instead of calling runtime directly.
```

crates/ui/README.md (new file)

@@ -0,0 +1,23 @@
## wrkflw-ui
Terminal user interface for browsing workflows, running them, and viewing logs.
- Tabs: Workflows, Execution, Logs, Help
- Hotkeys: `1-4`, `Tab`, `Enter`, `r`, `R`, `t`, `v`, `e`, `q`, etc.
- Integrates with `wrkflw-executor` and `wrkflw-logging`
### Example
```rust
use std::path::PathBuf;
use wrkflw_executor::RuntimeType;
use wrkflw_ui::run_wrkflw_tui;
# tokio_test::block_on(async {
let path = PathBuf::from(".github/workflows");
run_wrkflw_tui(Some(&path), RuntimeType::Docker, true, false).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
Most users should run the `wrkflw` binary and select TUI mode: `wrkflw tui`.


@@ -154,6 +154,15 @@ fn run_tui_event_loop(
if last_tick.elapsed() >= tick_rate {
app.tick();
app.update_running_workflow_progress();
// Check for log processing updates (includes system log change detection)
app.check_log_processing_updates();
// Request log processing if needed
if app.logs_need_update {
app.request_log_processing_update();
}
last_tick = Instant::now();
}


@@ -1,4 +1,5 @@
// App state for the UI
use crate::log_processor::{LogProcessingRequest, LogProcessor, ProcessedLogEntry};
use crate::models::{
ExecutionResultMsg, JobExecution, LogFilterLevel, StepExecution, Workflow, WorkflowExecution,
WorkflowStatus,
@@ -40,6 +41,12 @@ pub struct App {
pub log_filter_level: Option<LogFilterLevel>, // Current log level filter
pub log_search_matches: Vec<usize>, // Indices of logs that match the search
pub log_search_match_idx: usize, // Current match index for navigation
// Background log processing
pub log_processor: LogProcessor,
pub processed_logs: Vec<ProcessedLogEntry>,
pub logs_need_update: bool, // Flag to trigger log processing
pub last_system_logs_count: usize, // Track system log changes
}
impl App {
@@ -199,6 +206,12 @@ impl App {
log_filter_level: Some(LogFilterLevel::All),
log_search_matches: Vec::new(),
log_search_match_idx: 0,
// Background log processing
log_processor: LogProcessor::new(),
processed_logs: Vec::new(),
logs_need_update: true,
last_system_logs_count: 0,
}
}
@@ -429,10 +442,9 @@ impl App {
if let Some(idx) = self.workflow_list_state.selected() {
if idx < self.workflows.len() && !self.execution_queue.contains(&idx) {
self.execution_queue.push(idx);
let timestamp = Local::now().format("%H:%M:%S").to_string();
self.logs.push(format!(
"[{}] Added '{}' to execution queue. Press 'Enter' to start.",
timestamp, self.workflows[idx].name
self.add_timestamped_log(&format!(
"Added '{}' to execution queue. Press 'Enter' to start.",
self.workflows[idx].name
));
}
}
@@ -635,10 +647,11 @@ impl App {
self.log_search_active = false;
self.log_search_query.clear();
self.log_search_matches.clear();
self.mark_logs_for_update();
}
KeyCode::Backspace => {
self.log_search_query.pop();
self.update_log_search_matches();
self.mark_logs_for_update();
}
KeyCode::Enter => {
self.log_search_active = false;
@@ -646,7 +659,7 @@ impl App {
}
KeyCode::Char(c) => {
self.log_search_query.push(c);
self.update_log_search_matches();
self.mark_logs_for_update();
}
_ => {}
}
@@ -658,8 +671,8 @@ impl App {
if !self.log_search_active {
// Don't clear the query, this allows toggling the search UI while keeping the filter
} else {
// When activating search, update matches
self.update_log_search_matches();
// When activating search, trigger update
self.mark_logs_for_update();
}
}
@@ -670,8 +683,8 @@ impl App {
Some(level) => Some(level.next()),
};
// Update search matches when filter changes
self.update_log_search_matches();
// Trigger log processing update when filter changes
self.mark_logs_for_update();
}
// Clear log search and filter
@@ -680,6 +693,7 @@ impl App {
self.log_filter_level = None;
self.log_search_matches.clear();
self.log_search_match_idx = 0;
self.mark_logs_for_update();
}
// Update matches based on current search and filter
@@ -955,4 +969,82 @@ impl App {
}
}
}
/// Request log processing update from background thread
pub fn request_log_processing_update(&mut self) {
let request = LogProcessingRequest {
search_query: self.log_search_query.clone(),
filter_level: self.log_filter_level.clone(),
app_logs: self.logs.clone(),
app_logs_count: self.logs.len(),
system_logs_count: wrkflw_logging::get_logs().len(),
};
if self.log_processor.request_update(request).is_err() {
// Log processor channel disconnected, recreate it
self.log_processor = LogProcessor::new();
self.logs_need_update = true;
}
}
/// Check for and apply log processing updates
pub fn check_log_processing_updates(&mut self) {
// Check if system logs have changed
let current_system_logs_count = wrkflw_logging::get_logs().len();
if current_system_logs_count != self.last_system_logs_count {
self.last_system_logs_count = current_system_logs_count;
self.mark_logs_for_update();
}
if let Some(response) = self.log_processor.try_get_update() {
self.processed_logs = response.processed_logs;
self.log_search_matches = response.search_matches;
// Update scroll position to first match if we have search results
if !self.log_search_matches.is_empty() && !self.log_search_query.is_empty() {
self.log_search_match_idx = 0;
if let Some(&idx) = self.log_search_matches.first() {
self.log_scroll = idx;
}
}
self.logs_need_update = false;
}
}
/// Trigger log processing when search/filter changes
pub fn mark_logs_for_update(&mut self) {
self.logs_need_update = true;
self.request_log_processing_update();
}
/// Get combined app and system logs for background processing
pub fn get_combined_logs(&self) -> Vec<String> {
let mut all_logs = Vec::new();
// Add app logs
for log in &self.logs {
all_logs.push(log.clone());
}
// Add system logs
for log in wrkflw_logging::get_logs() {
all_logs.push(log.clone());
}
all_logs
}
/// Add a log entry and trigger log processing update
pub fn add_log(&mut self, message: String) {
self.logs.push(message);
self.mark_logs_for_update();
}
/// Add a formatted log entry with timestamp and trigger log processing update
pub fn add_timestamped_log(&mut self, message: &str) {
let timestamp = Local::now().format("%H:%M:%S").to_string();
let formatted_message = format!("[{}] {}", timestamp, message);
self.add_log(formatted_message);
}
}


@@ -12,6 +12,7 @@
pub mod app;
pub mod components;
pub mod handlers;
pub mod log_processor;
pub mod models;
pub mod utils;
pub mod views;

crates/ui/src/log_processor.rs (new file)

@@ -0,0 +1,305 @@
// Background log processor for asynchronous log filtering and formatting
use crate::models::LogFilterLevel;
use ratatui::{
style::{Color, Style},
text::{Line, Span},
widgets::{Cell, Row},
};
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};
/// Processed log entry ready for rendering
#[derive(Debug, Clone)]
pub struct ProcessedLogEntry {
pub timestamp: String,
pub log_type: String,
pub log_style: Style,
pub content_spans: Vec<Span<'static>>,
}
impl ProcessedLogEntry {
/// Convert to a table row for rendering
pub fn to_row(&self) -> Row<'static> {
Row::new(vec![
Cell::from(self.timestamp.clone()),
Cell::from(self.log_type.clone()).style(self.log_style),
Cell::from(Line::from(self.content_spans.clone())),
])
}
}
/// Request to update log processing parameters
#[derive(Debug, Clone)]
pub struct LogProcessingRequest {
pub search_query: String,
pub filter_level: Option<LogFilterLevel>,
pub app_logs: Vec<String>, // Complete app logs
pub app_logs_count: usize, // To detect changes in app logs
pub system_logs_count: usize, // To detect changes in system logs
}
/// Response with processed logs
#[derive(Debug, Clone)]
pub struct LogProcessingResponse {
pub processed_logs: Vec<ProcessedLogEntry>,
pub total_log_count: usize,
pub filtered_count: usize,
pub search_matches: Vec<usize>, // Indices of logs that match search
}
/// Background log processor
pub struct LogProcessor {
request_tx: mpsc::Sender<LogProcessingRequest>,
response_rx: mpsc::Receiver<LogProcessingResponse>,
_worker_handle: thread::JoinHandle<()>,
}
impl LogProcessor {
/// Create a new log processor with a background worker thread
pub fn new() -> Self {
let (request_tx, request_rx) = mpsc::channel::<LogProcessingRequest>();
let (response_tx, response_rx) = mpsc::channel::<LogProcessingResponse>();
let worker_handle = thread::spawn(move || {
Self::worker_loop(request_rx, response_tx);
});
Self {
request_tx,
response_rx,
_worker_handle: worker_handle,
}
}
/// Send a processing request (non-blocking)
pub fn request_update(
&self,
request: LogProcessingRequest,
) -> Result<(), mpsc::SendError<LogProcessingRequest>> {
self.request_tx.send(request)
}
/// Try to get the latest processed logs (non-blocking)
pub fn try_get_update(&self) -> Option<LogProcessingResponse> {
self.response_rx.try_recv().ok()
}
/// Background worker loop
fn worker_loop(
request_rx: mpsc::Receiver<LogProcessingRequest>,
response_tx: mpsc::Sender<LogProcessingResponse>,
) {
let mut last_request: Option<LogProcessingRequest> = None;
let mut last_processed_time = Instant::now();
let mut cached_logs: Vec<String> = Vec::new();
let mut cached_app_logs_count = 0;
let mut cached_system_logs_count = 0;
loop {
// Check for new requests with a timeout to allow periodic processing
let request = match request_rx.recv_timeout(Duration::from_millis(100)) {
Ok(req) => Some(req),
Err(mpsc::RecvTimeoutError::Timeout) => None,
Err(mpsc::RecvTimeoutError::Disconnected) => break,
};
// Update request if we received one
if let Some(req) = request {
last_request = Some(req);
}
// Process if we have a request and enough time has passed since last processing
if let Some(ref req) = last_request {
let should_process = last_processed_time.elapsed() > Duration::from_millis(50)
&& (cached_app_logs_count != req.app_logs_count
|| cached_system_logs_count != req.system_logs_count
|| cached_logs.is_empty());
if should_process {
// Refresh log cache if log counts changed
if cached_app_logs_count != req.app_logs_count
|| cached_system_logs_count != req.system_logs_count
|| cached_logs.is_empty()
{
cached_logs = Self::get_combined_logs(&req.app_logs);
cached_app_logs_count = req.app_logs_count;
cached_system_logs_count = req.system_logs_count;
}
let response = Self::process_logs(&cached_logs, req);
if response_tx.send(response).is_err() {
break; // Receiver disconnected
}
last_processed_time = Instant::now();
}
}
}
}
/// Get combined app and system logs
fn get_combined_logs(app_logs: &[String]) -> Vec<String> {
let mut all_logs = Vec::new();
// Add app logs
for log in app_logs {
all_logs.push(log.clone());
}
// Add system logs
for log in wrkflw_logging::get_logs() {
all_logs.push(log.clone());
}
all_logs
}
/// Process logs according to search and filter criteria
fn process_logs(all_logs: &[String], request: &LogProcessingRequest) -> LogProcessingResponse {
// Filter logs based on search query and filter level
let mut filtered_logs = Vec::new();
let mut search_matches = Vec::new();
for (idx, log) in all_logs.iter().enumerate() {
let passes_filter = match &request.filter_level {
None => true,
Some(level) => level.matches(log),
};
let matches_search = if request.search_query.is_empty() {
true
} else {
log.to_lowercase()
.contains(&request.search_query.to_lowercase())
};
if passes_filter && matches_search {
filtered_logs.push((idx, log));
if matches_search && !request.search_query.is_empty() {
search_matches.push(filtered_logs.len() - 1);
}
}
}
// Process filtered logs into display format
let processed_logs: Vec<ProcessedLogEntry> = filtered_logs
.iter()
.map(|(_, log_line)| Self::process_log_entry(log_line, &request.search_query))
.collect();
LogProcessingResponse {
processed_logs,
total_log_count: all_logs.len(),
filtered_count: filtered_logs.len(),
search_matches,
}
}
/// Process a single log entry into display format
fn process_log_entry(log_line: &str, search_query: &str) -> ProcessedLogEntry {
// Extract timestamp from log format [HH:MM:SS]
let timestamp = if log_line.starts_with('[') && log_line.contains(']') {
let end = log_line.find(']').unwrap_or(0);
if end > 1 {
log_line[1..end].to_string()
} else {
"??:??:??".to_string()
}
} else {
"??:??:??".to_string()
};
// Determine log type and style
let (log_type, log_style) =
if log_line.contains("Error") || log_line.contains("error") || log_line.contains("")
{
("ERROR", Style::default().fg(Color::Red))
} else if log_line.contains("Warning")
|| log_line.contains("warning")
|| log_line.contains("⚠️")
{
("WARN", Style::default().fg(Color::Yellow))
} else if log_line.contains("Success")
|| log_line.contains("success")
|| log_line.contains("✅")
{
("SUCCESS", Style::default().fg(Color::Green))
} else if log_line.contains("Running")
|| log_line.contains("running")
|| log_line.contains("🔄")
{
("INFO", Style::default().fg(Color::Cyan))
} else if log_line.contains("Triggering") || log_line.contains("triggered") {
("TRIG", Style::default().fg(Color::Magenta))
} else {
("INFO", Style::default().fg(Color::Gray))
};
// Extract content after timestamp
let content = if log_line.starts_with('[') && log_line.contains(']') {
let start = log_line.find(']').unwrap_or(0) + 1;
log_line[start..].trim()
} else {
log_line
};
// Create content spans with search highlighting
let content_spans = if !search_query.is_empty() {
Self::highlight_search_matches(content, search_query)
} else {
vec![Span::raw(content.to_string())]
};
ProcessedLogEntry {
timestamp,
log_type: log_type.to_string(),
log_style,
content_spans,
}
}
/// Highlight search matches in content
fn highlight_search_matches(content: &str, search_query: &str) -> Vec<Span<'static>> {
let mut spans = Vec::new();
let lowercase_content = content.to_lowercase();
let lowercase_query = search_query.to_lowercase();
if lowercase_content.contains(&lowercase_query) {
let mut last_idx = 0;
while let Some(idx) = lowercase_content[last_idx..].find(&lowercase_query) {
let real_idx = last_idx + idx;
// Add text before match
if real_idx > last_idx {
spans.push(Span::raw(content[last_idx..real_idx].to_string()));
}
// Add matched text with highlight
let match_end = real_idx + search_query.len();
spans.push(Span::styled(
content[real_idx..match_end].to_string(),
Style::default().bg(Color::Yellow).fg(Color::Black),
));
last_idx = match_end;
}
// Add remaining text after last match
if last_idx < content.len() {
spans.push(Span::raw(content[last_idx..].to_string()));
}
} else {
spans.push(Span::raw(content.to_string()));
}
spans
}
}
impl Default for LogProcessor {
fn default() -> Self {
Self::new()
}
}
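
A minimal usage sketch of the API above (UI glue omitted; because of the 50ms debounce, results arrive asynchronously and callers poll on later ticks):

```rust
let processor = LogProcessor::new();
let _ = processor.request_update(LogProcessingRequest {
    search_query: "error".into(),
    filter_level: None,
    app_logs: vec!["[12:00:00] Error: boom".into()],
    app_logs_count: 1,
    system_logs_count: 0,
});
// Poll on a later tick; the worker replies once the debounce window passes.
std::thread::sleep(std::time::Duration::from_millis(200));
if let Some(resp) = processor.try_get_update() {
    println!("{} of {} logs matched", resp.filtered_count, resp.total_log_count);
}
```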


@@ -50,6 +50,7 @@ pub struct StepExecution {
}
/// Log filter levels
#[derive(Debug, Clone, PartialEq)]
pub enum LogFilterLevel {
Info,
Warning,


@@ -140,45 +140,8 @@ pub fn render_logs_tab(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App, a
f.render_widget(search_block, chunks[1]);
}
// Combine application logs with system logs
let mut all_logs = Vec::new();
// Now all logs should have timestamps in the format [HH:MM:SS]
// Process app logs
for log in &app.logs {
all_logs.push(log.clone());
}
// Process system logs
for log in wrkflw_logging::get_logs() {
all_logs.push(log.clone());
}
// Filter logs based on search query and filter level
let filtered_logs = if !app.log_search_query.is_empty() || app.log_filter_level.is_some() {
all_logs
.iter()
.filter(|log| {
let passes_filter = match &app.log_filter_level {
None => true,
Some(level) => level.matches(log),
};
let matches_search = if app.log_search_query.is_empty() {
true
} else {
log.to_lowercase()
.contains(&app.log_search_query.to_lowercase())
};
passes_filter && matches_search
})
.cloned()
.collect::<Vec<String>>()
} else {
all_logs.clone() // Clone to avoid moving all_logs
};
// Use processed logs from background thread instead of processing on every frame
let filtered_logs = &app.processed_logs;
// Create a table for logs for better organization
let header_cells = ["Time", "Type", "Message"]
@@ -189,109 +152,10 @@ pub fn render_logs_tab(f: &mut Frame<CrosstermBackend<io::Stdout>>, app: &App, a
.style(Style::default().add_modifier(Modifier::BOLD))
.height(1);
let rows = filtered_logs.iter().map(|log_line| {
// Parse log line to extract timestamp, type and message
// Extract timestamp from log format [HH:MM:SS]
let timestamp = if log_line.starts_with('[') && log_line.contains(']') {
let end = log_line.find(']').unwrap_or(0);
if end > 1 {
log_line[1..end].to_string()
} else {
"??:??:??".to_string() // Show placeholder for malformed logs
}
} else {
"??:??:??".to_string() // Show placeholder for malformed logs
};
let (log_type, log_style, _) =
if log_line.contains("Error") || log_line.contains("error") || log_line.contains("")
{
("ERROR", Style::default().fg(Color::Red), log_line.as_str())
} else if log_line.contains("Warning")
|| log_line.contains("warning")
|| log_line.contains("⚠️")
{
(
"WARN",
Style::default().fg(Color::Yellow),
log_line.as_str(),
)
} else if log_line.contains("Success")
|| log_line.contains("success")
|| log_line.contains("✅")
{
(
"SUCCESS",
Style::default().fg(Color::Green),
log_line.as_str(),
)
} else if log_line.contains("Running")
|| log_line.contains("running")
|| log_line.contains("🔄")
{
("INFO", Style::default().fg(Color::Cyan), log_line.as_str())
} else if log_line.contains("Triggering") || log_line.contains("triggered") {
(
"TRIG",
Style::default().fg(Color::Magenta),
log_line.as_str(),
)
} else {
("INFO", Style::default().fg(Color::Gray), log_line.as_str())
};
// Extract content after timestamp
let content = if log_line.starts_with('[') && log_line.contains(']') {
let start = log_line.find(']').unwrap_or(0) + 1;
log_line[start..].trim()
} else {
log_line.as_str()
};
// Highlight search matches in content if search is active
let mut content_spans = Vec::new();
if !app.log_search_query.is_empty() {
let lowercase_content = content.to_lowercase();
let lowercase_query = app.log_search_query.to_lowercase();
if lowercase_content.contains(&lowercase_query) {
let mut last_idx = 0;
while let Some(idx) = lowercase_content[last_idx..].find(&lowercase_query) {
let real_idx = last_idx + idx;
// Add text before match
if real_idx > last_idx {
content_spans.push(Span::raw(content[last_idx..real_idx].to_string()));
}
// Add matched text with highlight
let match_end = real_idx + app.log_search_query.len();
content_spans.push(Span::styled(
content[real_idx..match_end].to_string(),
Style::default().bg(Color::Yellow).fg(Color::Black),
));
last_idx = match_end;
}
// Add remaining text after last match
if last_idx < content.len() {
content_spans.push(Span::raw(content[last_idx..].to_string()));
}
} else {
content_spans.push(Span::raw(content));
}
} else {
content_spans.push(Span::raw(content));
}
Row::new(vec![
Cell::from(timestamp),
Cell::from(log_type).style(log_style),
Cell::from(Line::from(content_spans)),
])
});
// Convert processed logs to table rows - this is now very fast since logs are pre-processed
let rows = filtered_logs
.iter()
.map(|processed_log| processed_log.to_row());
let content_idx = if show_search_bar { 2 } else { 1 };

crates/utils/README.md (new file)

@@ -0,0 +1,21 @@
## wrkflw-utils
Shared helpers used across crates.
- Workflow file detection (`.github/workflows/*.yml`, `.gitlab-ci.yml`)
- File-descriptor redirection utilities for silencing noisy subprocess output
### Example
```rust
use std::path::Path;
use wrkflw_utils::{is_workflow_file, fd::with_stderr_to_null};
assert!(is_workflow_file(Path::new(".github/workflows/ci.yml")));
let value = with_stderr_to_null(|| {
    eprintln!("this is hidden");
    42
}).unwrap();
assert_eq!(value, 42);
```

crates/validators/README.md (new file)

@@ -0,0 +1,29 @@
## wrkflw-validators
Validation utilities for workflows and steps.
- Validates GitHub Actions sections: jobs, steps, actions references, triggers
- GitLab pipeline validation helpers
- Matrix-specific validation
### Example
```rust
use serde_yaml::Value;
use wrkflw_models::ValidationResult;
use wrkflw_validators::{validate_jobs, validate_triggers};
let yaml: Value = serde_yaml::from_str(r#"name: demo
on: [workflow_dispatch]
jobs: { build: { runs-on: ubuntu-latest, steps: [] } }
"#).unwrap();
let mut res = ValidationResult::new();
if let Some(on) = yaml.get("on") {
    validate_triggers(on, &mut res);
}
if let Some(jobs) = yaml.get("jobs") {
    validate_jobs(jobs, &mut res);
}
assert!(res.is_valid);
```

crates/wrkflw/README.md (new file)

@@ -0,0 +1,108 @@
## WRKFLW (CLI and Library)
This crate provides the `wrkflw` command-line interface and a thin library surface that ties together all WRKFLW subcrates. It lets you validate and execute GitHub Actions workflows and GitLab CI pipelines locally, with a built-in TUI for an interactive experience.
- **Validate**: Lints structure and common mistakes in workflow/pipeline files
- **Run**: Executes jobs locally using Docker, Podman, or emulation (no containers)
- **TUI**: Interactive terminal UI for browsing workflows, running, and viewing logs
- **Trigger**: Manually trigger remote runs on GitHub/GitLab
### Installation
```bash
cargo install wrkflw
```
### Quick start
```bash
# Launch the TUI (auto-loads .github/workflows)
wrkflw
# Validate all workflows in the default directory
wrkflw validate
# Validate a specific file or directory
wrkflw validate .github/workflows/ci.yml
wrkflw validate path/to/workflows
# Run a workflow (Docker by default)
wrkflw run .github/workflows/ci.yml
# Use Podman or emulation instead of Docker
wrkflw run --runtime podman .github/workflows/ci.yml
wrkflw run --runtime emulation .github/workflows/ci.yml
# Open the TUI explicitly
wrkflw tui
wrkflw tui --runtime podman
```
### Commands
- **validate**: Validate a workflow/pipeline file or directory
  - GitHub (default): `.github/workflows/*.yml`
  - GitLab: `.gitlab-ci.yml` or files ending with `gitlab-ci.yml`
  - Exit code behavior (by default): `1` when validation failures are detected
  - Flags: `--gitlab`, `--exit-code`, `--no-exit-code`, `--verbose`
- **run**: Execute a workflow or pipeline locally
  - Runtimes: `docker` (default), `podman`, `emulation`
  - Flags: `--runtime`, `--preserve-containers-on-failure`, `--gitlab`, `--verbose`
- **tui**: Interactive terminal interface
  - Browse workflows, execute, and inspect logs and job details
- **trigger**: Trigger a GitHub workflow (requires `GITHUB_TOKEN`)
- **trigger-gitlab**: Trigger a GitLab pipeline (requires `GITLAB_TOKEN`)
- **list**: Show detected workflows and pipelines in the repo
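
The quick start above covers `validate`, `run`, and `tui`; invocation sketches for the remaining commands (argument shapes are assumptions, check `wrkflw --help` for the exact flags):

```bash
# Trigger a GitHub workflow remotely (requires GITHUB_TOKEN; arguments assumed)
wrkflw trigger ci

# Trigger a GitLab pipeline (requires GITLAB_TOKEN; arguments assumed)
wrkflw trigger-gitlab

# List detected workflows and pipelines
wrkflw list
```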
### Environment variables
- **GITHUB_TOKEN**: Required for `trigger` when calling GitHub
- **GITLAB_TOKEN**: Required for `trigger-gitlab` (api scope)
### Exit codes
- `validate`: `0` if all pass; `1` if any fail (unless `--no-exit-code`)
- `run`: `0` on success, `1` if execution fails
### Library usage
This crate re-exports subcrates for convenience if you want to embed functionality:
```rust
use std::path::Path;
use wrkflw::executor::{execute_workflow, ExecutionConfig, RuntimeType};
# tokio_test::block_on(async {
let cfg = ExecutionConfig {
    runtime_type: RuntimeType::Docker,
    verbose: true,
    preserve_containers_on_failure: false,
};
let result = execute_workflow(Path::new(".github/workflows/ci.yml"), cfg).await?;
println!("status: {:?}", result.summary_status);
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
You can also run the TUI programmatically:
```rust
use std::path::PathBuf;
use wrkflw::executor::RuntimeType;
use wrkflw::ui::run_wrkflw_tui;
# tokio_test::block_on(async {
let path = PathBuf::from(".github/workflows");
run_wrkflw_tui(Some(&path), RuntimeType::Docker, true, false).await?;
# Ok::<_, Box<dyn std::error::Error>>(())
# })?;
```
### Notes
- See the repository root README for feature details, limitations, and a full walkthrough.
- Service containers and advanced Actions features are best supported in Docker/Podman modes.
- Emulation mode skips containerized steps and runs commands on the host.


@@ -0,0 +1,120 @@
use std::fs;
use tempfile::tempdir;
use wrkflw::executor::engine::{execute_workflow, ExecutionConfig, RuntimeType};

fn write_file(path: &std::path::Path, content: &str) {
    fs::write(path, content).expect("failed to write file");
}

#[tokio::test]
async fn test_local_reusable_workflow_execution_success() {
    // Create temp workspace
    let dir = tempdir().unwrap();
    let called_path = dir.path().join("called.yml");
    let caller_path = dir.path().join("caller.yml");

    // Minimal called workflow with one successful job
    let called = r#"
name: Called
on: workflow_dispatch
jobs:
  inner:
    runs-on: ubuntu-latest
    steps:
      - run: echo "hello from called"
"#;
    write_file(&called_path, called);

    // Caller workflow that uses the called workflow via absolute local path
    let caller = format!(
        r#"
name: Caller
on: workflow_dispatch
jobs:
  call:
    uses: {}
    with:
      foo: bar
    secrets:
      token: testsecret
"#,
        called_path.display()
    );
    write_file(&caller_path, &caller);

    // Execute caller workflow with emulation runtime
    let cfg = ExecutionConfig {
        runtime_type: RuntimeType::Emulation,
        verbose: false,
        preserve_containers_on_failure: false,
    };
    let result = execute_workflow(&caller_path, cfg)
        .await
        .expect("workflow execution failed");

    // Expect a single caller job summarized
    assert_eq!(result.jobs.len(), 1, "expected one caller job result");
    let job = &result.jobs[0];
    assert_eq!(job.name, "call");
    assert_eq!(format!("{:?}", job.status), "Success");

    // Summary step should include reference to called workflow and inner job status
    assert!(
        job.logs.contains("Called workflow:"),
        "expected summary logs to include called workflow path"
    );
    assert!(
        job.logs.contains("- inner: Success"),
        "expected inner job success in summary"
    );
}

#[tokio::test]
async fn test_local_reusable_workflow_execution_failure_propagates() {
    // Create temp workspace
    let dir = tempdir().unwrap();
    let called_path = dir.path().join("called.yml");
    let caller_path = dir.path().join("caller.yml");

    // Called workflow with failing job
    let called = r#"
name: Called
on: workflow_dispatch
jobs:
  inner:
    runs-on: ubuntu-latest
    steps:
      - run: false
"#;
    write_file(&called_path, called);

    // Caller workflow
    let caller = format!(
        r#"
name: Caller
on: workflow_dispatch
jobs:
  call:
    uses: {}
"#,
        called_path.display()
    );
    write_file(&caller_path, &caller);

    // Execute caller workflow
    let cfg = ExecutionConfig {
        runtime_type: RuntimeType::Emulation,
        verbose: false,
        preserve_containers_on_failure: false,
    };
    let result = execute_workflow(&caller_path, cfg)
        .await
        .expect("workflow execution failed");

    assert_eq!(result.jobs.len(), 1);
    let job = &result.jobs[0];
    assert_eq!(job.name, "call");
    assert_eq!(format!("{:?}", job.status), "Failure");
    assert!(job.logs.contains("- inner: Failure"));
}


@@ -0,0 +1,18 @@
name: Test Runs-On Array Format

on: [push]

jobs:
  test-array-runs-on:
    timeout-minutes: 15
    runs-on: [self-hosted, ubuntu, small]
    steps:
      - name: Test step
        run: echo "Testing array format for runs-on"

  test-string-runs-on:
    timeout-minutes: 15
    runs-on: ubuntu-latest
    steps:
      - name: Test step
        run: echo "Testing string format for runs-on"