Compare commits


336 Commits

Author SHA1 Message Date
vegu-ai-tools
65a8cbf853 linting 2025-10-24 18:02:48 +03:00
vegu-ai-tools
7dbff08056 restore input focus after autocomplete 2025-10-24 18:02:20 +03:00
vegu-ai-tools
159efc500d improve configuration issue alert visibility 2025-10-24 17:56:38 +03:00
vegu-ai-tools
c0a11b8546 fix some errors during kcpp client deletion 2025-10-24 17:46:42 +03:00
vegu-ai-tools
f90099999c nothing to determine if no model is sent 2025-10-24 17:46:28 +03:00
vegu-ai-tools
3ca4299214 linting 2025-10-23 19:02:49 +03:00
vegu-ai-tools
260f9fa374 enhance error logging in background processing to include traceback information 2025-10-23 19:02:08 +03:00
vegu-ai-tools
bed6158003 remove debug cruft 2025-10-23 01:47:53 +03:00
vegu-ai-tools
bc5234a3a4 linting 2025-10-21 23:55:14 +03:00
vegu-ai-tools
ca1f761ad7 separate message processing from main loop 2025-10-21 23:53:05 +03:00
vegu-ai-tools
1422fc4541 unhandled errors at the loop level should not crash the entire scene 2025-10-21 23:46:01 +03:00
vegu-ai-tools
b71d9a580f formatting fixes 2025-10-21 02:10:18 +03:00
vegu-ai-tools
248333139a increase font size 2025-10-21 02:01:58 +03:00
vegu-ai-tools
fb4e73c6e8 fix issue where cancelling some generations would cause errors 2025-10-21 01:54:57 +03:00
vegu-ai-tools
ae36f8491c removing base attribute or detail also clears it from shared list 2025-10-21 01:48:08 +03:00
vegu-ai-tools
501668b2fe docs 2025-10-19 23:04:01 +03:00
vegu-ai-tools
cbcef92ed7 linting 2025-10-19 22:49:15 +03:00
vegu-ai-tools
8a95b02099 ":" in world entry titles will now load correctly 2025-10-19 22:49:01 +03:00
vegu-ai-tools
5c33723b7b docs 2025-10-19 22:04:07 +03:00
vegu-ai-tools
1ab24396f1 summarizer now fires off of push_history.after 2025-10-19 18:49:56 +03:00
vegu-ai-tools
b6729f3290 tweak defaults 2025-10-19 18:31:26 +03:00
vegu-ai-tools
913db13590 linting 2025-10-19 17:32:30 +03:00
vegu-ai-tools
ba1e64d359 only allow forking on saved messages 2025-10-19 17:32:14 +03:00
vegu-ai-tools
519b600bc9 Update RequestInput.vue to handle extra_params more robustly, ensuring defaults are set correctly for input. 2025-10-19 17:31:44 +03:00
vegu-ai-tools
9901c36af6 emit_status export rev 2025-10-19 17:29:27 +03:00
vegu-ai-tools
6d4bfd59ac forked scenes reset memory id and are not immutable 2025-10-19 17:17:37 +03:00
vegu-ai-tools
466bac8061 Refactor scene reference handling in delete_changelog_files to prevent incorrect deletions. Added a test to verify proper scene reference construction and ensure changelog files are deleted correctly. 2025-10-19 17:17:20 +03:00
vegu-ai-tools
6328062c3d gracefully handle removed attributes 2025-10-19 17:10:37 +03:00
vegu-ai-tools
f9c1228b3e linting 2025-10-19 15:26:50 +03:00
vegu-ai-tools
5d40b650dc prompt tweaks 2025-10-19 15:26:39 +03:00
vegu-ai-tools
aeef4c266f improve autocomplete handling when prefill isn't available 2025-10-19 15:20:34 +03:00
vegu-ai-tools
e7180b0dd5 fix issue where fork / restore would restore duplicate messages 2025-10-19 14:45:01 +03:00
vegu-ai-tools
154a02adf0 opus 4.5 isn't a thing 2025-10-17 19:30:34 +03:00
vegu-ai-tools
fe73970f67 add haiku 4.5 model and make it the default 2025-10-17 19:26:42 +03:00
vegu-ai-tools
007b944c4a linting 2025-10-17 18:32:21 +03:00
vegu-ai-tools
8a79edc693 Update default_player_character assignment in ConfigPlugin to use GamePlayerCharacter schema for improved data validation 2025-10-17 18:32:05 +03:00
vegu-ai-tools
1e29c7eab4 fix issue where valid data processed in extract_data_with_ai_fallback was not returned 2025-10-15 02:20:10 +03:00
vegu-ai-tools
07f1a72618 Add field validator for lock_template in Client model to ensure boolean value is returned 2025-10-14 01:37:22 +03:00
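The lock_template commits above tighten validation on the Client config model. A minimal pydantic v2 sketch of that kind of coercing validator, illustrative only and not the project's actual model:

```python
# Illustrative sketch only: coerce lock_template to a real boolean,
# treating None from older configs as False (pydantic v2 assumed).
from pydantic import BaseModel, field_validator


class Client(BaseModel):
    lock_template: bool | None = False

    @field_validator("lock_template", mode="before")
    @classmethod
    def ensure_bool(cls, value):
        # None (legacy configs) and other falsy values become False.
        return bool(value) if value is not None else False
```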
vegu-ai-tools
d51c7a2700 Refactor lock_template field in Client model and ClientModal component to ensure consistent boolean handling 2025-10-14 01:34:11 +03:00
vegu-ai-tools
bb14f90669 Remove unused template_file field from Defaults model in Client configuration 2025-10-14 01:29:50 +03:00
vegu-ai-tools
4791918e34 Update lock_template field in Client model to allow None type in addition to bool 2025-10-14 01:26:01 +03:00
vegu-ai-tools
5d6a4eef63 docs 2025-10-14 00:55:58 +03:00
vegu-ai-tools
875deb2682 Update CharacterContextItem to allow 'value' to accept dict type in addition to existing types 2025-10-14 00:10:18 +03:00
vegu-ai-tools
cc60ee1beb fix direct_narrator character argument 2025-10-13 23:48:22 +03:00
vegu-ai-tools
db8c021b68 There is no longer a point to enforcing creative mode when there are no characters 2025-10-13 23:26:51 +03:00
vegu-ai-tools
7d7f210a2f persist client template lock through model changes 2025-10-13 12:01:33 +03:00
vegu-ai-tools
27bc0a5b2f linting 2025-10-13 00:41:13 +03:00
vegu-ai-tools
d25fdd9422 Add lock_template feature to Client configuration and update related components for template management 2025-10-13 00:40:58 +03:00
vegu-ai-tools
f5a2b9a67b Add TODO comments in finalize_llama3 and finalize_YI methods to indicate removable cruft 2025-10-12 23:26:01 +03:00
vegu-ai-tools
8ebcd4ba5d fix crash when attempting to delete some clients 2025-10-12 22:51:29 +03:00
vegu-ai-tools
ffc6d75f46 Refine agent persona description in WorldStateManagerTemplates to clarify assignment per agent in Scene Settings, maintaining focus on current director-only support. 2025-10-12 18:11:06 +03:00
vegu-ai-tools
160b9c4e69 Update agent persona description in WorldStateManagerTemplates to specify current support for director only, enhancing clarity for users. 2025-10-12 18:08:47 +03:00
vegu-ai-tools
2e925c95b5 Enhance NewSceneSetupModal to include subtitles for writing styles and director personas, improving context and usability. 2025-10-12 18:08:40 +03:00
vegu-ai-tools
7edb830246 docs 2025-10-12 18:04:07 +03:00
vegu-ai-tools
61bde467cd Enhance NodeEditorLibrary by adding primary color to tree component for improved visibility and user experience. 2025-10-12 18:03:53 +03:00
vegu-ai-tools
1274155e84 docs 2025-10-12 17:06:02 +03:00
vegu-ai-tools
7569998cdb Add support for project-specific grouping in NodeEditorLibrary for templates/modules, enhancing organization of node groups. 2025-10-12 17:05:11 +03:00
vegu-ai-tools
a9fed9a4dd Add Nexus agent persona to talemate template and initialize phrases array 2025-10-12 14:51:41 +03:00
vegu-ai-tools
7f0c8a339e 0.33 added 2025-10-12 01:14:00 +03:00
vegu-ai-tools
2cc1c8c6ed director action module updates 2025-10-11 19:31:40 +03:00
vegu-ai-tools
ce03243ecf linting 2025-10-11 19:31:08 +03:00
vegu-ai-tools
1923282727 Increase maximum changelog file size limit from 500KB to 1MB to accommodate larger change logs. 2025-10-11 19:28:47 +03:00
vegu-ai-tools
0b41f76b07 Update EmitWorldEditorSync node to include websocket passthrough in sync action for improved event handling. 2025-10-11 19:18:13 +03:00
vegu-ai-tools
5d9fdca5d8 Update card styles in IntroRecentScenes.vue for improved visual consistency; change card color to grey-darken-3 and adjust text classes for titles and subtitles. 2025-10-11 19:17:47 +03:00
vegu-ai-tools
0b1a2c5159 Remove debug diagnostics from DirectorConsoleChats.vue to clean up console output. 2025-10-11 19:17:42 +03:00
vegu-ai-tools
b9dbe9179c Update usageCheatSheet in DirectorConsoleChatsToolbar.vue to include recommendation for 100B+ models. 2025-10-11 19:17:36 +03:00
vegu-ai-tools
9ec55a0004 director action updates 2025-10-11 14:08:44 +03:00
vegu-ai-tools
7529726df9 direct context update fn 2025-10-11 13:52:59 +03:00
vegu-ai-tools
ebc2ae804b director action module updates 2025-10-11 13:51:47 +03:00
vegu-ai-tools
2b3cb8d101 Update Anthropic client with new models and adjust default settings; introduce limited parameter models for specific configurations. 2025-10-11 13:51:22 +03:00
vegu-ai-tools
6c3b53e37c Add EmitWorldEditorSync node to handle world editor synchronization; update WorldStateManager to refresh active tab on sync action. 2025-10-11 13:51:14 +03:00
vegu-ai-tools
3dee0ec0e9 relock 2025-10-11 12:39:11 +03:00
vegu-ai-tools
504707796c linting 2025-10-10 16:28:44 +03:00
vegu-ai-tools
0f59e1cd21 Add cover image and writing style sections to story and character templates; update chat common tasks with new scene restrictions and user guide reference. 2025-10-10 16:28:34 +03:00
vegu-ai-tools
2f742b95d6 Update usageCheatSheet text in DirectorConsoleChatsToolbar.vue for clarity and add pre-wrap styling to tooltip 2025-10-10 16:18:00 +03:00
vegu-ai-tools
2627972d8b Refactor toggleNavigation method to accept an 'open' parameter for direct control over drawer visibility in TalemateApp.vue 2025-10-10 16:03:33 +03:00
vegu-ai-tools
934e62dded Add building blocks template for story configuration and scene management 2025-10-10 16:03:24 +03:00
vegu-ai-tools
238ff1dfe5 fix update_introduction 2025-10-10 15:46:44 +03:00
vegu-ai-tools
8b4e1962c4 nodes 2025-10-10 15:43:18 +03:00
vegu-ai-tools
d3a9c7f2c1 fix tests 2025-10-09 04:33:07 +03:00
vegu-ai-tools
b8df7cfed8 linting 2025-10-09 04:17:07 +03:00
vegu-ai-tools
4b27917173 fix issue with data structure parsing 2025-10-09 04:16:17 +03:00
vegu-ai-tools
8b94312f1f Add MAX_CONTENT_WIDTH constant and update components to use it for consistent max width styling 2025-10-09 03:13:52 +03:00
vegu-ai-tools
15aea906c5 Update SharedContext to use await for set_shared method, ensuring proper asynchronous handling when modifying character sharing status. 2025-10-09 02:51:32 +03:00
vegu-ai-tools
fe36431a27 Refactor WorldStateManagerSceneSharedContext.vue to improve cancel functionality by introducing a dedicated cancelCreate method and removing the direct dialog toggle from the Cancel button. This enhances code clarity and maintainability. 2025-10-09 02:51:28 +03:00
vegu-ai-tools
f6dabc18eb Add intent_state to SceneInitialization model and update load_scene_from_data function to handle intent state. Introduce story_intent property in Scene class and reset method in SceneIntent class. Update WorldStateManagerSceneSharedContext.vue to include intent state in scene initialization parameters. 2025-10-09 02:41:01 +03:00
vegu-ai-tools
b5e30600fa Refactor CoverImage component to enhance drag-and-drop functionality and improve styling for empty portrait state. 2025-10-09 01:56:42 +03:00
vegu-ai-tools
765cd5799c Add assets field to SceneInitialization model and update load_scene_from_data function to handle scene assets. Update WorldStateManagerSceneSharedContext.vue to include assets in scene initialization parameters. 2025-10-09 01:45:13 +03:00
vegu-ai-tools
9457620767 linting 2025-10-08 00:37:30 +03:00
vegu-ai-tools
8b07422939 nodes updated 2025-10-07 03:33:57 +03:00
vegu-ai-tools
4eb185e895 Add CreateStaticArchiveEntry and RemoveStaticArchiveEntry nodes for managing static history entries. Implement input/output properties and error handling for entry creation and deletion. 2025-10-07 03:33:16 +03:00
vegu-ai-tools
78d434afd3 Add "static history" option to ContextualGenerate node for enhanced contextual generation capabilities. 2025-10-07 03:33:10 +03:00
vegu-ai-tools
dc2b1c9149 Add is_static property to HistoryEntry for static history entry identification 2025-10-07 03:33:01 +03:00
vegu-ai-tools
511a33f69f allow contextual generation of static history entries 2025-10-07 03:32:52 +03:00
vegu-ai-tools
0e3eb15fce linting 2025-10-05 03:29:33 +03:00
vegu-ai-tools
f925758319 Refactor response identifier in RevisionMixin to dynamically use calculated response length for improved prompt handling. 2025-10-05 03:27:27 +03:00
vegu-ai-tools
81e1da3c21 Update response length calculation in RevisionMixin to include token count for improved text processing. 2025-10-05 03:25:02 +03:00
vegu-ai-tools
9dea2daef5 Enhance NarratorAgent to support dynamic response length configuration. Updated max generation length from 192 to 256 tokens and introduced a new method to calculate response length. Modified narration methods to accept and utilize response length parameter. Added response length property in GenerateNarrationBase class and updated templates to include response length handling. 2025-10-05 03:24:39 +03:00
vegu-ai-tools
1c592a438f Add Seed.jinja2 template for LLM prompts with reasoning patterns and user interaction handling 2025-10-05 03:10:44 +03:00
vegu-ai-tools
fadf4b8f2d allow prompt templates to specify reasoning pattern 2025-10-05 03:10:31 +03:00
vegu-ai-tools
22f97f60ea Update GLM-no-reasoning template to include <think></think> tag before coercion message for improved prompt structure. 2025-10-05 01:52:32 +03:00
vegu-ai-tools
6782bfe93f Change log level from warning to debug for migrate_narrator_source_to_meta error handling in NarratorMessage class. 2025-10-04 23:18:07 +03:00
vegu-ai-tools
2c2d2f160c linting 2025-10-04 22:35:06 +03:00
vegu-ai-tools
3b85d007e2 Refactor WorldStateManager components to enhance history management and sharing capabilities. Added summarized history titles, improved UI for sharing static history, and integrated scene summarization functionality. Removed deprecated methods related to shared context settings. 2025-10-04 22:29:43 +03:00
vegu-ai-tools
ca1a1872ec Update icon for AgentWebsocketHandler in NodeEditorLibrary component to mdi-web-box 2025-10-04 22:29:32 +03:00
vegu-ai-tools
1f5baf9958 Update scene loading to allow setting scene ID from data and include ID in scene serialization 2025-10-04 22:29:22 +03:00
vegu-ai-tools
b9dcfd54a5 Enhance GetWorldEntry node to include 'shared' property in output values from world entry context 2025-10-04 22:29:08 +03:00
vegu-ai-tools
ceb998088f Update manual context handling in WorldStateManager to include shared property from existing context 2025-10-04 22:28:55 +03:00
vegu-ai-tools
a21117674b Add data property to QueueResponse class for websocket communication and update run method to include action and data in output values. 2025-10-04 22:22:39 +03:00
vegu-ai-tools
40bdf7b361 nodes 2025-10-04 22:22:26 +03:00
vegu-ai-tools
7161786b50 Add SummarizeWebsocketHandler to handle summarize actions and integrate it into SummarizeAgent 2025-10-04 20:01:18 +03:00
vegu-ai-tools
5c960784d9 Add check for node selectability in NodeEditorNodeSearch component to filter search results accordingly. 2025-10-04 20:01:06 +03:00
vegu-ai-tools
0fd6d01184 Add Agent Websocket Handler option to Node Editor Library with corresponding icons and labels 2025-10-04 20:01:01 +03:00
vegu-ai-tools
4a9522d030 Add characters output to ContextHistory node to track active participants in the scene 2025-10-04 20:00:52 +03:00
vegu-ai-tools
3a00e33dc1 Refactor init_nodes method in DirectorAgent to call superclass method and rename chat initialization method in DirectorChatMixin for clarity. 2025-10-04 20:00:43 +03:00
vegu-ai-tools
751a2acfcb agent websocket handler node support 2025-10-04 20:00:32 +03:00
vegu-ai-tools
acb3b66328 Add active frontend websocket handler management in websocket_endpoint 2025-10-04 19:59:07 +03:00
vegu-ai-tools
8d56eb1ff8 Update TalemateApp.vue to set the active tab to 'main' when switching to the node editor, improving navigation consistency. 2025-10-04 19:58:39 +03:00
vegu-ai-tools
435082935e prompt tweaks 2025-10-04 16:39:24 +03:00
vegu-ai-tools
d7fe0e36d9 prompt tweaks 2025-10-04 16:07:38 +03:00
vegu-ai-tools
0ae92c39de Implement logic to always show scene view in scene mode within TalemateApp.vue, enhancing user experience during scene interactions. 2025-10-04 15:41:19 +03:00
vegu-ai-tools
cfc84eb357 Refactor source entry attribute access in collect_source_entries function to use getattr for optional attributes, improving robustness. 2025-10-04 15:20:23 +03:00
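A hedged illustration of the getattr pattern the commit above describes; the entry fields shown here are hypothetical, not the actual collect_source_entries signature:

```python
def collect_source_entries(entries):
    """Collect entry metadata, tolerating entries that lack optional attributes."""
    collected = []
    for entry in entries:
        collected.append(
            {
                "id": entry.id,
                # Optional attributes: fall back to None instead of raising AttributeError.
                "title": getattr(entry, "title", None),
                "meta": getattr(entry, "meta", None),
            }
        )
    return collected
```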
vegu-ai-tools
0336f19066 linting 2025-10-04 14:42:25 +03:00
vegu-ai-tools
e2cca100f5 Update IntroRecentScenes.vue to use optional chaining for selectedScene properties and enhance backup timestamp display with revision info. 2025-10-04 14:41:50 +03:00
vegu-ai-tools
51d4ae57e9 Skip processing of changelog files in _list_files_and_directories function to prevent unnecessary inclusion in file listings. 2025-10-04 14:41:34 +03:00
vegu-ai-tools
8714fd1726 Update _apply_delta function to enhance delta application handling by adding parameters for error logging and force application of changes on non-existent paths. 2025-10-04 14:41:25 +03:00
vegu-ai-tools
7e4d3b7268 Enhance backup restore functionality by adding base and latest snapshot options; improve UI with clearer labels and alerts for restore actions. 2025-10-04 14:09:59 +03:00
vegu-ai-tools
117ea78f06 Add ensure_changelogs_for_all_scenes function to manage changelog files for all scenes; integrate it into the server run process. 2025-10-04 13:42:08 +03:00
vegu-ai-tools
5f396b22d2 Add update_from_scene method calls in SharedContextMixin for scene synchronization 2025-10-04 00:47:16 +03:00
vegu-ai-tools
430c69a83d Refactor character removal logic in shared context to prevent deletion; characters are now only marked as non-shared. 2025-10-04 00:43:23 +03:00
vegu-ai-tools
9d38432a8b avoid changed size error 2025-10-04 00:21:11 +03:00
vegu-ai-tools
bd378f4f44 missing arg 2025-10-04 00:17:35 +03:00
vegu-ai-tools
e800179a0c activate needs to happen explicitly now and deactivated is the default 2025-10-04 00:15:08 +03:00
vegu-ai-tools
a21c3d2ccf properly activate characters 2025-10-04 00:10:48 +03:00
vegu-ai-tools
d41953a70d linting 2025-10-03 23:46:10 +03:00
vegu-ai-tools
99efbade54 prompt tweaks 2025-10-03 23:45:20 +03:00
vegu-ai-tools
6db3cb72ff ensure character gets added to character_data 2025-10-03 23:45:15 +03:00
vegu-ai-tools
68ed364270 relock 2025-10-03 22:31:03 +03:00
vegu-ai-tools
665cc6f4b1 Refactor base_attributes type in Character model to a more generic dict type for improved flexibility 2025-10-03 22:29:59 +03:00
vegu-ai-tools
7ef5b70ff2 typo 2025-10-03 22:29:40 +03:00
vegu-ai-tools
a6d1065dcb Improve error handling in export_node_definitions by adding a try-except block for module path resolution. Log a warning if the relative path conversion fails. 2025-10-03 15:54:24 +03:00
vegu-ai-tools
33d093e5bb show icons 2025-10-03 15:52:19 +03:00
vegu-ai-tools
b2035ddebe Add relative_to_root function for path resolution and update node export logic
- Introduced a new function `relative_to_root` in path.py to resolve paths relative to the TALEMATE_ROOT.
- Updated the `export_node_definitions` function in registry.py to use `relative_to_root` for module path resolution.
- Added a check to skip non-selectable node definitions in litegraphUtils.js during registration.
2025-10-03 15:51:21 +03:00
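A rough sketch of what a `relative_to_root` helper could look like, assuming a TALEMATE_ROOT base path; the real path.py implementation may differ:

```python
from pathlib import Path

# Assumption for illustration; the project defines its own root constant.
TALEMATE_ROOT = Path(__file__).resolve().parents[2]


def relative_to_root(path: str | Path) -> Path:
    """Express a path relative to TALEMATE_ROOT when possible."""
    resolved = Path(path).resolve()
    try:
        return resolved.relative_to(TALEMATE_ROOT)
    except ValueError:
        # Path lives outside the project root; leave it absolute.
        return resolved
```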
vegu-ai-tools
3e06f8a64d node fixes 2025-10-03 15:07:39 +03:00
vegu-ai-tools
152fb9b474 linting 2025-10-03 14:48:26 +03:00
vegu-ai-tools
b7fdc3dc38 Enhance data extraction in Focal class by adding a fallback mechanism. Implemented additional error handling to attempt data extraction from a fenced block if the initial extraction fails, improving robustness in handling responses. 2025-10-03 14:47:49 +03:00
vegu-ai-tools
3cf5557780 prompt tweaks 2025-10-03 14:47:36 +03:00
vegu-ai-tools
e29496f650 reset scene message visibility on scene load 2025-10-03 14:47:30 +03:00
vegu-ai-tools
2bb1e45eb5 Enhance error handling in DynamicInstruction class by enforcing header requirement and ensuring content defaults to an empty string if not provided. 2025-10-03 14:47:07 +03:00
vegu-ai-tools
4dc580f630 remove debug msg 2025-10-03 14:46:58 +03:00
vegu-ai-tools
6a27c49594 node fixes 2025-10-03 14:45:57 +03:00
vegu-ai-tools
131479e29b Add chat template identifier support and error handling in ModelPrompt class
- Implemented logic to check for 'chat_template.jinja2' in Hugging Face repository.
- Added new template identifiers: GraniteIdentifier and GLMIdentifier.
- Enhanced error handling to avoid logging 404 errors for missing templates.
- Introduced Granite.jinja2 template file for prompt structure.
2025-10-03 14:45:46 +03:00
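The commit above probes Hugging Face repositories for a bundled chat_template.jinja2. A minimal sketch of that kind of lookup using huggingface_hub (an assumed dependency here; not the project's actual ModelPrompt code):

```python
from huggingface_hub import hf_hub_download
from huggingface_hub.utils import EntryNotFoundError


def fetch_chat_template(repo_id: str) -> str | None:
    """Return the repo's bundled chat template, or None if the file is absent."""
    try:
        path = hf_hub_download(repo_id=repo_id, filename="chat_template.jinja2")
    except EntryNotFoundError:
        # A missing template is normal for many models; don't log it as an error.
        return None
    with open(path, "r", encoding="utf-8") as fh:
        return fh.read()
```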
vegu-ai-tools
e6d528323b Remove plan.md 2025-10-03 12:39:18 +03:00
vegu-ai-tools
ccf6284442 linting 2025-10-02 15:41:07 +03:00
vegu-ai-tools
99a9488564 Update clear chat button logic to consider appBusy state in DirectorConsoleChatsToolbar component, enhancing user experience during busy operations. 2025-10-02 15:40:43 +03:00
vegu-ai-tools
42d08e5ac9 Refactor DirectorChatMixin to utilize standalone utility functions for parsing response sections and extracting action blocks. This improves code clarity and maintainability. Added tests for new utility functions in test_utils_prompt.py to ensure correct functionality. 2025-10-02 15:40:31 +03:00
vegu-ai-tools
b137546697 Add appBusy prop to DirectorConsoleChats and DirectorConsoleChatsToolbar components to manage button states during busy operations. 2025-10-02 15:37:37 +03:00
vegu-ai-tools
f13d306470 node updates 2025-10-02 14:48:30 +03:00
vegu-ai-tools
7aa274a0e0 linting 2025-10-02 14:44:38 +03:00
vegu-ai-tools
160818a26c Refactor ConfirmActionInline component to improve button rendering logic. Introduced 'size' prop for button customization and added 'comfortable' density option. Simplified icon handling with computed property for better clarity. 2025-10-02 14:44:26 +03:00
vegu-ai-tools
6953ccec69 director chat support remove message and regenerate message 2025-10-02 14:44:21 +03:00
vegu-ai-tools
b746b8773b Update EXTERNAL_DESCRIPTION in TabbyAPI client to include notes on EXL3 model sensitivity to inference parameters. Adjust handling of 'repetition_penalty_range' in parameter list for clarity. 2025-10-02 11:13:35 +03:00
vegu-ai-tools
8652e88ea8 Remove redundant question handling logic in DirectorChatMixin to streamline action selection process. 2025-10-01 22:45:56 +03:00
vegu-ai-tools
ad31e54e3a linting 2025-10-01 14:01:47 +03:00
vegu-ai-tools
748a2cfccf allow individual sharing of attributes and details 2025-10-01 13:54:08 +03:00
vegu-ai-tools
20db574155 Remove status emission for gameplay switch in CmdSetEnvironmentToScene class. 2025-10-01 10:32:37 +03:00
vegu-ai-tools
65e17f234f Add lastLoadedJSON property to GameState component for change detection. Update validation logic to prevent unnecessary updates when game state has not changed. 2025-10-01 10:29:41 +03:00
vegu-ai-tools
850679a0e8 Refactor GameState component to integrate Codemirror for JSON editing, replacing the previous treeview structure. Implement validation for JSON input and enhance error handling. Remove unused methods and streamline state management. 2025-10-01 10:27:13 +03:00
vegu-ai-tools
ae0749d173 linting 2025-10-01 02:41:35 +03:00
vegu-ai-tools
b13eb5be69 Add scene title generation to load process and update contextual generation template. Introduced a new method in AssistantMixin for generating scene titles, ensuring titles are concise and free of special characters. Updated load_scene_from_data to assign generated titles to scenes. 2025-10-01 02:41:26 +03:00
vegu-ai-tools
a4985c1888 Refine messages for shared context checkboxes in WorldStateManagerCharacter and WorldStateManagerWorldEntries components for clarity. 2025-10-01 02:41:14 +03:00
vegu-ai-tools
67571ec9be Update WorldStateManagerSceneSharedContext.vue to conditionally display alert based on scene saving status and new scene creation state. 2025-10-01 02:41:07 +03:00
vegu-ai-tools
29a5a5ac52 linting 2025-10-01 01:59:46 +03:00
vegu-ai-tools
60f5f20715 rename inheritance to scene initialization 2025-10-01 01:59:37 +03:00
vegu-ai-tools
00e3aa4a19 Add active_characters and intro_instructions to Inheritance model; implement intro generation in load_scene_from_data. Update WorldStateManagerSceneSharedContext.vue to enhance new scene creation dialog with character selection and premise instructions. 2025-10-01 01:44:26 +03:00
vegu-ai-tools
b59c3ab273 linting 2025-10-01 00:35:09 +03:00
vegu-ai-tools
4dbc824c07 Comment out 'repetition_penalty_range' in TabbyAPIClient to prevent unexpected "<unk><unk> .." responses. Further investigation needed. 2025-10-01 00:34:37 +03:00
vegu-ai-tools
0463fd37e5 Enhance chat modes by adding 'nospoilers' option to DirectorChat and related payloads. Update chat instructions to reflect new mode behavior and improve UI to support mode-specific icons and colors in the DirectorConsoleChatsToolbar. 2025-09-30 19:56:35 +03:00
vegu-ai-tools
a18a43cbe6 linting 2025-09-30 19:18:35 +03:00
vegu-ai-tools
b1cbaf650f Update WorldStateManagerSceneSharedContext.vue to clarify sharing of character, world entries, and history across connected scenes. 2025-09-30 19:18:22 +03:00
vegu-ai-tools
73c544211a shared context static history support
fix context memory db imports to always import
2025-09-30 19:15:29 +03:00
vegu-ai-tools
c65a7889d3 Enhance SharedContext.update_to_scene method to properly add or update character data in the scene based on existence checks. This improves the synchronization of character states between shared context and scene. 2025-09-30 14:08:34 +03:00
vegu-ai-tools
dc9e297587 Character.update deserializes voice value correctly 2025-09-30 14:08:19 +03:00
vegu-ai-tools
5d361331d5 comment 2025-09-30 14:07:48 +03:00
vegu-ai-tools
0b8810073f Refactor NodeEditor and TalemateApp components to enhance UI interactions. Removed the exit creative mode button from NodeEditor and updated tooltips for clarity. Adjusted app bar navigation icons for better accessibility and added functionality to switch between node editor and creative mode. 2025-09-30 13:44:30 +03:00
vegu-ai-tools
6ba65ff75e Refactor NodeEditorLibrary to improve search functionality and debounce input handling. Updated v-text-field model and added a watcher for search input to enhance performance. 2025-09-30 13:22:18 +03:00
veguAI
291921a9f2 Shared context 2 (#19)
Shared context
2025-09-30 03:26:48 +03:00
vegu-ai-tools
1b8ba12e61 fix world editor auto sync 2025-09-28 23:17:51 +03:00
vegu-ai-tools
9adbb2c518 fix button 2025-09-28 23:17:42 +03:00
vegu-ai-tools
883dffdd73 store character data at unified point 2025-09-28 22:41:17 +03:00
vegu-ai-tools
0c5fd2e48d Enhance DirectorConsoleChatsToolbar by adding a usage cheat sheet tooltip for user guidance and refining the Clear Chat button's UI for better accessibility. 2025-09-28 15:56:26 +03:00
vegu-ai-tools
7a6ae0f135 Update chat instructions to clarify user intent considerations and enhance decisiveness in responses. Added guidance on distinguishing between scene progression and background changes, and refined analysis requirements for user interactions. 2025-09-28 15:56:19 +03:00
vegu-ai-tools
6da7b29b94 Refactor push_history method to be asynchronous across multiple agents and scenes, ensuring consistent handling of message history updates. 2025-09-28 15:13:45 +03:00
vegu-ai-tools
b423bc3a18 Add scene progression guidance to chat-common-tasks template 2025-09-28 15:13:32 +03:00
vegu-ai-tools
bf8a580c33 relock 2025-09-28 14:41:55 +03:00
vegu-ai-tools
734b2bab19 linting 2025-09-28 14:39:27 +03:00
vegu-ai-tools
46afdeeb0b responsive layout fixes in template editors 2025-09-28 14:39:00 +03:00
vegu-ai-tools
674dfc5978 anchor clear chat confirm to top 2025-09-28 14:34:09 +03:00
vegu-ai-tools
71595a1fff Enhance ConfirmActionPrompt component by adding anchorTop prop for dynamic alignment and adjusting icon size and color for improved UI consistency. 2025-09-28 14:33:52 +03:00
vegu-ai-tools
219b5e2786 Enhance action handling in DirectorChatMixin by skipping actions when a question is present in the parsed response, ensuring better response accuracy. 2025-09-28 14:26:00 +03:00
vegu-ai-tools
1243d03718 director summary returns appropriately when no action is taken 2025-09-28 13:40:27 +03:00
vegu-ai-tools
ac9c66915b prompt tweaks 2025-09-28 05:11:25 +03:00
vegu-ai-tools
00b3c05f3d node updates 2025-09-28 05:03:24 +03:00
vegu-ai-tools
308363c93c node updates 2025-09-28 04:55:28 +03:00
vegu-ai-tools
78cc9334d3 node updates 2025-09-28 04:24:39 +03:00
vegu-ai-tools
e74e9c679a Add data_expected attribute to Focal and Prompt classes for enhanced response handling 2025-09-28 04:24:22 +03:00
vegu-ai-tools
6549d65ee8 linting 2025-09-28 03:51:29 +03:00
vegu-ai-tools
babe77929c node adjustments 2025-09-28 03:51:15 +03:00
vegu-ai-tools
531c0b4e87 prompt tweaks 2025-09-28 03:51:10 +03:00
vegu-ai-tools
3dc2269678 Add additional outputs for context validation in ValidateContextIDItem node, including context type, context value, and name. 2025-09-28 03:51:02 +03:00
vegu-ai-tools
26d7886c31 Add string replacement functionality and Jinja2 formatting support in nodes. Introduced 'old' and 'new' properties for substring replacement in the Replace node, and added a new Jinja2Format node for template rendering using jinja2. 2025-09-28 03:50:52 +03:00
vegu-ai-tools
3dadf49a69 Add context type output and filtering for creative context ID meta entries in PathToContextID and ContextIDMetaEntries nodes 2025-09-28 03:50:39 +03:00
vegu-ai-tools
4cb612bc23 prompt tweaks 2025-09-28 01:08:30 +03:00
vegu-ai-tools
b85b983522 node updates 2025-09-28 01:08:12 +03:00
vegu-ai-tools
922f520ec3 linting 2025-09-27 16:07:20 +03:00
vegu-ai-tools
555d90e53a immutable scenes should reset context db on load 2025-09-27 16:07:07 +03:00
vegu-ai-tools
e84c36a31b Enhance scene view toggle functionality to support shift-click behavior for closing all drawers when hiding the scene view. 2025-09-27 16:00:00 +03:00
vegu-ai-tools
072cd7fd12 linting 2025-09-27 15:53:37 +03:00
vegu-ai-tools
b9f5423f92 gamestate nodes 2025-09-27 15:52:17 +03:00
vegu-ai-tools
e9f0e4124a Add UnpackGameState node to retrieve and unpack game state variables 2025-09-27 15:52:07 +03:00
vegu-ai-tools
c361f4723b Add DictUpdate node 2025-09-27 15:51:25 +03:00
vegu-ai-tools
e0c92be628 Add 'data_multiple' property to GenerateResponse class to allow multiple data structures in responses. Update output socket type for 'data_obj' to support both dict and list formats. 2025-09-27 14:38:47 +03:00
vegu-ai-tools
52f07b26fa Refactor Prompt class by removing LoopedPrompt and cleaning up related methods. Update data response parsing to streamline functionality and improve clarity. Adjust imports accordingly. 2025-09-27 14:38:29 +03:00
vegu-ai-tools
1c7d28f83c Add gamestate context support in BuildPrompt and corresponding template. Introduced new property for gamestate context and updated rendering logic to include gamestate information in prompts. 2025-09-27 14:32:26 +03:00
vegu-ai-tools
5fb22467f2 prompt tweaks 2025-09-27 14:31:45 +03:00
vegu-ai-tools
b9f4d0a88a unified data extraction function 2025-09-27 14:31:17 +03:00
vegu-ai-tools
fb43310049 node updates 2025-09-27 11:43:05 +03:00
vegu-ai-tools
e98cafa63d Update message input hint in TalemateApp component to include keyboard shortcuts for navigating input history (Ctrl+Up/Down). 2025-09-26 22:24:54 +03:00
vegu-ai-tools
ae82e7ba2d Add input history functionality to message input in TalemateApp component. Implement keyboard shortcuts for navigating history (Ctrl+Up/Down) and limit history to the last 10 messages. Update message sending logic to store messages in history. 2025-09-26 22:24:06 +03:00
vegu-ai-tools
bad9453ba1 openrouter: fetch models on key set 2025-09-26 21:50:10 +03:00
vegu-ai-tools
f41506eeeb linting 2025-09-26 21:43:28 +03:00
vegu-ai-tools
6f47edbb27 Remove agent messages from state when opening agent message view in SceneTools component. 2025-09-26 21:43:13 +03:00
vegu-ai-tools
67facdf39e Add message emission for actor, narrator, and scene analysis guidance in respective components. Enhance AgentMessages and SceneTools for better message handling and visual feedback. 2025-09-26 21:39:37 +03:00
vegu-ai-tools
1fdc95e7cf Add agent state exclusions to changelog with a TODO for module migration 2025-09-26 21:39:23 +03:00
vegu-ai-tools
ab956f25a7 linting 2025-09-26 20:15:51 +03:00
vegu-ai-tools
cbbc843f86 Add AdvanceTime node to world state for time advancement with duration and narration instructions 2025-09-26 19:27:26 +03:00
vegu-ai-tools
028b0abf53 Update advance_time method to include return type annotation and return message 2025-09-26 19:27:13 +03:00
vegu-ai-tools
a7bbabbcad Add IsoDateDuration node for ISO 8601 interval string construction 2025-09-26 19:26:59 +03:00
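A small illustration of assembling an ISO 8601 duration string such as "P1DT2H30M" from components, in the spirit of the IsoDateDuration node described above (assumed inputs; not the node's actual implementation):

```python
def iso_duration(days: int = 0, hours: int = 0, minutes: int = 0, seconds: int = 0) -> str:
    """Build an ISO 8601 duration string, e.g. iso_duration(days=1, hours=2) -> 'P1DT2H'."""
    date_part = f"{days}D" if days else ""
    time_part = "".join(
        f"{value}{unit}"
        for value, unit in ((hours, "H"), (minutes, "M"), (seconds, "S"))
        if value
    )
    if not date_part and not time_part:
        return "PT0S"
    return f"P{date_part}" + (f"T{time_part}" if time_part else "")
```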
vegu-ai-tools
e61baa6a5a narrate time action now has access to response length instructions 2025-09-26 19:26:50 +03:00
vegu-ai-tools
56b8e033ba Enhance type hints for duration conversion functions in time.py 2025-09-26 19:26:23 +03:00
vegu-ai-tools
1dd796aeb2 more prompt fixes 2025-09-26 18:23:35 +03:00
vegu-ai-tools
8d824acebc director chat prompt simplifications 2025-09-26 17:53:22 +03:00
vegu-ai-tools
2a1bd5864f prompt tweaks 2025-09-26 17:33:40 +03:00
vegu-ai-tools
e666cfe81a Update icons in NodeEditorLibrary and NodeEditorModuleProperties for improved UI clarity 2025-09-26 15:32:47 +03:00
vegu-ai-tools
d4182705e1 move module properties to navigation drawer 2025-09-26 15:30:59 +03:00
vegu-ai-tools
e57d5dbb73 linting 2025-09-26 14:12:23 +03:00
vegu-ai-tools
4fb5bc54ba move world state scene tools into sub component 2025-09-26 14:11:55 +03:00
vegu-ai-tools
f68610f315 remove legacy world state manager buttons 2025-09-26 14:04:40 +03:00
vegu-ai-tools
e0e81a3796 Add cleanup function for recent scenes in config to remove non-existent paths 2025-09-26 14:00:39 +03:00
vegu-ai-tools
c53bbf2693 linting 2025-09-26 13:56:06 +03:00
vegu-ai-tools
9c60d4c046 restore from backup tweaks and scene loading error handling improvements 2025-09-26 13:49:30 +03:00
vegu-ai-tools
a427b940b5 linting 2025-09-26 02:25:16 +03:00
vegu-ai-tools
63c992ab44 Refactor DirectorConsoleChatsToolbar to enhance UI with tooltips for persona and chat mode selection, improving user interaction and accessibility. 2025-09-26 02:24:54 +03:00
vegu-ai-tools
910764bde4 Add confirm write actions feature to chat context and UI components 2025-09-26 02:24:10 +03:00
vegu-ai-tools
7bbe5ced7e Implement auto-apply feature for input changes in GameState component, enhancing user experience by automatically committing changes after a brief delay. Update relevant methods to trigger auto-apply on various input events. 2025-09-26 02:13:45 +03:00
vegu-ai-tools
119dee7418 Enhance changelog functionality by adding delta type handling and improving commit behavior in InMemoryChangelog. Update tests to manually commit changes after appending deltas. 2025-09-26 02:04:29 +03:00
vegu-ai-tools
ddbb74a7a3 Implement unified refresh mechanism for active tabs in WorldStateManager, enhancing data loading for scene, characters, world, history, contextdb, pins, templates, and suggestions components. 2025-09-26 01:45:52 +03:00
vegu-ai-tools
64f7165fc8 refactor fork scene chip color and label in CharacterMessage and NarratorMessage components 2025-09-26 01:19:15 +03:00
vegu-ai-tools
5d008ae676 set default writing style in assistant mixin 2025-09-26 01:16:21 +03:00
vegu-ai-tools
62db9b1221 yield back to user on reject 2025-09-25 22:13:18 +03:00
vegu-ai-tools
a544367501 linting 2025-09-25 02:25:17 +03:00
vegu-ai-tools
314f24d23a pin decay 2025-09-25 02:25:10 +03:00
vegu-ai-tools
fbde0103bd linting 2025-09-25 01:43:26 +03:00
vegu-ai-tools
1c84ad76ea prompt tweaks 2025-09-25 01:41:24 +03:00
vegu-ai-tools
9c5d5cc322 set pins from context id 2025-09-25 01:41:04 +03:00
vegu-ai-tools
bf605604f0 append deltas on save 2025-09-25 01:38:51 +03:00
vegu-ai-tools
62d3aa25ca exclude world state from changelog 2025-09-25 01:38:39 +03:00
vegu-ai-tools
eb4e1426ac fixes 2025-09-24 22:42:09 +03:00
vegu-ai-tools
11f5242008 linting 2025-09-24 22:04:38 +03:00
vegu-ai-tools
4b0b252bfb Added scene activity check in DirectorChatActionConfirm to handle inactive scenes gracefully 2025-09-24 22:04:23 +03:00
vegu-ai-tools
85680a5285 linting 2025-09-24 21:24:01 +03:00
vegu-ai-tools
c8969d0fb7 fix UI flickering during quick agent workload swaps 2025-09-24 21:22:04 +03:00
vegu-ai-tools
3e5697d072 deleting scene should remove changelog files 2025-09-24 21:13:52 +03:00
vegu-ai-tools
e446b01ac8 linting 2025-09-24 20:58:41 +03:00
vegu-ai-tools
1cc73f1899 memory changelog context 2025-09-24 20:58:33 +03:00
vegu-ai-tools
238911630b fork scene use changelog when available 2025-09-24 18:41:57 +03:00
vegu-ai-tools
89c9364db3 reset ltm after restore 2025-09-24 17:14:07 +03:00
vegu-ai-tools
b291afefd0 restore from backup ux polish 2025-09-24 16:10:29 +03:00
vegu-ai-tools
08850a7cb3 linting 2025-09-24 14:08:59 +03:00
vegu-ai-tools
9732e90a5b no need to init changelog during scene load 2025-09-24 14:08:30 +03:00
vegu-ai-tools
e689b18088 replace auto backup with restore from changelog 2025-09-24 14:04:47 +03:00
vegu-ai-tools
5123cbbef7 unix timestamps 2025-09-24 12:13:15 +03:00
vegu-ai-tools
b19e8cc645 store rev with scene messages 2025-09-24 12:06:03 +03:00
vegu-ai-tools
681102116d split changelog revisions 2025-09-24 11:36:35 +03:00
vegu-ai-tools
1b0f738e0b linting 2025-09-24 02:06:22 +03:00
vegu-ai-tools
eb094cc4b5 changelog integration progress 2025-09-24 02:01:15 +03:00
vegu-ai-tools
c434fb5a78 add deepdiff 2025-09-24 02:01:04 +03:00
vegu-ai-tools
6a37f673f6 changelog system 2025-09-24 02:00:53 +03:00
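These commits introduce a deepdiff-based changelog. A hedged sketch of the delta pattern involved (not the actual InMemoryChangelog API):

```python
from deepdiff import DeepDiff, Delta


def record_delta(old_state: dict, new_state: dict) -> Delta:
    """Capture the difference between two scene snapshots as a replayable delta."""
    return Delta(DeepDiff(old_state, new_state))


def replay(base_state: dict, deltas: list[Delta]) -> dict:
    """Rebuild the latest state by applying recorded deltas in order."""
    state = base_state
    for delta in deltas:
        state = state + delta  # deepdiff's Delta supports '+' application
    return state
```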
vegu-ai-tools
9cd5434a58 update argument_instructions and instructions fields to allow None values in Callback model 2025-09-23 23:22:16 +03:00
vegu-ai-tools
4fdebcb803 fix broken save world entry node 2025-09-23 17:12:34 +03:00
vegu-ai-tools
145be1096e bring manual commit back 2025-09-23 15:35:06 +03:00
vegu-ai-tools
05b3065ed2 auto commit 2025-09-23 15:22:52 +03:00
vegu-ai-tools
6d7b1cb063 ux tweaks 2025-09-23 13:31:59 +03:00
vegu-ai-tools
dd9a8f8ad4 linting 2025-09-23 02:25:59 +03:00
vegu-ai-tools
d4fcd724e3 cleanup 2025-09-23 02:25:49 +03:00
vegu-ai-tools
c0d3d7f14f gamestate editor 2025-09-23 01:58:38 +03:00
vegu-ai-tools
3ce834a432 linting 2025-09-22 13:47:23 +03:00
vegu-ai-tools
dc84c416b3 clean up rag config ux 2025-09-22 13:47:08 +03:00
vegu-ai-tools
e182a178c2 rag improvements 2025-09-22 13:33:20 +03:00
vegu-ai-tools
2b3251b46c always use semantic similarity retrieval since it's fast 2025-09-22 12:57:52 +03:00
vegu-ai-tools
09f136dfb9 Add sentence compilation functions to dedupe module
- Introduced `compile_sentences_to_length` to join sentences into chunks of a specified length.
- Updated `__all__` to include new functions for improved usability in text processing.
2025-09-22 12:57:21 +03:00
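A minimal sketch of what compile_sentences_to_length might do: greedily joining sentences into chunks that stay under a target character length (assumed signature; the dedupe module's real function may differ):

```python
def compile_sentences_to_length(sentences: list[str], max_length: int) -> list[str]:
    """Join sentences into chunks no longer than max_length characters (best effort)."""
    chunks: list[str] = []
    current = ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip()
        if current and len(candidate) > max_length:
            chunks.append(current)
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```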
vegu-ai-tools
3c7c5565f0 update confirm message 2025-09-22 00:09:18 +03:00
vegu-ai-tools
19af0e8156 Add read-only alert and conditional rendering in WorldStateManagerContextDB component
- Introduced a v-alert to notify users when the Context Database is in read-only mode.
- Updated button and table cell rendering to conditionally display based on the read-only state, enhancing user experience and clarity.
2025-09-22 00:07:06 +03:00
vegu-ai-tools
6baadc544d Remove the ability to add entries in WorldStateManagerContextDB component 2025-09-21 22:47:43 +03:00
vegu-ai-tools
bc072d9b68 fail safe for Listen nodes to avoid infinite failure cascade 2025-09-21 22:26:56 +03:00
vegu-ai-tools
67487b227e fix issue where inline function would not emit note state updates 2025-09-21 22:02:55 +03:00
vegu-ai-tools
4d4ca5e2cb ux tweaks 2025-09-21 21:57:14 +03:00
vegu-ai-tools
ed3f725b17 Add Copy Module Functionality to NodeEditor and NodeEditorLibrary
- Introduced a new button in NodeEditor for copying locked modules to the editable scene.
- Added a method to NodeEditorLibrary for pre-filling module details when copying to the scene, enhancing user workflow and module management.
2025-09-21 21:49:35 +03:00
vegu-ai-tools
4646bad50e Update Node Deletion Logic in NodeEditorLibrary
- Refined the condition for checking if the first module is deleted to ensure it only triggers when there are listed nodes, enhancing the robustness of node selection after deletions.
2025-09-21 20:50:29 +03:00
vegu-ai-tools
2fe1f4ff82 Enhance Node Deletion Handling in NodeEditorLibrary
- Updated logic to prevent selecting a deleted node as the first module in the list, ensuring a smoother user experience when nodes are removed.
2025-09-21 20:47:22 +03:00
vegu-ai-tools
533a618658 Remove console.log statements from DirectorConsoleChatMessageActionResult and NodeEditor components 2025-09-21 20:40:56 +03:00
vegu-ai-tools
d59807a68f - Introduced a unique ID for each Scene instance to enhance identification and tracking.
- Updated TalemateApp to manage the display of the new scene setup modal based on unique scene IDs, ensuring it only shows once per unique scene.
2025-09-21 01:21:00 +03:00
vegu-ai-tools
a497775de6 Enhance NodeEditor and TalemateApp Components
- Added 'toggle-scene-view' emit to NodeEditor for improved scene management.
- Wrapped SceneMessages component in a div within TalemateApp for better visibility control based on scene view state.
2025-09-21 01:12:06 +03:00
vegu-ai-tools
599d7115e0 Add New Scene Setup Modal Component
- Introduced a new modal for setting up scenes, allowing users to select writing styles and director personas.
- Integrated the modal into the TalemateApp component, enabling it to display when a new scene is detected.
- Added functionality to manage templates for writing styles and director personas within the modal.
- Implemented data handling for scene properties and user interactions, enhancing the user experience during scene creation.
2025-09-21 01:05:04 +03:00
veguAI
5f26134647 Director chat (#17)
* clarify instructions

* increase default context length for attribute gen

* character progression node

* Call Agent Function conditional

* director chat tweaks

* fix issue where graph property editing would reload the graph and lose unsaved changes

* prompt tweaks

* character name optional

* use blur

* prompt tweaks

* director actions

* rename reason to instructions

* fix argument node type conversion

* prompt tweaks

* director action create character node improved

* linting

* scene budget and function name reorg

* memory nodes

* prompt tweaks

* get_arguments_node allow filter fn override

* query world information node

* smarter dict collector key values

* linting

* dedicated director action argument node

* node style

* FunctionWrapper find_nodes and first_node
CallForEach now supports dict in items socket

* focal improvements

* world entry management nodes

* linting

* director action change world information

* instruction tweaks

* director action confirmation flow

* raise on reject

* director action confirmation progress

* single chat

* polish ux

* separation of components

* ux polish

* tweaks

* agent personas

* linting

* initial chat message override in persona

* agent persona to system message

* linting

* director chat compaction

* linting

* fix codeblock linebreaks

* prompt tweaks

* error message socket

* get scene types node

* collect None values

* linting

* director action nodes for scene direction management

* prompt tweaks

* fix issue of director chat not working right on new scenes

* prompt tweaks

* director action summary node

* rename to content classification

* scene nodes to get/set intro, desc, title, content classification

* linting

* allow some extra calls

* director action nodes

* fix query contextdb to use iterate value

* director action nodes

* linting

* fix double cancellation issue on websocket plugin handlers

* fix node editor losing changes when switching to a different talemate tab and back

* fix resize handler

* fix group overlap bug during snap action

* clear validation messages from property editor

* improve node search matching

* fix dynamic socket issues

* cleanup

* allow hot reload of new DA or command modules

* linting

* fix tests

* director modes

* allow changing director persona from chat interface

* tweaks

* separate state reinf component

* cleanup

* separate components for char attrib and details

* separate component for spirce collection

* separate writing style component

* cleanup

* remove applicable_agents

* persist chat mode

* linting

* ux tweaks

* director chat better context management

* wording

* display budgets in UX

* reorg

* Validate Is Not Set Node

* character nodes

* add extra output sockets

* fix compact error

* fix compact error

* fancy diffs

* nodes updated

* summarizer settings node

* fix type hint

* add useful output sockets

* history archive nodes

* add dedupe_enabled and response_length properties

* prompt tweaks

* linting

* nodes

* prompt tweaks

* prompt tweaks

* linting

* better instruct_character actions

* fix Get node to work with tuples and sets

* query character information should include details

* lint

* tweak instructions

* context id impl

* context id

* fix registry id

* context id socket

* context id

* build prompt improvements

* extract list node

* context_id as socket type

* remove empty separators

* linting

* character context

* Fix advanced format always executing

* expose context id on character attrib

* CombineList node

* Dynamic instructions node can now be fed a list of strings

* return the context id object

* expose context id when unpacking memory doc

* progress on improved direction query action

* linting

* tweaks

* fix dynamic sockets not being copied during node clone

* fix dynamic sockets being lost when creating module from selection

* fix nodes spawning in too small to contain title

* sort choices

* hide prop value if related socket is connected

* shorten character context ids

* fix ai function type conversion issue that would cast everything to str

* hash context id

* context id value

* tests

* linting

* rename and tests

* linting

* refactor context id handler a bit

* context id shenanigans

* fix tests

* cleanup

* remove unused imports

* history context id handler

* refactor context id into proper module structure

* linting

* retrieve context

* world entry context ids

* story config context ids

* story config context

* linting

* no longer needed

* context id progress

* fix tests

* scene type inspection context ids

* linting

* prompt tweaks

* prompt tweaks

* shift+alt drag node for counterpart creation

* node property editor will now confirm close if it has changes

* transfer name property

* node counterpart fixes

* counterpart copy size fixes

* character_status socket

* fix director confirm node error when called outside of director chat context

* if input and output socket counterpart def

* prompt tweaks

* director action nodes

* no longer needed

* instruct character creation

* fix title

* toggle character

* linting

* GPT-OSS base template

* pass reasoning_tokens to model_prompt

* pass reasoning_tokens to model prompt template

* gpt-oss preset

* to warning

* prompt tweaks

* prompt tweaks

* wording

* prompt tweaks

* pass through error message

* new exceptions

* clean up

* status field

* better response parsing

* linting

* add sockets and field to GetWorldEntry

* auto size node to fit all the widgets when adding to the graph

* contextual generate node correctly validates context_name as required

* fix issue where alt dragging a single node wouldn't work if other nodes were selected

* create group from node selection clears the selection after creation

* group from selected nodes - presets

* fix ctrl enter in text properties adding extra newline

* mark unresolved required sockets

* linting

* fix issue where connections were incorrectly flagged as unresolved

* Add GetCharacterDetail and SetCharacterDetail nodes to manage character details

- Introduced GetCharacterDetail node to retrieve character details based on a name input.
- Added SetCharacterDetail node to set character details with specified name and value.
- Updated existing GetCharacterAttribute node to handle cases where attributes may not exist, ensuring safe access to context IDs.

* Add Character Context Module and Update Instruct Character Updates

- Introduced a new character context module with various nodes for managing character attributes, details, and descriptions.
- Removed obsolete nodes from the instruct-character-updates module to streamline functionality.
- Adjusted node positions and properties for better organization and clarity in the graph structure.

* linting

* some context id stuff

* linting

* determine character dialogue instructions can now rewrite existing instructions

* allow none values

* context id node fixes

* prompt tweaks

* Add CounterPart functionality to DefineFunction and GetFunction nodes

* linting

* character config updates

* module sync

* dialogue_instructions -> acting_instructions

* linting

* story config updates

* fix tests

* remove old action

* scan context ids

* director action tweaks

* Add scene_type_ids output to GetSceneTypes node

* director action nodes

* linting

* director agent nodes

* director action direct scene

* nodes

* nodes

* context id collector

* linting

* Handle empty content in DynamicInstruction string representation

* Add new color items "Prepare" and "Special" to recent nodes context menu

* Rename and separate hook processing methods in FocalContext for clarity

* Refactor action result handling in DirectorChatMixin to improve feedback mechanism and streamline chat state updates

* Add custom instructions feature to DirectorChatMixin and update chat instructions template to display them. Refactor existing action configurations for clarity.

* Add chat common tasks and limitations template for Director. Include scenarios for story creation, character behavior, memory issues, and repetition handling.

* Update chat template to clarify the role of the Director and include Talemate system context. Add common tasks template for enhanced chat functionality.

* prompt tweaks

* Add scene code block processing to DirectorConsoleChatMessageMarkdown component. Enhance markdown rendering by integrating scene text parsing and updating styles for scene blocks.

* Enhance NodeEditor and NodeEditorLibrary components with a new module library drawer and improved node display. Introduce tree view for better organization of modules, including scenes and agents, and update node display labels for clarity. Refactor resizing logic for the editor container.

* Implement scene view toggle and exit confirmation in NodeEditor. Move creative mode controls from TalemateApp to NodeEditor, enhancing user experience with new buttons and confirmation prompts for exiting creative mode.

* linting
2025-09-21 00:18:57 +03:00
vegu-ai-tools
b971c3044d restore from backup function 2025-08-30 01:06:20 +03:00
vegu-ai-tools
f3d02530d5 linting 2025-08-30 00:39:13 +03:00
vegu-ai-tools
9bf08b1f00 auto backup 2025-08-30 00:38:18 +03:00
vegu-ai-tools
b0f1b7307c reserved property names 2025-08-30 00:02:34 +03:00
vegu-ai-tools
9134c0cc26 node styles 2025-08-26 22:00:08 +03:00
vegu-ai-tools
3141e53eac don't need this 2025-08-26 21:52:11 +03:00
vegu-ai-tools
307439b210 tweaks and tests 2025-08-26 21:50:05 +03:00
vegu-ai-tools
ab3f4f3b2e dict collector improvements 2025-08-26 21:36:15 +03:00
vegu-ai-tools
b862159aef streamline add / remove dyn socket 2025-08-26 21:18:20 +03:00
vegu-ai-tools
8f4aa75e09 fixes 2025-08-26 21:06:26 +03:00
vegu-ai-tools
7320196ac6 garbage 2025-08-26 16:44:35 +03:00
vegu-ai-tools
e7b949c443 set 0.33.0 2025-08-26 10:20:35 +03:00
vegu-ai-tools
7ffbfe8d0d fix alt-drag to clone single node 2025-08-26 10:15:43 +03:00
veguAI
72867c930e 0.32.3 (#219)
* set 0.33.0

* fix nodeeditor context menu issues

* notes
2025-08-24 17:17:08 +03:00
veguAI
eddddd5034 0.32.2 (#216)
* fix api url error in koboldcpp

* set 0.32.2

* relock

* add 0.32.2 note
2025-08-23 19:20:39 +03:00
veguAI
25e646c56a 0.32.1 (#213)
* GLM 4.5 templates

* set 0.33 and relock

* fix issues with character creation

* relock

* prompt tweaks

* fix lmstudio

* fix issue with npm on windows failing on paths
set 0.32.1

* linting

* update what's new

* #214 (#215)

* max-height and overflow

* max-height and overflow

* v-tabs to list and offset new scrollbar at the top so it doesn't overlap into the divider

* tweaks

* tweaks

* prompt tweaks

---------

Co-authored-by: Iceman Oakenbear <89090218+IcemanOakenbear@users.noreply.github.com>
2025-08-23 01:16:18 +03:00
veguAI
ce4c302d73 0.32.0 (#208)
* separate other tts apis and improve chunking

* move old tts config to voice agent config and implement config widget ux elements for table editing

* elevenlabs updated to use their client and expose model selection

* linting

* separate character class into character.py and start on voice routing

* linting

* tts hot swapping and chunking improvements

* linting

* add support for piper-tts

* update gitignore

* linting

* support google tts
fix issue where quick_toggle agent config didn't work on standard config items

* linting

* only show agent quick toggles if the agent is enabled

* change elevenlabs to use a locally maintained voice list

* tts generate before / after events

* voice library refactor

* linting

* update openai model and voices

* tweak configs

* voice library ux

* linting

* add support for kokoro tts

* fix add / remove voice

* voice library tags

* linting

* linting

* tts api status

* api infos and add more kokoro voices

* allow voice testing before saving a new voice

* tweaks to voice library ux and some api info text

* linting

* voice mixer

* polish

* voice files go into /tts instead of templates/voice

* change default narrator voice

* xtts confirmation note

* character voice select

* koboldai format template

* polish

* skip empty chunks

* change default voice

* replace em-dash with normal dash

* adjust limit

* replace linebreaks

* chunk cleanup for whitespace

* info updated

* remove invalid endif tag

* sort voices by ready api

* Character hashable type

* clarify set_simulated_environment use to avoid unwanted character deactivation

* allow manual generation of tts and fix assorted issues with tts

* tts websocket handler router renamed

* voice mixer: when there are only 2 voices auto adjust the other weight as needed

* separate persist character functions into own mixin

* auto assign voices

* fix chara load and auto assign voice during chara load

* smart speaker separation

* tts speaker separation config

* generate tts for intro text

* fix prompting issues with anthropic, google and openrouter clients

* decensor flag off again

* only to ai assisted voice markup on narrator messages

* openrouter provider configuration

* linting

* improved sound controls

* add support for chatterbox

* fix info

* chatterbox dependencies

* remove piper and xtts2

* linting

* voice params

* linting

* tts model overrides and move tts info to tab

* reorg toolbar

* allow overriding of test text

* more tts fixes, apply intensity, chatterbox voices

* confirm voice delete

* linting

* groq updates

* reorg decorators

* tts fixes

* cancelable audio queue

* voice library uploads

* scene voice library

* Config refactor (#13)

* config refactor progress

* config nuke continues

* fix system prompts

* linting

* client fun

* client config refactor

* fix kcpp auto embedding selection

* linting

* fix proxy config

* remove cruft

* fix remaining client bugs from config refactor
always use get_config(), don't keep an instance reference

* support for reasoning models

* more reasoning tweaks

* only allow one frontend to connect at a time

* fix tests

* relock

* relock

* more client adjustments

* pattern prefill

* some tts agent fixes

* fix ai assist cond

* tts nodes

* fix config retrieval

* assign voice node and fixes

* sim suite char gen assign voice

* fix voice assign template to consider used voices

* get rid of auto break repetition which wasn't working right for a while anyhow

* linting

* generate tts node
as string node

* linting

* voice change on character event

* tweak chatterbox max length

* koboldai default template

* linting

* fix saving of existing voice

* relock

* adjust params of eva default voice

* f5tts support

* f5tts samples

* f5tts support

* f5tts tweaks

* chunk size per tts api and reorg defaul f5tts voices

* chatterbox default voice reorg to match f5-tts default voices

* voice library ux polish pass

* cleanup

* f5-tts tweaks

* missing samples

* get rid of old save cmd

* add chatterbox and f5tts

* housekeeping

* fix some issues with world entry editing

* remove cruft

* replace exclamation marks

* fix save immutable check

* fix replace_exclamation_marks

* better error handling in websocket plugins and fix issue with saves

* agent config save on dialog close

* ctrl click to disable / enable agents

* fix quick config

* allow modifying response size of focal requests

* sim suite set goal always sets story intent, encourage calling of set goal during simulation start

* allow setting of model

* voice param tweaks

* tts tweaks

* fix character card load

* fix note_on_value

* add mixed speaker_separation mode

* indicate which message the audio is for and provide way to stop audio from the message

* fix issue with some tts generation failing

* linting

* fix speaker separate modes

* bad idea

* linting

* refactor speaker separation prompt

* add kimi think pattern

* fix issue with unwanted cover image replacement

* no scene analysis for visual prompt generation (for now)

* linting

* tts for context investigation messages

* prompt tweaks

* tweak intro

* fix intro text tts not auto playing sometimes

* consider narrator voice when assigning voice to a character

* allow director log messages to go only into the director console

* linting

* startup performance fixes

* init time

* linting

* only show audio control for messages that can have it

* always create story intent and don't override existing saves during character card load

* fix history check in dynamic story line node
add HasHistory node

* linting

* fix intro message not having speaker separation

* voice library character manager

* sequential and cancelable auto assign all

* linting

* fix generation cancel handling

* tooltips

* fix auto assign voice from scene voices

* polish

* kokoro does not like lazy import

* update info text

* complete scene export / import

* linting

* wording

* remove cruft

* fix story intent generation during character card import

* fix generation cancelled emit status infinite loop

* prompt tweak

* reasoning quick toggle, reasoning token slider, tooltips

* improved reasoning pattern handling

* fix indirect coercion response parsing

* fix streaming issue

* response length instructions

* more robust streaming

* adjust default

* adjust formatting

* linting

* remove debug output

* director console log function calls

* install cuda script updated

* linting

* add another step

* adjust default

* update dialogue examples

* fix voice selection issues

* what's happening here

* third time's the charm?

* Vite migration (#207)

* add vite config

* replace babel, webpack, vue-cli deps with vite, switch to esm modules, separate eslint config

* change process.env to import.meta.env

* update index.html for vite and move to root

* update docs for vite

* remove vue cli config

* update example env with vite

* bump frontend deps after rebase to 32.0

---------

Co-authored-by: pax-co <Pax_801@proton.me>

* properly reference data type

* what's new

* better indication of dialogue example supporting multiple lines, improve dialogue example display

* fix potential issue with cached scene analysis being reused when it shouldn't

* fix character creation issues with player character toggle

* fix issue where editing a message would sometimes lose parts of the message

* fix slider ux thumb labels (vuetify update)

* relock

* narrative conversation format

* remove planning step

* linting

* tweaks

* don't overthink

* update dialogue examples and intro

* don't dictate response length instructions when data structures are expected

* prompt tweaks

* prompt tweaks

* linting

* fix edit message not handling : well

* prompt tweaks

* fix tests

* fix manual revision when character message was generated in new narrative mode

* fix issue with message editing

* Docker packages release (#204)

* add CI workflow for Docker image build and MkDocs deployment

* rename CI workflow from 'ci' to 'package'

* refactor CI workflow: consolidate container build and documentation deployment into a single file

* fix: correct indentation for permissions in CI workflow

* fix: correct indentation for steps in deploy-docs job in CI workflow

* build both cpu and cuda image

* docs

* docs

* expose writing style during state reinforcement

* prompt tweaks

* test container build

* test container image

* update docker compose

* docs

* test-container-build

* test container build

* test container build

* update docker build workflows

* fix guidance prompt prefix not being dropped

* mount tts dir

* add gpt-5

* remove debug output

* docs

* openai auto toggle reasoning based on model selection

* linting

---------

Co-authored-by: pax-co <123330830+pax-co@users.noreply.github.com>
Co-authored-by: pax-co <Pax_801@proton.me>
Co-authored-by: Luis Alexandre Deschamps Brandão <brandao_luis@yahoo.com>
2025-08-08 13:56:29 +03:00
vegu-ai-tools
685ca994f9 linting can be done at merge 2025-07-06 20:32:40 +03:00
vegu-ai-tools
285b0699ab contributing.md 2025-07-06 18:41:44 +03:00
vegu-ai-tools
7825489cfc add contributing.md 2025-07-06 18:37:00 +03:00
veguAI
fb2fa31f13 linting
* precommit

* linting

* add linting to workflow

* ruff.toml added
2025-06-29 19:51:08 +03:00
585 changed files with 80743 additions and 29301 deletions

View File

@@ -1,30 +1,57 @@
name: ci
name: ci
on:
push:
branches:
- master
- main
- prep-0.26.0
- master
release:
types: [published]
permissions:
contents: write
packages: write
jobs:
deploy:
container-build:
if: github.event_name == 'release'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Configure Git Credentials
run: |
git config user.name github-actions[bot]
git config user.email 41898282+github-actions[bot]@users.noreply.github.com
- uses: actions/setup-python@v5
- name: Log in to GHCR
uses: docker/login-action@v3
with:
python-version: 3.x
- run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build & push
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
push: true
tags: |
ghcr.io/${{ github.repository }}:latest
ghcr.io/${{ github.repository }}:${{ github.ref_name }}
deploy-docs:
if: github.event_name == 'release'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Configure Git credentials
run: |
git config user.name "github-actions[bot]"
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
- uses: actions/setup-python@v5
with: { python-version: '3.x' }
- run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
- uses: actions/cache@v4
with:
key: mkdocs-material-${{ env.cache_id }}
path: .cache
restore-keys: |
mkdocs-material-
restore-keys: mkdocs-material-
- run: pip install mkdocs-material mkdocs-awesome-pages-plugin mkdocs-glightbox
- run: mkdocs gh-deploy --force

View File

@@ -0,0 +1,32 @@
name: test-container-build
on:
push:
branches: [ 'prep-*' ]
permissions:
contents: read
packages: write
jobs:
container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Log in to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build & push
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
push: true
# Tag with prep suffix to avoid conflicts with production
tags: |
ghcr.io/${{ github.repository }}:${{ github.ref_name }}

View File

@@ -42,6 +42,11 @@ jobs:
source .venv/bin/activate
uv pip install -e ".[dev]"
- name: Run linting
run: |
source .venv/bin/activate
uv run pre-commit run --all-files
- name: Setup configuration file
run: |
cp config.example.yaml config.yaml

13
.gitignore vendored
View File

@@ -8,11 +8,20 @@
talemate_env
chroma
config.yaml
.cursor
.claude
# uv
.venv/
templates/llm-prompt/user/*.jinja2
templates/world-state/*.yaml
tts/voice/piper/*.onnx
tts/voice/piper/*.json
tts/voice/kokoro/*.pt
tts/voice/xtts2/*.wav
tts/voice/chatterbox/*.wav
tts/voice/f5tts/*.wav
tts/voice/voice-library.json
scenes/
!scenes/infinity-quest-dynamic-scenario/
!scenes/infinity-quest-dynamic-scenario/assets/
@@ -21,4 +30,6 @@ scenes/
!scenes/infinity-quest/assets/
!scenes/infinity-quest/infinity-quest.json
tts_voice_samples/*.wav
third-party-docs/
third-party-docs/
legacy-state-reinforcements.yaml
CLAUDE.md

16
.pre-commit-config.yaml Normal file
View File

@@ -0,0 +1,16 @@
fail_fast: false
exclude: |
(?x)^(
tests/data/.*
|install-utils/.*
)$
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.12.1
hooks:
# Run the linter.
- id: ruff
args: [ --fix ]
# Run the formatter.
- id: ruff-format

64
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,64 @@
# Contributing to Talemate
## About This Project
Talemate is a **personal hobbyist project** that I maintain in my spare time. While I appreciate the community's interest and contributions, please understand that:
- This is primarily a passion project that I enjoy working on myself
- I have limited time for code reviews and prefer to spend that time developing fixes or new features myself
- Large contributions require significant review and testing time that takes away from my own development
For these reasons, I've established contribution guidelines that balance community involvement with my desire to actively develop the project myself.
## Contribution Policy
**I welcome small bugfix and small feature pull requests!** If you've found a bug and have a fix, or have a small feature improvement, I'd love to review it.
However, please note that **I am not accepting large refactors or major feature additions** at this time. This includes:
- Major architectural changes
- Large new features or significant functionality additions
- Large-scale code reorganization
- Breaking API changes
- Features that would require significant maintenance
## What is accepted
**Small bugfixes** - Fixes for specific, isolated bugs
**Small features** - Minor improvements that don't break existing functionality
**Documentation fixes** - Typo corrections, clarifications in existing docs
**Minor dependency updates** - Security patches or minor version bumps
## What is not accepted
**Major features** - Large new functionality or systems
**Large refactors** - Code reorganization or architectural changes
**Breaking changes** - Any changes that break existing functionality
**Major dependency changes** - Framework upgrades or replacements
## Submitting a PR
If you'd like to submit a bugfix or small feature:
1. **Open an issue first** - Describe the bug you've found or feature you'd like to add
2. **Keep it small** - Focus on one specific issue or small improvement
3. **Follow existing code style** - Match the project's current patterns
4. **Don't break existing functionality** - Ensure all existing tests pass
5. **Include tests** - Add or update tests that verify your fix or feature
6. **Update documentation** - If your changes affect behavior, update relevant docs
## Testing
Ensure all tests pass by running:
```bash
uv run pytest tests/ -p no:warnings
```
## Questions?
If you're unsure whether your contribution would be welcome, please open an issue to discuss it first. This saves everyone time and ensures alignment with the project's direction.

View File

@@ -35,18 +35,9 @@ COPY pyproject.toml uv.lock /app/
# Copy the Python source code (needed for editable install)
COPY ./src /app/src
# Create virtual environment and install dependencies
# Create virtual environment and install dependencies (includes CUDA support via pyproject.toml)
RUN uv sync
# Conditional PyTorch+CUDA install
ARG CUDA_AVAILABLE=false
RUN . /app/.venv/bin/activate && \
if [ "$CUDA_AVAILABLE" = "true" ]; then \
echo "Installing PyTorch with CUDA support..." && \
uv pip uninstall torch torchaudio && \
uv pip install torch~=2.7.0 torchaudio~=2.7.0 --index-url https://download.pytorch.org/whl/cu128; \
fi
# Stage 3: Final image
FROM python:3.11-slim

20
docker-compose.manual.yml Normal file
View File

@@ -0,0 +1,20 @@
version: '3.8'
services:
talemate:
build:
context: .
dockerfile: Dockerfile
ports:
- "${FRONTEND_PORT:-8080}:8080"
- "${BACKEND_PORT:-5050}:5050"
volumes:
- ./config.yaml:/app/config.yaml
- ./scenes:/app/scenes
- ./templates:/app/templates
- ./chroma:/app/chroma
- ./tts:/app/tts
environment:
- PYTHONUNBUFFERED=1
- PYTHONPATH=/app/src:$PYTHONPATH
command: ["uv", "run", "src/talemate/server/run.py", "runserver", "--host", "0.0.0.0", "--port", "5050", "--frontend-host", "0.0.0.0", "--frontend-port", "8080"]

View File

@@ -2,11 +2,7 @@ version: '3.8'
services:
talemate:
build:
context: .
dockerfile: Dockerfile
args:
- CUDA_AVAILABLE=${CUDA_AVAILABLE:-false}
image: ghcr.io/vegu-ai/talemate:latest
ports:
- "${FRONTEND_PORT:-8080}:8080"
- "${BACKEND_PORT:-5050}:5050"
@@ -15,6 +11,7 @@ services:
- ./scenes:/app/scenes
- ./templates:/app/templates
- ./chroma:/app/chroma
- ./tts:/app/tts
environment:
- PYTHONUNBUFFERED=1
- PYTHONPATH=/app/src:$PYTHONPATH

View File

@@ -1,60 +1,63 @@
import os
import re
import subprocess
from pathlib import Path
import argparse
def find_image_references(md_file):
"""Find all image references in a markdown file."""
with open(md_file, 'r', encoding='utf-8') as f:
with open(md_file, "r", encoding="utf-8") as f:
content = f.read()
pattern = r'!\[.*?\]\((.*?)\)'
pattern = r"!\[.*?\]\((.*?)\)"
matches = re.findall(pattern, content)
cleaned_paths = []
for match in matches:
path = match.lstrip('/')
if 'img/' in path:
path = path[path.index('img/') + 4:]
path = match.lstrip("/")
if "img/" in path:
path = path[path.index("img/") + 4 :]
# Only keep references to versioned images
parts = os.path.normpath(path).split(os.sep)
if len(parts) >= 2 and parts[0].replace('.', '').isdigit():
if len(parts) >= 2 and parts[0].replace(".", "").isdigit():
cleaned_paths.append(path)
return cleaned_paths
def scan_markdown_files(docs_dir):
"""Recursively scan all markdown files in the docs directory."""
md_files = []
for root, _, files in os.walk(docs_dir):
for file in files:
if file.endswith('.md'):
if file.endswith(".md"):
md_files.append(os.path.join(root, file))
return md_files
def find_all_images(img_dir):
"""Find all image files in version subdirectories."""
image_files = []
for root, _, files in os.walk(img_dir):
# Get the relative path from img_dir to current directory
rel_dir = os.path.relpath(root, img_dir)
# Skip if we're in the root img directory
if rel_dir == '.':
if rel_dir == ".":
continue
# Check if the immediate parent directory is a version number
parent_dir = rel_dir.split(os.sep)[0]
if not parent_dir.replace('.', '').isdigit():
if not parent_dir.replace(".", "").isdigit():
continue
for file in files:
if file.lower().endswith(('.png', '.jpg', '.jpeg', '.gif', '.svg')):
if file.lower().endswith((".png", ".jpg", ".jpeg", ".gif", ".svg")):
rel_path = os.path.relpath(os.path.join(root, file), img_dir)
image_files.append(rel_path)
return image_files
def grep_check_image(docs_dir, image_path):
"""
Check if versioned image is referenced anywhere using grep.
@@ -65,33 +68,46 @@ def grep_check_image(docs_dir, image_path):
parts = os.path.normpath(image_path).split(os.sep)
version = parts[0] # e.g., "0.29.0"
filename = parts[-1] # e.g., "world-state-suggestions-2.png"
# For versioned images, require both version and filename to match
version_pattern = f"{version}.*{filename}"
try:
result = subprocess.run(
['grep', '-r', '-l', version_pattern, docs_dir],
["grep", "-r", "-l", version_pattern, docs_dir],
capture_output=True,
text=True
text=True,
)
if result.stdout.strip():
print(f"Found reference to {image_path} with version pattern: {version_pattern}")
print(
f"Found reference to {image_path} with version pattern: {version_pattern}"
)
return True
except subprocess.CalledProcessError:
pass
except Exception as e:
print(f"Error during grep check for {image_path}: {e}")
return False
def main():
parser = argparse.ArgumentParser(description='Find and optionally delete unused versioned images in MkDocs project')
parser.add_argument('--docs-dir', type=str, required=True, help='Path to the docs directory')
parser.add_argument('--img-dir', type=str, required=True, help='Path to the images directory')
parser.add_argument('--delete', action='store_true', help='Delete unused images')
parser.add_argument('--verbose', action='store_true', help='Show all found references and files')
parser.add_argument('--skip-grep', action='store_true', help='Skip the additional grep validation')
parser = argparse.ArgumentParser(
description="Find and optionally delete unused versioned images in MkDocs project"
)
parser.add_argument(
"--docs-dir", type=str, required=True, help="Path to the docs directory"
)
parser.add_argument(
"--img-dir", type=str, required=True, help="Path to the images directory"
)
parser.add_argument("--delete", action="store_true", help="Delete unused images")
parser.add_argument(
"--verbose", action="store_true", help="Show all found references and files"
)
parser.add_argument(
"--skip-grep", action="store_true", help="Skip the additional grep validation"
)
args = parser.parse_args()
# Convert paths to absolute paths
@@ -118,7 +134,7 @@ def main():
print("\nAll versioned image references found in markdown:")
for img in sorted(used_images):
print(f"- {img}")
print("\nAll versioned images in directory:")
for img in sorted(all_images):
print(f"- {img}")
@@ -133,9 +149,11 @@ def main():
for img in unused_images:
if not grep_check_image(docs_dir, img):
actually_unused.add(img)
if len(actually_unused) != len(unused_images):
print(f"\nGrep validation found {len(unused_images) - len(actually_unused)} additional image references!")
print(
f"\nGrep validation found {len(unused_images) - len(actually_unused)} additional image references!"
)
unused_images = actually_unused
# Report findings
@@ -148,7 +166,7 @@ def main():
print("\nUnused versioned images:")
for img in sorted(unused_images):
print(f"- {img}")
if args.delete:
print("\nDeleting unused versioned images...")
for img in unused_images:
@@ -162,5 +180,6 @@ def main():
else:
print("\nNo unused versioned images found!")
if __name__ == "__main__":
main()
main()

View File

@@ -4,12 +4,12 @@ from talemate.events import GameLoopEvent
import talemate.emit.async_signals
from talemate.emit import emit
@register()
class TestAgent(Agent):
agent_type = "test"
verbose_name = "Test"
def __init__(self, client):
self.client = client
self.is_enabled = True
@@ -20,7 +20,7 @@ class TestAgent(Agent):
description="Test",
),
}
@property
def enabled(self):
return self.is_enabled
@@ -36,7 +36,7 @@ class TestAgent(Agent):
def connect(self, scene):
super().connect(scene)
talemate.emit.async_signals.get("game_loop").connect(self.on_game_loop)
async def on_game_loop(self, emission: GameLoopEvent):
"""
Called on the beginning of every game loop
@@ -45,4 +45,8 @@ class TestAgent(Agent):
if not self.enabled:
return
emit("status", status="info", message="Annoying you with a test message every game loop.")
emit(
"status",
status="info",
message="Annoying you with a test message every game loop.",
)

View File

@@ -19,14 +19,17 @@ from talemate.config import Client as BaseClientConfig
log = structlog.get_logger("talemate.client.runpod_vllm")
class Defaults(pydantic.BaseModel):
max_token_length: int = 4096
model: str = ""
runpod_id: str = ""
class ClientConfig(BaseClientConfig):
runpod_id: str = ""
@register()
class RunPodVLLMClient(ClientBase):
client_type = "runpod_vllm"
@@ -49,7 +52,6 @@ class RunPodVLLMClient(ClientBase):
)
}
def __init__(self, model=None, runpod_id=None, **kwargs):
self.model_name = model
self.runpod_id = runpod_id
@@ -59,12 +61,10 @@ class RunPodVLLMClient(ClientBase):
def experimental(self):
return False
def set_client(self, **kwargs):
log.debug("set_client", kwargs=kwargs, runpod_id=self.runpod_id)
self.runpod_id = kwargs.get("runpod_id", self.runpod_id)
def tune_prompt_parameters(self, parameters: dict, kind: str):
super().tune_prompt_parameters(parameters, kind)
@@ -88,32 +88,37 @@ class RunPodVLLMClient(ClientBase):
self.log.debug("generate", prompt=prompt[:128] + " ...", parameters=parameters)
try:
async with aiohttp.ClientSession() as session:
endpoint = runpod.AsyncioEndpoint(self.runpod_id, session)
run_request = await endpoint.run({
"input": {
"prompt": prompt,
run_request = await endpoint.run(
{
"input": {
"prompt": prompt,
}
# "parameters": parameters
}
#"parameters": parameters
})
while (await run_request.status()) not in ["COMPLETED", "FAILED", "CANCELLED"]:
)
while (await run_request.status()) not in [
"COMPLETED",
"FAILED",
"CANCELLED",
]:
status = await run_request.status()
log.debug("generate", status=status)
await asyncio.sleep(0.1)
status = await run_request.status()
log.debug("generate", status=status)
response = await run_request.output()
log.debug("generate", response=response)
return response["choices"][0]["tokens"][0]
except Exception as e:
self.log.error("generate error", e=e)
emit(

View File

@@ -9,6 +9,7 @@ class Defaults(pydantic.BaseModel):
api_url: str = "http://localhost:1234"
max_token_length: int = 4096
@register()
class TestClient(ClientBase):
client_type = "test"
@@ -22,14 +23,13 @@ class TestClient(ClientBase):
self.client = AsyncOpenAI(base_url=self.api_url + "/v1", api_key="sk-1111")
def tune_prompt_parameters(self, parameters: dict, kind: str):
"""
Talemate adds a bunch of parameters to the prompt, but not all of them are valid for all clients.
This method is called before the prompt is sent to the client, and it allows the client to remove
any parameters that it doesn't support.
"""
super().tune_prompt_parameters(parameters, kind)
keys = list(parameters.keys())
@@ -41,11 +41,10 @@ class TestClient(ClientBase):
del parameters[key]
async def get_model_name(self):
"""
This should return the name of the model that is being used.
"""
return "Mock test model"
async def generate(self, prompt: str, parameters: dict, kind: str):

View File

@@ -27,10 +27,10 @@ uv run src\talemate\server\run.py runserver --host 0.0.0.0 --port 1234
### Letting the frontend know about the new host and port
Copy `talemate_frontend/example.env.development.local` to `talemate_frontend/.env.production.local` and edit the `VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL`.
Copy `talemate_frontend/example.env.development.local` to `talemate_frontend/.env.production.local` and edit the `VITE_TALEMATE_BACKEND_WEBSOCKET_URL`.
```env
VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL=ws://localhost:1234
VITE_TALEMATE_BACKEND_WEBSOCKET_URL=ws://localhost:1234
```
Next rebuild the frontend.

View File

@@ -1,22 +1,15 @@
!!! example "Experimental"
Talemate through docker has not received a lot of testing from me, so please let me know if you encounter any issues.
You can do so by creating an issue on the [:material-github: GitHub repository](https://github.com/vegu-ai/talemate)
## Quick install instructions
1. `git clone https://github.com/vegu-ai/talemate.git`
1. `cd talemate`
1. copy config file
1. linux: `cp config.example.yaml config.yaml`
1. windows: `copy config.example.yaml config.yaml`
1. If your host has a CUDA compatible Nvidia GPU
1. Windows (via PowerShell): `$env:CUDA_AVAILABLE="true"; docker compose up`
1. Linux: `CUDA_AVAILABLE=true docker compose up`
1. If your host does **NOT** have a CUDA compatible Nvidia GPU
1. Windows: `docker compose up`
1. Linux: `docker compose up`
1. windows: `copy config.example.yaml config.yaml` (or just copy the file and rename it via the file explorer)
1. `docker compose up`
1. Navigate your browser to http://localhost:8080
!!! info "Pre-built Images"
The default setup uses pre-built images from GitHub Container Registry that include CUDA support by default. To manually build the container instead, use `docker compose -f docker-compose.manual.yml up --build`.
!!! note
When connecting to local APIs running on the host machine (e.g. text-generation-webui), you need to use `host.docker.internal` as the hostname.

View File

@@ -12,14 +12,6 @@
!!! note "First start can take a while"
The initial download and dependency installation may take several minutes, especially on slow internet connections. The console will keep you updated; just wait until the Talemate logo shows up.
### Optional: CUDA support
If you have an NVIDIA GPU and want CUDA acceleration for larger embedding models:
1. Close Talemate (if it is running).
2. Double-click **`install-cuda.bat`**. This script swaps the CPU-only Torch build for the CUDA 12.8 build.
3. Start Talemate again via **`start.bat`**.
## Maintenance & advanced usage
| Script | Purpose |

(Binary image files added; previews not shown.)

View File

@@ -0,0 +1,125 @@
# Director Chat
!!! example "Experimental"
Currently experimental and may change substantially in the future.
Introduced in version 0.33.0, the director chat feature allows you to interact with the director agent directly once a scene is loaded.
As part of the chat session the director can query for information as well as make changes to the scene.
!!! warning "Strong model recommended"
In my personal testing I've found that while it's possible to have a coherent chat session with weaker models, the experience is going to be
significantly better with [reasoning enabled](/talemate/user-guide/clients/reasoning/) models past the 100B parameter mark.
This may change as smaller models get stronger and your mileage may vary.
!!! info "Chat settings"
You can customize various aspects of the director chat behavior in the [Director Chat settings](/talemate/user-guide/agents/director/settings/#director-chat), including response length, token budgets, and custom instructions.
## Accessing the director chat
Once a scene is loaded click the **:material-bullhorn:** director console icon in the top right corner of the screen.
![Director Console](/talemate/img/0.33.0/open-director-console.png)
![Director Console](/talemate/img/0.33.0/director-console-chat.png)
## Chat interface
The director chat provides a conversational interface where you can ask the director to perform various tasks, from querying information about your scene to making changes to characters, world entries, and progressing the story.
![Director Chat Interaction](/talemate/img/0.33.0/director-chat-interaction.png)
### What can you ask the director to do?
The director can help you with many tasks:
- Progress the story by generating new narration or dialogue
- Answer questions about your characters, world, or story details
- Create or modify characters, world entries, and story configuration
- Advance time in your story
- Manage game state variables (if your story uses them)
Simply describe what you want in natural language, and the director will figure out how to accomplish it.
### Viewing action details
When the director performs an action, you can expand it to see exactly what was done:
![Expanded Function Call](/talemate/img/0.33.0/director-chat-expanded-function-call.png)
This gives you full transparency into the changes being made to your scene.
## Chat modes
The director chat supports three different modes that control how the director behaves:
![Chat Mode Selection](/talemate/img/0.33.0/director-chat-mode.png)
!!! note
Chat mode behavior is not guaranteed and depends heavily on the model's ability to follow instructions. Stronger models, especially those with reasoning capabilities, will respect these modes much more consistently than weaker models.
### Normal mode
The default mode where the director can freely discuss the story and reveal information. It will ask for clarification when needed and take a more conversational approach.
### Decisive mode
In this mode, the director acts more confidently on your instructions and avoids asking for clarifications unless strictly necessary. Use this when you trust the director to make the right decisions autonomously.
### No Spoilers mode
This mode prevents the director from revealing information that could spoil the story. The director will still make changes and answer questions, but will be careful not to discuss plot points or details that should remain hidden.
## Write action confirmation
By default, the director will ask for confirmation before performing actions that modify your scene (like progressing the story or making significant changes).
![Confirm On](/talemate/img/0.33.0/director-chat-confirm-on.png)
You can toggle this behavior to allow the director to act without confirmation:
![Confirm Off](/talemate/img/0.33.0/director-chat-confirm-off.png)
!!! tip
Keep confirmation enabled when experimenting or when you want more control over changes. Disable it when you trust the director to act autonomously.
### Confirmation workflow example
When confirmation is enabled, the director will describe what it plans to do and wait for your approval:
![Confirmation Request](/talemate/img/0.33.0/director-chat-0001.png)
The confirmation dialog shows the instructions that will be sent and the expected result:
![Confirmation Dialog](/talemate/img/0.33.0/director-chat-0002.png)
Once confirmed, the action executes and new content is added to your scene:
![Action Approved](/talemate/img/0.33.0/director-chat-0003.png)
The director then analyzes the result and discusses what happened:
![Result Analysis](/talemate/img/0.33.0/director-chat-0004.png)
### Rejecting actions
You can also reject actions if you change your mind or want to revise your request:
![Action Rejection Request](/talemate/img/0.33.0/director-chat-reject-0001.png)
When rejected, the director acknowledges and waits for your next instruction:
![Action Rejected](/talemate/img/0.33.0/director-chat-reject-0002.png)
## Director personas
You can customize the director's personality and initial greeting by assigning a persona:
![Persona Selection](/talemate/img/0.33.0/director-chat-persona-0001.png)
Personas can completely change how the director presents itself and communicates with you:
![Persona Example](/talemate/img/0.33.0/director-chat-persona-0002.png)
To create or manage personas, select "Manage Personas" from the persona dropdown. You can define a custom description and initial chat message for each persona.

View File

@@ -154,4 +154,56 @@ Allows the director to evaluate the current scene phase and switch to a differen
The number of turns between evaluations. (0 = NEVER)
!!! note "Recommended to leave at 0 (never)"
This isn't really working well at this point, so recommended to leave at 0 (never)
This isn't really working well at this point, so recommended to leave at 0 (never)
## Director Chat
!!! example "Experimental"
Currently experimental and may change substantially in the future.
The [Director Chat](/talemate/user-guide/agents/director/chat) feature allows you to interact with the director through a conversational interface where you can ask questions, make changes to your scene, and direct story progression.
![Director Chat Settings](/talemate/img/0.33.0/director-agent-chat-settings.png)
##### Enable Analysis Step
When enabled, the director performs an internal analysis step before responding. This helps the director think through complex requests and plan actions more carefully.
!!! tip "Recommended for complex tasks"
Enable this when working on complex scene modifications or when you want more thoughtful responses. Disable it for simple queries to get faster responses.
##### Response token budget
Controls the maximum number of tokens the director can use for generating responses. Higher values allow for more detailed responses but use more tokens. Default is 2048.
##### Auto-iteration limit
The maximum number of action-response cycles the director can perform in a single interaction. For example, if set to 10, the director can execute actions and generate follow-up responses up to 10 times before requiring your input again. Default is 10.
##### Retries
The number of times the director will retry if it encounters an error during response generation. Default is 1.
##### Scene context ratio
Controls the fraction of the remaining token budget (after fixed context and instructions) that is reserved for scene context. The rest is allocated to chat history.
- **Lower values** (e.g., 0.30): 30% for scene context, 70% for chat history
- **Higher values** (e.g., 0.70): 70% for scene context, 30% for chat history
Default is 0.30.
##### Stale history share
When the chat history needs to be compacted (summarized), this controls what fraction of the chat history budget is treated as "stale" and should be summarized. The remaining portion is kept verbatim as recent messages.
- **Lower values** (e.g., 0.50): Summarize less (50%), keep more recent messages verbatim
- **Higher values** (e.g., 0.90): Summarize more (90%), keep fewer recent messages verbatim
Default is 0.70 (70% will be summarized when compaction is triggered).
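As a rough illustration of how these two settings interact, here is a minimal sketch with hypothetical numbers and helper names (not Talemate's actual budgeting code):

```python
def split_chat_budget(
    remaining_tokens: int,
    scene_context_ratio: float = 0.30,
    stale_history_share: float = 0.70,
) -> dict:
    """Illustrative split of the remaining token budget (hypothetical helper).

    scene_context_ratio: fraction reserved for scene context.
    stale_history_share: fraction of the chat-history budget that is
    summarized when compaction triggers; the rest stays verbatim.
    """
    scene_context = int(remaining_tokens * scene_context_ratio)
    chat_history = remaining_tokens - scene_context
    summarized = int(chat_history * stale_history_share)
    verbatim = chat_history - summarized
    return {
        "scene_context": scene_context,
        "chat_history": chat_history,
        "summarized_on_compaction": summarized,
        "kept_verbatim": verbatim,
    }

# With the defaults and, say, 10,000 remaining tokens:
# scene_context=3000, chat_history=7000, summarized=4900, kept_verbatim=2100
print(split_chat_budget(10_000))
```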
##### Custom instructions
Add custom instructions that will be included in all director chat prompts. Use this to customize the director's behavior for your specific scene or storytelling style.
For example, you might add instructions to maintain a particular tone, follow specific genre conventions, or handle certain types of requests in a particular way.

View File

@@ -0,0 +1,58 @@
# Chatterbox
Local zero shot voice cloning from .wav files.
![Chatterbox API settings](/talemate/img/0.32.0/chatterbox-api-settings.png)
##### Device
Auto-detects best available option
##### Model
Default Chatterbox model optimized for speed
##### Chunk size
Split text into chunks of this size. Smaller values will increase responsiveness at the cost of lost context between chunks (e.g. appropriate inflection). 0 = no chunking.
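To illustrate the trade-off, a minimal sentence-packing chunker might look like the sketch below; the `chunk_text` name and splitting rules are assumptions, not the agent's actual implementation:

```python
import re

def chunk_text(text: str, chunk_size: int) -> list[str]:
    """Greedily pack whole sentences into chunks of at most chunk_size characters.

    chunk_size = 0 means no chunking (the full text is one chunk).
    """
    if chunk_size <= 0:
        return [text]
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > chunk_size:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

# Smaller chunk_size => audio for the first chunk starts sooner, but each chunk
# is synthesized without knowledge of the ones before it, so inflection may drift.
```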
## Adding Chatterbox Voices
### Voice Requirements
Chatterbox voices require:
- Reference audio file (.wav format, 5-15 seconds optimal)
- Clear speech with minimal background noise
- Single speaker throughout the sample
### Creating a Voice
1. Open the Voice Library
2. Click **:material-plus: New**
3. Select "Chatterbox" as the provider
4. Configure the voice:
![Add Chatterbox voice](/talemate/img/0.32.0/add-chatterbox-voice.png)
**Label:** Descriptive name (e.g., "Marcus - Deep Male")
**Voice ID / Upload File:** Upload a .wav file containing the voice sample. The uploaded reference audio will also be the voice ID.
**Speed:** Adjust playback speed (0.5 to 2.0, default 1.0)
**Tags:** Add descriptive tags for organization
**Extra voice parameters**
There are some optional parameters that can be set here on a per-voice level.
![Chatterbox extra voice parameters](/talemate/img/0.32.0/chatterbox-parameters.png)
##### Exaggeration Level
Exaggeration (Neutral = 0.5, extreme values can be unstable). Higher exaggeration tends to speed up speech; reducing cfg helps compensate with slower, more deliberate pacing.
##### CFG / Pace
If the reference speaker has a fast speaking style, lowering cfg to around 0.3 can improve pacing.

View File

@@ -1,7 +1,41 @@
# ElevenLabs
If you have not configured the ElevenLabs TTS API, the voice agent will show that the API key is missing.
Professional voice synthesis with voice cloning capabilities using ElevenLabs API.
![Elevenlaps api key missing](/talemate/img/0.26.0/voice-agent-missing-api-key.png)
![ElevenLabs API settings](/talemate/img/0.32.0/elevenlabs-api-settings.png)
See the [ElevenLabs API setup](/talemate/user-guide/apis/elevenlabs/) for instructions on how to set up the API key.
## API Setup
ElevenLabs requires an API key. See the [ElevenLabs API setup](/talemate/user-guide/apis/elevenlabs/) for instructions on obtaining and setting an API key.
## Configuration
**Model:** Select from available ElevenLabs models
!!! warning "Voice Limits"
Your ElevenLabs subscription allows you to maintain a set number of voices (10 for the cheapest plan). Any voice that you generate audio for is automatically added to your voices at [https://elevenlabs.io/app/voice-lab](https://elevenlabs.io/app/voice-lab). This also happens when you use the "Test" button. It is recommended to test voices via their voice library instead.
## Adding ElevenLabs Voices
### Getting Voice IDs
1. Go to [https://elevenlabs.io/app/voice-lab](https://elevenlabs.io/app/voice-lab) to view your voices
2. Find or create the voice you want to use
3. Click "More Actions" -> "Copy Voice ID" for the desired voice
![Copy Voice ID](/talemate/img/0.32.0/elevenlabs-copy-voice-id.png)
### Creating a Voice in Talemate
![Add ElevenLabs voice](/talemate/img/0.32.0/add-elevenlabs-voice.png)
1. Open the Voice Library
2. Click "Add Voice"
3. Select "ElevenLabs" as the provider
4. Configure the voice:
**Label:** Descriptive name for the voice
**Provider ID:** Paste the ElevenLabs voice ID you copied
**Tags:** Add descriptive tags for organization

View File

@@ -0,0 +1,78 @@
# F5-TTS
Local zero shot voice cloning from .wav files.
![F5-TTS configuration](/talemate/img/0.32.0/f5tts-api-settings.png)
##### Device
Auto-detects best available option (GPU preferred)
##### Model
- F5TTS_v1_Base (default, most recent model)
- F5TTS_Base
- E2TTS_Base
##### NFE Step
Number of steps to generate the voice. Higher values result in more detailed voices.
##### Chunk size
Split text into chunks of this size. Smaller values will increase responsiveness at the cost of lost context between chunks (e.g. appropriate inflection). 0 = no chunking.
##### Replace exclamation marks
If checked, exclamation marks will be replaced with periods. This is recommended for `F5TTS_v1_Base` since it seems to over-exaggerate exclamation marks.
## Adding F5-TTS Voices
### Voice Requirements
F5-TTS voices require:
- Reference audio file (.wav format, 10-30 seconds)
- Clear speech with minimal background noise
- Single speaker throughout the sample
- Reference text (optional but recommended)
### Creating a Voice
1. Open the Voice Library
2. Click "Add Voice"
3. Select "F5-TTS" as the provider
4. Configure the voice:
![Add F5-TTS voice](/talemate/img/0.32.0/add-f5tts-voice.png)
**Label:** Descriptive name (e.g., "Emma - Calm Female")
**Voice ID / Upload File:** Upload a .wav file containing the **reference audio** voice sample. The uploaded reference audio will also be the voice ID.
- Use 6-10 second samples (longer doesn't improve quality)
- Ensure clear speech with minimal background noise
- Record at natural speaking pace
**Reference Text:** Enter the exact text spoken in the reference audio for improved quality
- Enter exactly what is spoken in the reference audio
- Include proper punctuation and capitalization
- Improves voice cloning accuracy significantly
**Speed:** Adjust playback speed (0.5 to 2.0, default 1.0)
**Tags:** Add descriptive tags (gender, age, style) for organization
**Extra voice parameters**
There are some optional parameters that can be set here on a per-voice level.
![F5-TTS extra voice parameters](/talemate/img/0.32.0/f5tts-parameters.png)
##### Speed
Allows you to adjust the speed of the voice.
##### CFG Strength
A higher CFG strength generally leads to more faithful reproduction of the input text, while a lower CFG strength can result in more varied or creative speech output, potentially at the cost of text-to-speech accuracy.

View File

@@ -0,0 +1,15 @@
# Google Gemini-TTS
Google Gemini-TTS provides access to Google's text-to-speech service.
## API Setup
Google Gemini-TTS requires a Google Cloud API key.
See the [Google Cloud API setup](/talemate/user-guide/apis/google/) for instructions on obtaining an API key.
## Configuration
![Google TTS settings](/talemate/img/0.32.0/google-tts-api-settings.png)
**Model:** Select from available Google TTS models

View File

@@ -1,6 +1,26 @@
# Overview
Talemate supports Text-to-Speech (TTS) functionality, allowing users to convert text into spoken audio. This document outlines the steps required to configure TTS for Talemate using different providers, including ElevenLabs and a local TTS API.
In 0.32.0 Talemate's TTS (Text-to-Speech) agent has been completely refactored to provide advanced voice capabilities including per-character voice assignment, speaker separation, and support for multiple local and remote APIs. The voice system now includes a comprehensive voice library for managing and organizing voices across all supported providers.
## Key Features
- **Per-character voice assignment** - Each character can have their own unique voice
- **Speaker separation** - Automatic detection and separation of dialogue from narration
- **Voice library management** - Centralized management of all voices across providers
- **Multiple API support** - Support for both local and remote TTS providers
- **Director integration** - Automatic voice assignment for new characters
## Supported APIs
### Local APIs
- **Kokoro** - Fastest generation with predefined voice models and mixing
- **F5-TTS** - Fast voice cloning with occasional mispronunciations
- **Chatterbox** - High-quality voice cloning (slower generation)
### Remote APIs
- **ElevenLabs** - Professional voice synthesis with voice cloning
- **Google Gemini-TTS** - Google's text-to-speech service
- **OpenAI** - OpenAI's TTS-1 and TTS-1-HD models
## Enable the Voice agent
@@ -12,28 +32,30 @@ If your voice agent is disabled - indicated by the grey dot next to the agent -
![Agent disabled](/talemate/img/0.26.0/agent-disabled.png) ![Agent enabled](/talemate/img/0.26.0/agent-enabled.png)
!!! note "Ctrl click to toggle agent"
You can use Ctrl click to toggle the agent on and off.
!!! abstract "Next: Connect to a TTS api"
Next you need to decide which service / api to use for audio generation and configure the voice agent accordingly.
## Voice Library Management
- [OpenAI](openai.md)
- [ElevenLabs](elevenlabs.md)
- [Local TTS](local_tts.md)
Voices are managed through the Voice Library, accessible from the main application bar. The Voice Library allows you to:
You can also find more information about the various settings [here](settings.md).
- Add and organize voices from all supported providers
- Assign voices to specific characters
- Create mixed voices (Kokoro)
- Manage both global and scene-specific voice libraries
## Select a voice
See the [Voice Library Guide](voice-library.md) for detailed instructions.
![Elevenlaps voice missing](/talemate/img/0.26.0/voice-agent-no-voice-selected.png)
## Character Voice Assignment
Click on the agent to open the agent settings.
![Character voice assignment](/talemate/img/0.32.0/character-voice-assignment.png)
Then click on the `Narrator Voice` dropdown and select a voice.
Characters can have individual voices assigned through the Voice Library. When a character has a voice assigned:
![Elevenlaps voice selected](/talemate/img/0.26.0/voice-agent-select-voice.png)
1. Their dialogue will use their specific voice
2. The narrator voice is used for exposition in their messages (with speaker separation enabled)
3. If their assigned voice's API is not available, it falls back to the narrator voice
The selection is saved automatically, click anywhere outside the agent window to close it.
The Voice agent status will show all assigned character voices and their current status.
The Voice agent should now show that the voice is selected and be ready to use.
![Elevenlabs ready](/talemate/img/0.26.0/elevenlabs-ready.png)
![Voice agent status with characters](/talemate/img/0.32.0/voice-agent-status-characters.png)

View File

@@ -0,0 +1,55 @@
# Kokoro
Kokoro provides predefined voice models and voice mixing capabilities for creating custom voices.
## Using Predefined Voices
Kokoro comes with built-in voice models that are ready to use immediately.
Available predefined voices include various male and female voices with different characteristics.
## Creating Mixed Voices
Kokoro allows you to mix voices together to create a new voice.
### Voice Mixing Interface
To create a mixed voice:
1. Open the Voice Library
2. Click ":material-plus: New"
3. Select "Kokoro" as the provider
4. Choose ":material-tune:Mixer" option
5. Configure the mixed voice:
![Voice mixing interface](/talemate/img/0.32.0/kokoro-mixer.png)
**Label:** Descriptive name for the mixed voice
**Base Voices:** Select 2-4 existing Kokoro voices to combine
**Weights:** Set the influence of each voice (0.1 to 1.0)
**Tags:** Descriptive tags for organization
### Weight Configuration
Each selected voice can have its weight adjusted:
- Higher weights make that voice more prominent in the mix
- Lower weights make that voice more subtle
- Total weights need to sum to 1.0
- Experiment with different combinations to achieve desired results
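Conceptually, mixing behaves like a weighted blend. The sketch below assumes voices can be represented as embedding vectors and that weights are normalized to sum to 1.0; the function and voice names are hypothetical and not Kokoro's actual API:

```python
import numpy as np

def mix_voices(voices: dict[str, np.ndarray], weights: dict[str, float]) -> np.ndarray:
    """Blend voice embeddings by normalized weights (conceptual sketch only)."""
    total = sum(weights.values())
    normalized = {name: weight / total for name, weight in weights.items()}
    # Weighted sum of the embeddings; keys of `voices` and `weights` must match.
    return sum(normalized[name] * embedding for name, embedding in voices.items())

# e.g. a 70/30 blend of two hypothetical voices:
# mixed = mix_voices(
#     {"voice_a": emb_a, "voice_b": emb_b},
#     {"voice_a": 0.7, "voice_b": 0.3},
# )
```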
### Saving Mixed Voices
Once configured, click "Add Voice". Mixed voices are saved to your voice library and can be:
- Assigned to characters
- Used as narrator voices
just like any other voice.
Saving a mixed voice may take a moment to complete.

View File

@@ -1,53 +0,0 @@
# Local TTS
!!! warning
This has not been tested in a while and may not work as expected. It will likely be replaced with something different in the future. If this approach is currently broken its likely to remain so until it is replaced.
For running a local TTS API, Talemate requires specific dependencies to be installed.
### Windows Installation
Run `install-local-tts.bat` to install the necessary requirements.
### Linux Installation
Execute the following command:
```bash
pip install TTS
```
### Model and Device Configuration
1. Choose a TTS model from the [Coqui TTS model list](https://github.com/coqui-ai/TTS).
2. Decide whether to use `cuda` or `cpu` for the device setting.
3. The first time you run TTS through the local API, it will download the specified model. Please note that this may take some time, and the download progress will be visible in the Talemate backend output.
Example configuration snippet:
```yaml
tts:
device: cuda # or 'cpu'
model: tts_models/multilingual/multi-dataset/xtts_v2
```
### Voice Samples Configuration
Configure voice samples by setting the `value` field to the path of a .wav file voice sample. Official samples can be downloaded from [Coqui XTTS-v2 samples](https://huggingface.co/coqui/XTTS-v2/tree/main/samples).
Example configuration snippet:
```yaml
tts:
voices:
- label: English Male
value: path/to/english_male.wav
- label: English Female
value: path/to/english_female.wav
```
## Saving the Configuration
After configuring the `config.yaml` file, save your changes. Talemate will use the updated settings the next time it starts.
For more detailed information on configuring Talemate, refer to the `config.py` file in the Talemate source code and the `config.example.yaml` file for a barebone configuration example.

View File

@@ -8,16 +8,12 @@ See the [OpenAI API setup](/apis/openai.md) for instructions on how to set up th
## Settings
![Voice agent openai settings](/talemate/img/0.26.0/voice-agent-openai-settings.png)
![Voice agent openai settings](/talemate/img/0.32.0/openai-tts-api-settings.png)
##### Model
Which model to use for generation.
- GPT-4o Mini TTS
- TTS-1
- TTS-1 HD
!!! quote "OpenAI API documentation on quality"
For real-time applications, the standard tts-1 model provides the lowest latency but at a lower quality than the tts-1-hd model. Due to the way the audio is generated, tts-1 is likely to generate content that has more static in certain situations than tts-1-hd. In some cases, the audio may not have noticeable differences depending on your listening device and the individual person.
Generally i have found that HD is fast enough for talemate, so this is the default.
- TTS-1 HD

View File

@@ -1,36 +1,65 @@
# Settings
![Voice agent settings](/talemate/img/0.26.0/voice-agent-settings.png)
![Voice agent settings](/talemate/img/0.32.0/voice-agent-settings.png)
##### API
##### Enabled APIs
The TTS API to use for voice generation.
Select which TTS APIs to enable. You can enable multiple APIs simultaneously:
- OpenAI
- ElevenLabs
- Local TTS
- **Kokoro** - Fastest generation with predefined voice models and mixing
- **F5-TTS** - Fast voice cloning with occasional mispronunciations
- **Chatterbox** - High-quality voice cloning (slower generation)
- **ElevenLabs** - Professional voice synthesis with voice cloning
- **Google Gemini-TTS** - Google's text-to-speech service
- **OpenAI** - OpenAI's TTS-1 and TTS-1-HD models
!!! note "Multi-API Support"
You can enable multiple APIs and assign different voices from different providers to different characters. The system will automatically route voice generation to the appropriate API based on the voice assignment.
##### Narrator Voice
The voice to use for narration. Each API will come with its own set of voices.
The default voice used for narration and as a fallback for characters without assigned voices.
![Narrator voice](/talemate/img/0.26.0/voice-agent-select-voice.png)
The dropdown shows all available voices from all enabled APIs, with the format: "Voice Name (Provider)"
!!! note "Local TTS"
For local TTS, you will have to provide voice samples yourself. See [Local TTS Instructions](local_tts.md) for more information.
!!! info "Voice Management"
Voices are managed through the Voice Library, accessible from the main application bar. Adding, removing, or modifying voices should be done through the Voice Library interface.
##### Generate for player
##### Speaker Separation
Whether to generate voice for the player. If enabled, whenever the player speaks, the voice agent will generate audio for them.
Controls how dialogue is separated from exposition in messages:
##### Generate for NPCs
- **No separation** - Character messages use character voice entirely, narrator messages use narrator voice
- **Simple** - Basic separation of dialogue from exposition using punctuation analysis, with exposition being read by the narrator voice
- **Mixed** - Enables AI assisted separation for narrator messages and simple separation for character messages
- **AI assisted** - AI assisted separation for both narrator and character messages
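As a rough idea of what the punctuation-based approach in Simple mode might do, here is a minimal sketch that treats spans in straight double quotes as character dialogue and everything else as narration (a hypothetical helper, not the agent's actual logic):

```python
import re

def separate_speakers(message: str) -> list[tuple[str, str]]:
    """Split a message into (speaker, text) segments using quotation marks.

    Quoted spans are treated as character dialogue; everything else is
    exposition read by the narrator. Straight double quotes only.
    """
    segments = []
    for part in re.split(r'("[^"]*")', message):
        part = part.strip()
        if not part:
            continue
        if part.startswith('"') and part.endswith('"'):
            segments.append(("character", part.strip('"')))
        else:
            segments.append(("narrator", part))
    return segments

# separate_speakers('She smiled. "Follow me," she said, and turned away.')
# -> [("narrator", "She smiled."), ("character", "Follow me,"),
#     ("narrator", "she said, and turned away.")]
```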
Whether to generate voice for NPCs. If enabled, whenever a non player character speaks, the voice agent will generate audio for them.
!!! warning "AI Assisted Performance"
AI-assisted speaker separation sends additional prompts to your LLM, which may impact response time and API costs.
##### Generate for narration
##### Auto-generate for player
Whether to generate voice for narration. If enabled, whenever the narrator speaks, the voice agent will generate audio for them.
Generate voice automatically for player messages
##### Split generation
##### Auto-generate for AI characters
If enabled, the voice agent will generate audio in chunks, allowing for faster generation. This does however cause it lose context between chunks, and inflection may not be as good.
Generate voice automatically for NPC/AI character messages
##### Auto-generate for narration
Generate voice automatically for narrator messages
##### Auto-generate for context investigation
Generate voice automatically for context investigation messages
## Advanced Settings
Advanced settings are configured per-API and can be found in the respective API configuration sections:
- **Chunk size** - Maximum text length per generation request
- **Model selection** - Choose specific models for each API
- **Voice parameters** - Provider-specific voice settings
!!! tip "Performance Optimization"
Each API has different optimal chunk sizes and parameters. The system automatically handles chunking and queuing for optimal performance across all enabled APIs.

View File

@@ -0,0 +1,156 @@
# Voice Library
The Voice Library is the central hub for managing all voices across all TTS providers in Talemate. It provides a unified interface for organizing, creating, and assigning voices to characters.
## Accessing the Voice Library
The Voice Library can be accessed from the main application bar at the top of the Talemate interface.
![Voice Library access](/talemate/img/0.32.0/voice-library-access.png)
Click the voice icon to open the Voice Library dialog.
!!! note "Voice agent needs to be enabled"
The Voice agent needs to be enabled for the voice library to be available.
## Voice Library Interface
![Voice Library interface](/talemate/img/0.32.0/voice-library-interface.png)
The Voice Library interface consists of:
### Scope Tabs
- **Global** - Voices available across all scenes
- **Scene** - Voices specific to the current scene (only visible when a scene is loaded)
- **Characters** - Character voice assignments for the current scene (only visible when a scene is loaded)
### API Status
The toolbar shows the status of all TTS APIs:
- **Green** - API is enabled and ready
- **Orange** - API is enabled but not configured
- **Red** - API has configuration issues
- **Gray** - API is disabled
![API status](/talemate/img/0.32.0/voice-library-api-status.png)
## Managing Voices
### Global Voice Library
The global voice library contains voices that are available across all scenes. These include:
- Default voices provided by each TTS provider
- Custom voices you've added
#### Adding New Voices
To add a new voice:
1. Click the "+ New" button
2. Select the TTS provider
3. Configure the voice parameters:
- **Label** - Display name for the voice
- **Provider ID** - Provider-specific identifier
- **Tags** - Free-form descriptive tags you define (gender, age, style, etc.)
- **Parameters** - Provider-specific settings
Check the provider specific documentation for more information on how to configure the voice.
#### Voice Types by Provider
**F5-TTS & Chatterbox:**
- Upload .wav reference files for voice cloning
- Specify reference text for better quality
- Adjust speed and other parameters
**Kokoro:**
- Select from predefined voice models
- Create mixed voices by combining multiple models
- Adjust voice mixing weights
**ElevenLabs:**
- Select from available ElevenLabs voices
- Configure voice settings and stability
- Use custom cloned voices from your ElevenLabs account
**OpenAI:**
- Choose from available OpenAI voice models
- Configure model (GPT-4o Mini TTS, TTS-1, TTS-1-HD)
**Google Gemini-TTS:**
- Select from Google's voice models
- Configure language and gender settings
### Scene Voice Library
Scene-specific voices are only available within the current scene. This is useful for:
- Scene-specific characters
- Temporary voice experiments
- Custom voices for specific scenarios
Scene voices are saved with the scene and will be available when the scene is loaded.
## Character Voice Assignment
### Automatic Assignment
The Director agent can automatically assign voices to new characters based on:
- Character tags and attributes
- Voice tags matching character personality
- Available voices in the voice library
This feature can be enabled in the Director agent settings.
### Manual Assignment
![Character voice assignment](/talemate/img/0.32.0/character-voice-assignment.png)
To manually assign a voice to a character:
1. Go to the "Characters" tab in the Voice Library
2. Find the character in the list
3. Click the voice dropdown for that character
4. Select a voice from the available options
5. The assignment is saved automatically
### Character Voice Status
The character list shows:
- **Character name**
- **Currently assigned voice** (if any)
- **Voice status** - whether the voice's API is available
- **Quick assignment controls**
## Voice Tags and Organization
### Tagging System
Voices can be tagged with any descriptive attributes you choose. Tags are completely free-form and user-defined. Common examples include:
- **Gender**: male, female, neutral
- **Age**: young, mature, elderly
- **Style**: calm, energetic, dramatic, mysterious
- **Quality**: deep, high, raspy, smooth
- **Character types**: narrator, villain, hero, comic relief
- **Custom tags**: You can create any tags that help you organize your voices
### Filtering and Search
Use the search bar to filter voices by:
- Voice label/name
- Provider
- Tags
- Character assignments
This makes it easy to find the right voice for specific characters or situations.

View File

@@ -0,0 +1,82 @@
# Reasoning Model Support
Talemate supports reasoning models that can perform step-by-step thinking before generating their final response. This feature allows models to work through complex problems internally before providing an answer.
## Enabling Reasoning Support
To enable reasoning support for a client:
1. Open the **Clients** dialog from the main toolbar
2. Select the client you want to configure
3. Navigate to the **Reasoning** tab in the client configuration
![Client reasoning configuration](/talemate/img/0.32.0/client-reasoning-2.png)
4. Check the **Enable Reasoning** checkbox
## Configuring Reasoning Tokens
Once reasoning is enabled, you can configure the **Reasoning Tokens** setting using the slider:
![Reasoning tokens configuration](/talemate/img/0.32.0/client-reasoning.png)
### Recommended Token Amounts
**For local reasoning models:** Use a high token allocation (recommended: 4096 tokens) to give the model sufficient space for complex reasoning.
**For remote APIs:** Start with lower amounts (512-1024 tokens) and adjust based on your needs and token costs.
### Token Allocation Behavior
The behavior of the reasoning tokens setting depends on your API provider:
**For APIs that support direct reasoning token specification:**
- The specified tokens will be allocated specifically for reasoning
- The model will use these tokens for internal thinking before generating the response
**For APIs that do NOT support reasoning token specification:**
- The tokens are added as extra allowance to the response token limit for ALL requests
- This may lead to more verbose responses than usual since Talemate normally uses response token limits to control verbosity
!!! warning "Increased Verbosity"
For providers without direct reasoning token support, enabling reasoning may result in more verbose responses since the extra tokens are added to all requests.
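As a rough sketch of the difference between the two behaviors (purely illustrative; the function and the numbers are hypothetical and not Talemate's actual request code):
```python
def effective_request_budget(
    response_tokens: int,
    reasoning_tokens: int,
    supports_reasoning_budget: bool,
) -> dict:
    """Illustrative only: how the two provider behaviors differ."""
    if supports_reasoning_budget:
        # Reasoning tokens are passed separately; the visible response
        # limit stays unchanged.
        return {
            "max_response_tokens": response_tokens,
            "reasoning_tokens": reasoning_tokens,
        }
    # No dedicated reasoning budget: the extra tokens are folded into the
    # response limit for every request, which can increase verbosity.
    return {"max_response_tokens": response_tokens + reasoning_tokens}

print(effective_request_budget(400, 1024, True))
# {'max_response_tokens': 400, 'reasoning_tokens': 1024}
print(effective_request_budget(400, 1024, False))
# {'max_response_tokens': 1424}
```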
## Response Pattern Configuration
When reasoning is enabled, you may need to configure a **Pattern to strip from the response** to remove the thinking process from the final output.
### Default Patterns
Talemate provides quick-access buttons for common reasoning patterns:
- **Default** - Uses the built-in pattern: `.*?</think>`
- **`.*?◁/think▷`** - For models using arrow-style thinking delimiters
- **`.*?</think>`** - For models using XML-style think tags
### Custom Patterns
You can also specify a custom regular expression pattern that matches your model's reasoning format. This pattern will be used to strip the thinking tokens from the response before displaying it to the user.
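For example, a pattern such as `.*?</think>` is applied non-greedily across the whole response so that everything up to and including the closing tag is removed. A minimal sketch of the stripping step (the helper below is illustrative, not Talemate's internal code):
```python
import re

# Illustrative only: strip everything up to and including the closing
# </think> tag, matching across newlines so multi-line reasoning is removed.
THINK_PATTERN = r".*?</think>"

def strip_reasoning(response: str, pattern: str = THINK_PATTERN) -> str:
    return re.sub(pattern, "", response, count=1, flags=re.DOTALL).lstrip()

raw = "<think>\nThe user wants a short answer...\n</think>\nIt is raining."
print(strip_reasoning(raw))  # -> "It is raining."
```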
## Model Compatibility
Not all models support reasoning. This feature works best with:
- Models specifically trained for chain-of-thought reasoning
- Models that support structured thinking patterns
- APIs that provide reasoning token specification
## Important Notes
- **Coercion Disabled**: When reasoning is enabled, LLM coercion (pre-filling responses) is automatically disabled since reasoning models need to generate their complete thought process
- **Response Time**: Reasoning models may take longer to respond as they work through their thinking process
## Troubleshooting
### Pattern Not Working
If the reasoning pattern isn't properly stripping the thinking process:
1. Check your model's actual reasoning output format
2. Adjust the regular expression pattern to match your model's specific format
3. Test with the default pattern first to see if it works

View File

@@ -0,0 +1,70 @@
# Template Locking
Template locking allows you to prevent a client's prompt template from automatically updating when the model changes. This is useful when you want to maintain a consistent prompt format across different models or when you've customized a template for a specific use case.
## What is Template Locking?
By default, Talemate automatically determines the appropriate prompt template for each model you load. When you switch models, the prompt template updates to match the new model's requirements. Template locking disables this automatic behavior, keeping your selected template fixed regardless of model changes.
## When to Use Template Locking
Some models have reasoning and non-reasoning variants of their templates. This allows you to lock one client to the reasoning template and another to the non-reasoning template.
## How to Lock a Template
### Step 1: Open Client Settings
Start with your client that has a template already assigned (either automatically detected or manually selected):
![Lock Template - Starting Point](/talemate/img/0.33.0/client-lock-template-0001.png)
1. Open the client settings by clicking the cogwheels icon next to the client
2. Review the currently assigned template in the preview area
### Step 2: Enable Template Lock
When you check the **Lock Template** checkbox, the current template selection is cleared and you must select which template to lock:
![Lock Template - Select Template](/talemate/img/0.33.0/client-lock-template-0002.png)
1. Check the **Lock Template** checkbox
2. You'll see the message: "Please select a prompt template to lock for this client"
3. Select your desired template from the dropdown menu
This gives you the opportunity to choose which specific template you want to lock, rather than automatically locking whatever template happened to be active.
### Step 3: Template Locked
Once you've selected a template and clicked **Save**:
![Lock Template - Locked](/talemate/img/0.33.0/client-lock-template-0003.png)
- The template display shows your locked template with its name (e.g., `TextGenWebUI__LOCK.jinja2`)
- The template will no longer automatically update when you change models
- The lock icon indicates the template is fixed
## Understanding the Lock Template Setting
When the **Lock Template** checkbox is enabled:
- The prompt template will not automatically update when you change models
When disabled:
- Talemate automatically determines the best template for your loaded model
- Templates update when you switch models
- The system attempts to match templates via HuggingFace metadata
## Unlocking a Template
To return to automatic template detection:
1. Open the client settings
2. Uncheck the **Lock Template** checkbox
3. Click **Save**
4. Re-open the client settings and confirm that the template is no longer locked and the correct template is selected.
## Related Topics
- [Prompt Templates](/talemate/user-guide/clients/prompt-templates/) - Learn more about how prompt templates work
- [Client Configuration](/talemate/user-guide/clients/) - General client setup and configuration

View File

@@ -35,4 +35,19 @@ A unique name for the client that makes sense to you.
Which model to use. Currently defaults to `gpt-4o`.
!!! note "Talemate lags behind OpenAI"
When OpenAI adds a new model, it currently requires a Talemate update to add it to the list of available models. We are working on making this more dynamic.
##### Reasoning models (o1, o3, gpt-5)
!!! important "Enable reasoning and allocate tokens"
The `o1`, `o3`, and `gpt-5` families are reasoning models. They always perform internal thinking before producing the final answer. To use them effectively in Talemate:
- Enable the **Reasoning** option in the client configuration.
- Set **Reasoning Tokens** to a sufficiently high value to make room for the model's thinking process.
A good starting range is 512-1024 tokens. Increase it if your tasks are complex. Without enabling reasoning and allocating tokens, these models may return minimal or empty visible content because the token budget is consumed by internal reasoning.
See the detailed guide: [Reasoning Model Support](/talemate/user-guide/clients/reasoning/).
!!! tip "Getting empty responses?"
If these models return empty or very short answers, it usually means the reasoning budget was exhausted. Increase **Reasoning Tokens** and try again.

View File

@@ -14,7 +14,7 @@ We **do not** care about changing any of the actual loop logic.
With this in mind we can extend a new scene loop from the default talemate loop. This will give us a copy of the default loop that we can add to while keeping the rest of the loop logic up to date with any future improvements.
The `scene-loop` module should already be selected in the **:material-tree-view: Modules** library.
![Scene Loop](./img/2-0001.png)

Binary image files updated (10 files); content not shown in this diff.
View File

@@ -0,0 +1,333 @@
# Collector Nodes
!!! info "New in version 0.33"
Collector nodes with dynamic sockets were introduced in Talemate version 0.33.0.
Collector nodes are specialized nodes that aggregate multiple inputs into a single data structure. They feature **dynamic sockets** that can be added or removed as needed, making them flexible for various data aggregation scenarios.
## Types of Collector Nodes
### Dict Collector
![Dict Collector](../img/dict-collector.png)
**Expected output from the example above:**
```json
{
"some_variable": "hello",
"auto_save": true
}
```
The **Dict Collector** node collects key-value pairs into a dictionary.
**Inputs:**
- `dict` (optional): An existing dictionary to add items to
- `item{i}` (dynamic): Multiple dynamic input slots for key-value pairs
**Outputs:**
- `dict`: The resulting dictionary containing all collected key-value pairs
**How it works:**
1. Each dynamic input can accept either:
- A **tuple** in the format `(key, value)` from nodes like the **Make Key-Value Pair** node (see [Working with Key-Value Pairs](#working-with-key-value-pairs) below)
- Any **value**, in which case the key is inferred from the connected node's properties
2. When a tuple `(key, value)` is provided, the key and value are directly added to the dictionary
3. When a non-tuple value is provided, the collector attempts to infer the key name by checking the source node for:
- A `name` property/input
- A `key` property/input
- An `attribute` property/input
- Falls back to the socket name if none are found
4. The final dictionary contains all collected key-value pairs
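The fallback chain described in step 3 can be pictured roughly like this (pure illustration in Python form; the real node implementation may differ):
```python
def infer_key(source_node: dict, socket_name: str) -> str:
    """Illustrative sketch of the key-inference fallback described above."""
    for candidate in ("name", "key", "attribute"):
        value = source_node.get(candidate)  # property or input value
        if value:
            return value
    # Nothing usable on the source node: fall back to the socket name.
    return socket_name

# Example: a "GET local.some_variable" node exposes name="some_variable",
# so the inferred dictionary key becomes "some_variable".
print(infer_key({"name": "some_variable"}, "item0"))  # -> "some_variable"
```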
**Common use cases:**
- Collecting multiple values into a structured dictionary
- Building configuration objects from separate nodes
- Aggregating character properties or attributes
- Creating data structures from unpacked objects
### List Collector
![List Collector](../img/list-collector.png)
**Expected output from the example above:**
```json
["hello", "world"]
```
The **List Collector** node collects items into a list.
**Inputs:**
- `list` (optional): An existing list to append items to
- `item{i}` (dynamic): Multiple dynamic input slots for list items
**Outputs:**
- `list`: The resulting list containing all collected items
**How it works:**
1. Each dynamic input accepts any value type
2. Values are appended to the list in the order of the dynamic inputs (item0, item1, item2, etc.)
3. If an existing list is provided, new items are appended to it
4. If no list is provided, a new empty list is created
**Common use cases:**
- Collecting multiple values into an ordered list
- Aggregating results from multiple nodes
- Building lists for iteration or processing
- Combining outputs from multiple sources
## Managing Dynamic Sockets
Collector nodes support **dynamic input sockets** that can be added or removed as needed.
### Adding Input Sockets
There are two ways to add input sockets:
1. **Using the on-node buttons**: Click the green **+** button at the bottom of the node
2. **Using the context menu**: Right-click the node and select `Add Input Slot`
### Removing Input Sockets
There are two ways to remove input sockets:
1. **Using the on-node buttons**: Click the red **-** button at the bottom of the node (removes the last dynamic input)
2. **Using the context menu**: Right-click the node and select `Remove Last Input`
!!! note "Only the last dynamic input can be removed"
You can only remove the most recently added dynamic input. To remove an input from the middle, you must first remove all inputs after it.
## Working with Key-Value Pairs
When working with **Dict Collector**, you'll often use the **Make Key-Value Pair** node to create properly formatted tuple outputs.
**Make Key-Value Pair Node:**
- **Inputs**: `key` (string), `value` (any type)
- **Outputs**: `kv` (tuple), `key` (string), `value` (any)
The `kv` output produces a tuple in the format `(key, value)` which can be directly connected to Dict Collector's dynamic inputs.
**Example:**
```
[Make Key-Value Pair]     [Make Key-Value Pair]     [Get State]
  key: "name"               key: "age"                scope: local
  value: "Alice"            value: 30                 name: "works"
       ↓ kv                      ↓ kv                      ↓ value
     item0                     item1                     item2
       └─────────────────────────┼─────────────────────────┘
                                 ↓
                         [Dict Collector]
                                 ↓
               {"name": "Alice", "age": 30, "works": true}
```
## Nested Collection
Collector nodes can be chained together to create nested data structures:
- **Dict into Dict**: A Dict Collector's output can feed into another Dict Collector's `dict` input to add more key-value pairs
- **List into List**: A List Collector's output can feed into another List Collector's `list` input to combine lists
- **Mixed nesting**: Lists can contain dictionaries and vice versa
**Example of nested collection:**
```
[Dict Collector A] ──→ [Dict Collector B]
  item0: ("x", 10)       dict:  (from Dict Collector A)
  item1: ("y", 20)       item0: ("z", 30)

Result: {"x": 10, "y": 20, "z": 30}
```
## Examples
### Dict Collector Example
In the Dict Collector screenshot above, two nodes are connected:
- **item0**: `value` output from "GET local.some_variable"
- **item1**: `auto_save` output from "Get Scene State"
Since no explicit key-value tuples are provided, the Dict Collector uses key inference:
- **item0**: Infers key from the source node's `name` property → `"some_variable"`
- **item1**: Uses the socket name → `"auto_save"`
**Expected output:**
```python
{
"some_variable": <value from local.some_variable>,
"auto_save": <value from scene state's auto_save property>
}
```
For example, if `local.some_variable` contains `"hello"` and `auto_save` is `True`:
```python
{
"some_variable": "hello",
"auto_save": True
}
```
### List Collector Example
In the List Collector screenshot above, two Get State nodes are connected:
- **item0**: `value` from "GET local.some_variable"
- **item1**: `value` from "GET local.other_variable"
The List Collector simply appends values in order.
**Expected output:**
```python
[
<value from local.some_variable>,
<value from local.other_variable>
]
```
For example, if `local.some_variable` contains `"hello"` and `local.other_variable` contains `"world"`:
```python
["hello", "world"]
```
## String Formatting with Collected Data
Collector nodes work seamlessly with formatting nodes to convert collected data into formatted strings. Two powerful formatting nodes that support dynamic inputs are **Advanced Format** and **Jinja2 Format**.
### Advanced Format
![Advanced Format](../img/advanced-format.png)
**Expected output from the example above:**
```json
"hello world"
```
The **Advanced Format** node uses Python-style string formatting with dynamic inputs similar to Dict Collector.
!!! warning "Use single curly braces `{}`"
Advanced Format uses Python's `.format()` syntax. Variables are referenced with **single curly braces**: `{variable_name}`
**Inputs:**
- `template` (required): A format string with placeholders (e.g., `"hello {hello}"`)
- `variables` (optional): Base dictionary to merge with dynamic inputs
- `item{i}` (dynamic): Multiple dynamic input slots for format variables
**Outputs:**
- `result`: The formatted string
**How it works:**
1. Each dynamic input can accept either:
- A **tuple** in the format `(key, value)`
- Any **value**, in which case the key is inferred from the connected node's properties (same logic as Dict Collector)
2. Variables from dynamic inputs are merged with the optional `variables` dictionary
3. The template is formatted using Python's `.format()` method with all collected variables
4. Placeholders in the template like `{hello}` are replaced with their corresponding values
**Example from screenshot:**
- Template: `"hello {hello}"`
- Dynamic input `item0`: Connected to "GET local.hello" which has value `"world"`
- Key inferred as `"hello"` from the source node's `name` property
- Result: `"hello world"`
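Conceptually this is plain Python `str.format()` applied to the merged variables. A quick stand-alone illustration (not the node's actual code):
```python
# The node merges its dynamic inputs into a variables dict, then formats.
variables = {"hello": "world"}        # key inferred from "GET local.hello"
template = "hello {hello}"            # single curly braces

print(template.format(**variables))   # -> "hello world"
```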
### Jinja2 Format
![Jinja2 Format](../img/jinja2-format.png)
**Expected output from the example above:**
```json
"hello world"
```
The **Jinja2 Format** node extends Advanced Format but uses Jinja2 templating instead of Python's `.format()`.
!!! warning "Use double curly braces `{{}}`"
Jinja2 Format uses Jinja2 template syntax. Variables are referenced with **double curly braces**: `{{ variable_name }}`
**Inputs:**
- `template` (required): A Jinja2 template string (e.g., `"hello {{ hello }}"`)
- `variables` (optional): Base dictionary to merge with dynamic inputs
- `item{i}` (dynamic): Multiple dynamic input slots for template variables
**Outputs:**
- `result`: The rendered template string
**How it works:**
1. Inherits all the dynamic input behavior from Advanced Format
2. Uses Jinja2 template syntax: `{{ variable }}` for variables, `{% %}` for logic
3. Supports all Jinja2 features: filters, conditionals, loops, etc.
4. Perfect for complex formatting needs beyond simple placeholder replacement
**Example from screenshot:**
- Template: `"hello {{ hello }}"`
- Dynamic input `item0`: Connected to "GET local.hello" with value `"world"`
- Key inferred as `"hello"` from the source node's `name` property
- Result: `"hello world"`
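The same example rendered with Jinja2 looks like this. Because Jinja2 supports filters and conditionals, more complex templates are possible (stand-alone illustration, assuming the `jinja2` package is available):
```python
from jinja2 import Template

variables = {"hello": "world"}

print(Template("hello {{ hello }}").render(**variables))
# -> "hello world"

# Jinja2-only features such as filters and conditionals are also available:
print(Template("{% if hello %}hello {{ hello | upper }}!{% endif %}").render(**variables))
# -> "hello WORLD!"
```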
### Combining Collectors with Formatters
A common pattern is to use collectors to gather data, then format it into strings:
```
[Get State] ──→ item0
[Get State] ──→ item1
                  ↓
           [Dict Collector] ──→ variables ──→ [Advanced Format]
                  ↓                             template: "Name: {name}, Age: {age}"
     {"name": "Alice", "age": 30}
```
Or use dynamic inputs directly on the formatter:
```
[Get State] ──→ item0
[Get State] ──→ item1
                  ↓
          [Advanced Format]
            template: "{name} is {age} years old"
```
### Format vs Advanced Format vs Jinja2 Format
| Feature | Format | Advanced Format | Jinja2 Format |
|---------|--------|-----------------|---------------|
| **Dynamic Inputs** | No | Yes | Yes |
| **Syntax** | Python `.format()` | Python `.format()` | Jinja2 `{{ }}` |
| **Key Inference** | No | Yes | Yes |
| **Conditionals** | No | No | Yes |
| **Loops** | No | No | Yes |
| **Filters** | No | No | Yes |
| **Best For** | Simple static formatting | Dynamic formatting with multiple inputs | Complex templates with logic |

View File

@@ -1,5 +1,8 @@
# Prompt Templates
!!! tip "Want a more streamlined approach?"
If you need quick prompts with automatic context management, check out [Prompt Building](prompt_building.md) for a simplified alternative to manual template construction.
Prompt templates in the node editor allow you to create dynamic, reusable prompts that can be customized with variables and sent to agents for processing.
## Overview

View File

@@ -0,0 +1,191 @@
# Prompt Building
!!! info "New in version 0.33"
The Build Prompt node was introduced in Talemate version 0.33.0.
The **Build Prompt** node is a high-level alternative to manually constructing prompts with templates. It automatically assembles prompts with contextual information based on configurable needs, making it much faster to create agent prompts without managing template files.
## Build Prompt vs Prompt Templates
While [Prompt Templates](prompt-templates.md) give you complete control over every aspect of your prompt construction, **Build Prompt** provides a streamlined approach:
| Feature | Build Prompt | Prompt Templates |
|---------|--------------|------------------|
| **Complexity** | Simple configuration | Full template control |
| **Context Management** | Automatic | Manual |
| **Token Budgeting** | Built-in | Manual calculation |
| **Setup Time** | Fast | Requires template creation |
| **Flexibility** | Predefined options | Complete customization |
| **Best For** | Quick prompts, standard patterns | Complex custom prompts |
**Use Build Prompt when:**
- You need standard contextual prompts quickly
- You want automatic context and memory management
- You're using common prompt patterns (analysis, direction, creation, etc.)
**Use Prompt Templates when:**
- You need precise control over prompt structure
- You have complex custom requirements
- You want to reuse specific prompt patterns across scenes
## The Build Prompt Node
![Build Prompt Example](../img/build_prompt.png)
**Expected output from the example above:**
A fully assembled prompt with:
- Base template structure from `common.base`
- Director agent context
- Dynamic instructions: "Don't over-analyze"
- Scene context with memory, intent, and extra context
- Question: "Answer this question: Is there a tiger in the room?"
### Inputs
- **`state`**: The graph state (required)
- **`agent`**: The agent that will receive the prompt (required)
- **`instructions`** (optional): Text instructions to include in the prompt
- **`dynamic_context`** (optional): List of `DynamicInstruction` objects for context
- **`dynamic_instructions`** (optional): List of `DynamicInstruction` objects for instructions
- **`memory_prompt`** (optional): Semantic query for memory retrieval
### Outputs
- **`state`**: Passthrough of the input state
- **`agent`**: Passthrough of the input agent
- **`prompt`**: The constructed `Prompt` object
- **`rendered`**: The rendered prompt as a string
- **`response_length`**: The calculated response length for token budgeting
### Properties
#### Template Configuration
- **`template_file`** (default: `"base"`): The template file to use
- **`scope`** (default: `"common"`): The template scope (common, agent-specific, etc.)
!!! warning "Template Compatibility"
The Build Prompt node requires templates specifically designed to work with its configuration options. The default `common/base.jinja2` template is pre-configured to respect all the `include_*` boolean properties and dynamic instruction inputs.
If you change the `template_file` or `scope`, ensure the template is compatible with Build Prompt's variable structure, or the context control properties may not work as expected.
#### Context Control
- **`include_scene_intent`** (default: `true`): Include the scene's current intent/direction
- **`include_extra_context`** (default: `true`): Include pins, reinforcements, and content classification
- **`include_memory_context`** (default: `true`): Include relevant memories from the scene via RAG (Retrieval-Augmented Generation). Works best when `memory_prompt` is provided to guide semantic search
- **`include_scene_context`** (default: `true`): Include scene history and state
- **`include_character_context`** (default: `false`): Include active character details
- **`include_gamestate_context`** (default: `false`): Include game state variables
#### Advanced Settings
- **`reserved_tokens`** (default: `312`): Tokens reserved from the context budget before calculating how much scene history to include. This provides a buffer for the response and any template overhead (range: 16-1024)
- **`limit_max_tokens`** (default: `0`): Maximum token limit (0 = use client context limit)
- **`response_length`** (default: `0`): Expected length of the response
- **`technical`** (default: `false`): Include technical context (IDs, typing information)
- **`dedupe_enabled`** (default: `true`): Enable deduplication in the prompt
- **`memory_prompt`** (default: `""`): Semantic query string for memory retrieval. Provide this to guide what memories are retrieved when `include_memory_context` is enabled
- **`prefill_prompt`** (default: `""`): Text to prefill the response
- **`return_prefill_prompt`** (default: `false`): Return the prefill with response
## Dynamic Instructions
Dynamic instructions allow you to inject contextual information into prompts at runtime. They come in two types:
### Dynamic Context
Provides background information and context to the agent. Use the **Dynamic Instruction** node connected to the `dynamic_context` input (typically via a List Collector).
### Dynamic Instructions
Provides specific instructions or directives to the agent. Use the **Dynamic Instruction** node connected to the `dynamic_instructions` input (typically via a List Collector).
**Dynamic Instruction Node Properties:**
- **`header`**: The instruction header/title
- **`content`**: The instruction content/body
## Example Workflow
The example screenshot shows a typical workflow:
1. **Agent Selection**: Get the `director` agent
2. **Dynamic Instructions**: Create a "Don't over-analyze" instruction using the Dynamic Instruction node
3. **Collect Instructions**: Use a List Collector to gather dynamic instructions
4. **Instructions**: Provide the main instruction via Make Text node: "Answer this question: Is there a tiger in the room?"
5. **Build Prompt**: Configure context needs (scene intent, extra context, memory, scene context enabled)
6. **Generate Response**: Send the built prompt to the agent with appropriate settings
## Common Patterns
### Simple Question Prompt
```
[Get Agent] → [Build Prompt] → [Generate Response]
[Make Text: "Your question here"]
```
### Prompt with Dynamic Instructions
```
[Dynamic Instruction] → [List Collector] → [Build Prompt] → [Generate Response]
[Get Agent] ────────────────────────────────────↑
[Make Text: "Main instruction"] ────────────────↑
```
### Analysis with Full Context
Enable all context options:
- `include_scene_intent`: true
- `include_extra_context`: true
- `include_memory_context`: true
- `include_scene_context`: true
- `include_character_context`: true
- `include_gamestate_context`: true
### Quick Direction with Minimal Context
Disable unnecessary context:
- `include_character_context`: false
- `include_gamestate_context`: false
- `include_memory_context`: false (if not needed)
## Token Budget Management
Build Prompt automatically manages token budgets for scene history inclusion:
1. **Start with max context**: Uses `max_tokens` from the agent's client
2. **Subtract reserved tokens**: Removes `reserved_tokens` to create a buffer for the response
3. **Count rendered context**: Calculates tokens used by all enabled contexts (intent, memory, instructions, etc.)
4. **Calculate scene history budget**: `budget = max_tokens - reserved_tokens - count_tokens(rendered_contexts)`
5. **Fill with scene history**: Includes as much scene history as fits in the remaining budget
The `reserved_tokens` setting (default: 312) ensures there's always space reserved for the agent's response and prevents the context from being completely filled. Increase it if responses are getting cut off; decrease it if you want more scene history included.
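With the default settings and a hypothetical 8192-token context window, the arithmetic looks roughly like this (the numbers are made up for illustration):
```python
# Hypothetical numbers for illustration only.
max_tokens = 8192          # client context limit
reserved_tokens = 312      # default buffer for response / template overhead
rendered_contexts = 1450   # tokens used by intent, memory, instructions, ...

scene_history_budget = max_tokens - reserved_tokens - rendered_contexts
print(scene_history_budget)  # -> 6430 tokens available for scene history
```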
## How Build Prompt Works with Templates
The Build Prompt node works by passing all its configuration options as variables to the template. The default `common/base.jinja2` template is specifically structured to:
1. **Check boolean flags**: Uses `{% if include_scene_intent %}` to conditionally include context sections
2. **Include sub-templates**: Dynamically includes templates like `scene-intent.jinja2`, `memory-context.jinja2`, etc.
3. **Process dynamic instructions**: Renders both `dynamic_context` and `dynamic_instructions` lists
4. **Calculate token budgets**: Uses `count_tokens()` to budget space for scene history
5. **Apply prefill prompts**: Handles the `prefill_prompt` and `return_prefill_prompt` options
This is why changing the template requires ensuring compatibility - a different template must expect and use these same variables.
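A heavily simplified sketch of the boolean-flag pattern such a template uses, rendered here through Python (this is not the real `common/base.jinja2`, only an illustration):
```python
from jinja2 import Template

# Not the real common/base.jinja2 -- a toy template showing how the
# include_* flags passed by Build Prompt gate individual context sections.
toy_base = Template(
    "{% if include_scene_intent %}Scene intent: {{ scene_intent }}\n{% endif %}"
    "{% for instruction in dynamic_instructions %}{{ instruction }}\n{% endfor %}"
    "{{ question }}"
)

print(toy_base.render(
    include_scene_intent=True,
    scene_intent="A tense negotiation",
    dynamic_instructions=["Don't over-analyze"],
    question="Answer this question: Is there a tiger in the room?",
))
```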
## Template Scopes
The `scope` property determines where templates are loaded from:
- **`common`**: Shared templates in `templates/prompts/common/`
- **Agent types**: Agent-specific templates in `templates/prompts/{agent_type}/`
- `narrator`, `director`, `creator`, `editor`, `summarizer`, `world_state`
The `template_file` property references the template name without the `.jinja2` extension.

View File

@@ -4,25 +4,31 @@
The node editor is available in the main scene window once the scene is switched to creative mode.
Switch to creative mode through the creative mode toggle in the scene toolbar.
Open the node editor by clicking the :material-chart-timeline-variant-shimmer: icon in the main toolbar (upper left).
![Switch to creative mode](../img/open-node-editor.png)
Exit the node editor the same way by clicking the :material-exit-to-app: icon in the main toolbar (upper left).
![Exit node editor](../img/close-node-editor.png)
## Module Library
![Module Library](../img/user-interface-0001.png)
The **:material-file-tree: Modules** Library can be found at the left sidebar of the editor. If the sidebar is closed, click the :material-file-tree: icon in the main toolbar (upper left) to open it.
It holds all the node modules that talemate has currently installed and is the main way to add new modules to the editor or open existing modules for inspection or editing.
### Module listing
Node modules are organized into hierarchical groups:
- **scene**: Scene-level modules that live with your project
- **agents/{agent}**: Agent-specific modules organized by agent name
- **core**: Core talemate system modules
- **installed/{project}**: Installed modules grouped by project (from `templates/modules/{project}`)
- **templates**: General template modules
All modules can be opened and inspected, but **only scene level modules can be edited**.
@@ -62,6 +68,9 @@ Select the appropriate module type.
| :material-function: Function | Creates a new [function](functions.md) module |
| :material-file: Module | Creates a new module |
| :material-source-branch-sync: Scene Loop | Creates a new scene loop module |
| :material-package-variant: Package | Creates a new package module |
| :material-chat: Director Chat Action | Creates a new director chat action module |
| :material-robot-happy: Agent Websocket Handler | Creates a new agent websocket handler module |
In the upcoming dialog you can name the new module and set the registry path.
@@ -226,6 +235,20 @@ To paste a node, select the location where you want to paste it and hit `Ctrl+V`
You can also hold the `Alt` key and drag the selected node(s) to duplicate them and drag the duplicate to the desired location.
##### Alt+Shift drag to create counterpart
Certain nodes support creating a "counterpart" node. Hold `Alt+Shift` and drag the node to create its paired counterpart node.
For example:
- **Set State → Get State**: Creates a Get State node with matching scope and variable name
- **Input → Output**: Creates the corresponding socket node with matching configuration
The counterpart node is positioned near the original and can be immediately dragged to the desired location.
!!! note "Limited node support"
This feature is currently only available for specific node types like Get/Set State and Input/Output socket nodes.
#### Node Properties
Most nodes come with properties that can be edited. To edit a node property, click on the corresponding input widget in the node.

Binary image files added (5 files); content not shown in this diff.

Some files were not shown because too many files have changed in this diff.