Mirror of https://github.com/vegu-ai/talemate.git
Synced 2025-12-15 19:27:47 +01:00
Branch: main · 23 commits
| SHA1 | Message |
|---|---|
| c179fcd3eb | 0.34.0 (#239): Visual Agent Refactor + Visual Library; Character Card Import Refactor; bug fixes and other improvements |
| 89d16ae513 | 0.33.0 (#229):
* linting * Add cleanup function for recent scenes in config to remove non-existent paths * remove leghacy world state manager buttons * move world state scene tools into sub component * linting * move module properties to navigation drawer * Update icons in NodeEditorLibrary and NodeEditorModuleProperties for improved UI clarity * prompt tweaks * director chat prompt simplifications * more prompt fixes * Enhance type hints for duration conversion functions in time.py * narrate time action now has access to response length instructions * Add IsoDateDuration node for ISO 8601 interval string construction * Update advance_time method to include return type annotation and return message * Add AdvanceTime node to world state for time advancement with duration and narration instructions * linting * Add agent state exclusions to changelog with a TODO for module migration * Add message emission for actor, narrator, and scene analysis guidance in respective components. Enhance AgentMessages and SceneTools for better message handling and visual feedback. * Remove agent messages from state when opening agent message view in SceneTools component. * linting * openroute fetch models on key set * Add input history functionality to message input in TalemateApp component. Implement keyboard shortcuts for navigating history (Ctrl+Up/Down) and limit history to the last 10 messages. Update message sending logic to store messages in history. * Update message input hint in TalemateApp component to include keyboard shortcuts for navigating input history (Ctrl+Up/Down). * node updates * unified data extraction function * prompt tweaks * Add gamestate context support in BuildPrompt and corresponding template. Introduced new property for gamestate context and updated rendering logic to include gamestate information in prompts. * Refactor Prompt class by removing LoopedPrompt and cleaning up related methods. Update data response parsing to streamline functionality and improve clarity. Adjust imports accordingly. * Add 'data_multiple' property to GenerateResponse class to allow multiple data structures in responses. Update output socket type for 'data_obj' to support both dict and list formats. * Add DictUpdate node * Add UnpackGameState node to retrieve and unpack game state variables * gamestate nodes * linting * Enhance scene view toggle functionality to support shift-click behavior for closing all drawers when hiding the scene view. * immutable scenes should reset context db on load * linting * node updates * prompt tweaks * Add context type output and filtering for creative context ID meta entries in PathToContextID and ContextIDMetaEntries nodes * Add string replacement functionality and Jinja2 formatting support in nodes. Introduced 'old' and 'new' properties for substring replacement in the Replace node, and added a new Jinja2Format node for template rendering using jinja2. * Add additional outputs for context validation in ValidateContextIDItem node, including context type, context value, and name. * prompt tweaks * node adjustments * linting * Add data_expected attribute to Focal and Prompt classes for enhanced response handling * node updates * node updates * node updates * prompt tweaks * director summary return appropriately on no action taken * Enhance action handling in DirectorChatMixin by skipping actions when a question is present in the parsed response, ensuring better response accuracy. 
* Enhance ConfirmActionPrompt component by adding anchorTop prop for dynamic alignment and adjusting icon size and color for improved UI consistency. * anchor clear chat confirm to top * responsive layout fixes in template editors * linting * relock * Add scene progression guidance to chat-common-tasks template * Refactor push_history method to be asynchronous across multiple agents and scenes, ensuring consistent handling of message history updates. * Update chat instructions to clarify user intent considerations and enhance decisiveness in responses. Added guidance on distinguishing between scene progression and background changes, and refined analysis requirements for user interactions. * Enhance DirectorConsoleChatsToolbar by adding a usage cheat sheet tooltip for user guidance and refining the Clear Chat button's UI for better accessibility. * store character data at unified point * fix button * fix world editor auto sync * Shared context 2 (#19) Shared context * Refactor NodeEditorLibrary to improve search functionality and debounce input handling. Updated v-text-field model and added a watcher for search input to enhance performance. * Refactor NodeEditor and TalemateApp components to enhance UI interactions. Removed the exit creative mode button from NodeEditor and updated tooltips for clarity. Adjusted app bar navigation icons for better accessibility and added functionality to switch between node editor and creative mode. * comment * Character.update deserialize voice value correctly * Enhance SharedContext.update_to_scene method to properly add or update character data in the scene based on existence checks. This improves the synchronization of character states between shared context and scene. * shared context static history support fix context memory db imports to always import * Update WorldStateManagerSceneSharedContext.vue to clarify sharing of character, world entries, and history across connected scenes. * linting * Enhance chat modes by adding 'nospoilers' option to DirectorChat and related payloads. Update chat instructions to reflect new mode behavior and improve UI to support mode-specific icons and colors in the DirectorConsoleChatsToolbar. * Comment out 'repetition_penalty_range' in TabbyAPIClient to prevent unexpected "<unk><unk> .." responses. Further investigation needed. * linting * Add active_characters and intro_instructions to Inheritance model; implement intro generation in load_scene_from_data. Update WorldStateManagerSceneSharedContext.vue to enhance new scene creation dialog with character selection and premise instructions. * rename inheritance to scene initialization * linting * Update WorldStateManagerSceneSharedContext.vue to conditionally display alert based on scene saving status and new scene creation state. * Refine messages for shared context checkboxes in WorldStateManagerCharacter and WorldStateManagerWorldEntries components for clarity. * Add scene title generation to load process and update contextual generation template. Introduced a new method in AssistantMixin for generating scene titles, ensuring titles are concise and free of special characters. Updated load_scene_from_data to assign generated titles to scenes. * linting * Refactor GameState component to integrate Codemirror for JSON editing, replacing the previous treeview structure. Implement validation for JSON input and enhance error handling. Remove unused methods and streamline state management. * Add lastLoadedJSON property to GameState component for change detection. 
Update validation logic to prevent unnecessary updates when game state has not changed. * Remove status emission for gameplay switch in CmdSetEnvironmentToScene class. * allow individual sharing of attributes and details * linting * Remove redundant question handling logic in DirectorChatMixin to streamline action selection process. * Update EXTERNAL_DESCRIPTION in TabbyAPI client to include notes on EXL3 model sensitivity to inference parameters. Adjust handling of 'repetition_penalty_range' in parameter list for clarity. * director chat support remove message and regenerate message * Refactor ConfirmActionInline component to improve button rendering logic. Introduced 'size' prop for button customization and added 'comfortable' density option. Simplified icon handling with computed property for better clarity. * linting * node updates * Add appBusy prop to DirectorConsoleChats and DirectorConsoleChatsToolbar components to manage button states during busy operations. * Refactor DirectorChatMixin to utilize standalone utility functions for parsing response sections and extracting action blocks. This improves code clarity and maintainability. Added tests for new utility functions in test_utils_prompt.py to ensure correct functionality. * Update clear chat button logic to consider appBusy state in DirectorConsoleChatsToolbar component, enhancing user experience during busy operations. * linting * Remove plan.md * Add chat template identifier support and error handling in ModelPrompt class - Implemented logic to check for 'chat_template.jinja2' in Hugging Face repository. - Added new template identifiers: GraniteIdentifier and GLMIdentifier. - Enhanced error handling to avoid logging 404 errors for missing templates. - Introduced Granite.jinja2 template file for prompt structure. * node fixes * remove debug msg * Enhance error handling in DynamicInstruction class by enforcing header requirement and ensuring content defaults to an empty string if not provided. * recet scene message visibility on scene load * prompt tweaks * Enhance data extraction in Focal class by adding a fallback mechanism. Implemented additional error handling to attempt data extraction from a fenced block if the initial extraction fails, improving robustness in handling responses. * linting * node fixes * Add relative_to_root function for path resolution and update node export logic - Introduced a new function `relative_to_root` in path.py to resolve paths relative to the TALEMATE_ROOT. - Updated the `export_node_definitions` function in registry.py to use `relative_to_root` for module path resolution. - Added a check to skip non-selectable node definitions in litegraphUtils.js during registration. * show icons * Improve error handling in export_node_definitions by adding a try-except block for module path resolution. Log a warning if the relative path conversion fails. * typo * Refactor base_attributes type in Character model to a more generic dict type for improved flexibility * relock * ensure character gets added to character_data * prompt tweaks * linting * properly activate characters * activate needs to happen explicitly now and deactivated is the default * missing arg * avoid changed size error * Refactor character removal logic in shared context to prevent deletion; characters are now only marked as non-shared. 
* Add update_from_scene method calls in SharedContextMixin for scene synchronization * Add ensure_changelogs_for_all_scenes function to manage changelog files for all scenes; integrate it into the server run process. * Enhance backup restore functionality by adding base and latest snapshot options; improve UI with clearer labels and alerts for restore actions. * Update _apply_delta function to enhance delta application handling by adding parameters for error logging and force application of changes on non-existent paths. * Skip processing of changelog files in _list_files_and_directories function to prevent unnecessary inclusion in file listings. * Update IntroRecentScenes.vue to use optional chaining for selectedScene properties and enhance backup timestamp display with revision info. * linting * Refactor source entry attribute access in collect_source_entries function to use getattr for optional attributes, improving robustness. * Implement logic to always show scene view in scene mode within TalemateApp.vue, enhancing user experience during scene interactions. * prompt tweaks * prompt tweaks * Update TalemateApp.vue to set the active tab to 'main' when switching to the node editor, improving navigation consistency. * Add active frontend websocket handler management in websocket_endpoint * agent websocket handler node support * Refactor init_nodes method in DirectorAgent to call superclass method and rename chat initialization method in DirectorChatMixin for clarity. * Add characters output to ContextHistory node to track active participants in the scene * Add Agent Websocket Handler option to Node Editor Library with corresponding icons and labels * Add check for node selectability in NodeEditorNodeSearch component to filter search results accordingly. * Add SummarizeWebsocketHandler to handle summarize actions and integrate it into SummarizeAgent * nodes * Add data property to QueueResponse class for websocket communication and update run method to include action and data in output values. * Update manual context handling in WorldStateManager to include shared property from existing context * Enhance GetWorldEntry node to include 'shared' property in output values from world entry context * Update scene loading to allow setting scene ID from data and include ID in scene serialization * Update icon for AgentWebsocketHandler in NodeEditorLibrary component to mdi-web-box * Refactor WorldStateManager components to enhance history management and sharing capabilities. Added summarized history titles, improved UI for sharing static history, and integrated scene summarization functionality. Removed deprecated methods related to shared context settings. * linting * Change log level from warning to debug for migrate_narrator_source_to_meta error handling in NarratorMessage class. * Update GLM-no-reasoning template to include <think></think> tag before coercion message for improved prompt structure. * allow prompt templates to specify reasoning pattern * Add Seed.jinja2 template for LLM prompts with reasoning patterns and user interaction handling * Enhance NarratorAgent to support dynamic response length configuration. Updated max generation length from 192 to 256 tokens and introduced a new method to calculate response length. Modified narration methods to accept and utilize response length parameter. Added response length property in GenerateNarrationBase class and updated templates to include response length handling. 
* Update response length calculation in RevisionMixin to include token count for improved text processing. * Refactor response identifier in RevisionMixin to dynamically use calculated response length for improved prompt handling. * linting * allow contextual generation of static history entries * Add is_static property to HistoryEntry for static history entry identification * Add "static history" option to ContextualGenerate node for enhanced contextual generation capabilities. * Add CreateStaticArchiveEntry and RemoveStaticArchiveEntry nodes for managing static history entries. Implement input/output properties and error handling for entry creation and deletion. * nodes updated * linting * Add assets field to SceneInitialization model and update load_scene_from_data function to handle scene assets. Update WorldStateManagerSceneSharedContext.vue to include assets in scene initialization parameters. * Refactor CoverImage component to enhance drag-and-drop functionality and improve styling for empty portrait state. * Add intent_state to SceneInitialization model and update load_scene_from_data function to handle intent state. Introduce story_intent property in Scene class and reset method in SceneIntent class. Update WorldStateManagerSceneSharedContext.vue to include intent state in scene initialization parameters. * Refactor WorldStateManagerSceneSharedContext.vue to improve cancel functionality by introducing a dedicated cancelCreate method and removing the direct dialog toggle from the Cancel button. This enhances code clarity and maintainability. * Update SharedContext to use await for set_shared method, ensuring proper asynchronous handling when modifying character sharing status. * Add MAX_CONTENT_WIDTH constant and update components to use it for consistent max width styling * fix issue with data structure parsing * linting * fix tests * nodes * fix update_introduction * Add building blocks template for story configuration and scene management * Refactor toggleNavigation method to accept an 'open' parameter for direct control over drawer visibility in TalemateApp.vue * Update usageCheatSheet text in DirectorConsoleChatsToolbar.vue for clarity and add pre-wrap styling to tooltip * Add cover image and writing style sections to story and character templates; update chat common tasks with new scene restrictions and user guide reference. * linting * relock * Add EmitWorldEditorSync node to handle world editor synchronization; update WorldStateManager to refresh active tab on sync action. * Update Anthropic client with new models and adjust default settings; introduce limited parameter models for specific configurations. * director action module updates * direct context update fn * director action updates * Update usageCheatSheet in DirectorConsoleChatsToolbar.vue to include recommendation for 100B+ models. * Remove debug diagnostics from DirectorConsoleChats.vue to clean up console output. * Update card styles in IntroRecentScenes.vue for improved visual consistency; change card color to grey-darken-3 and adjust text classes for titles and subtitles. * Update EmitWorldEditorSync node to include websocket passthrough in sync action for improved event handling. * Increase maximum changelog file size limit from 500KB to 1MB to accommodate larger change logs. 
* linting * director action module updates * 0.33 added * Add Nexus agent persona to talemate template and initialize phrases array * Add support for project-specific grouping in NodeEditorLibrary for templates/modules, enhancing organization of node groups. * docs * Enhance NodeEditorLibrary by adding primary color to tree component for improved visibility and user experience. * docs * Enhance NewSceneSetupModal to include subtitles for writing styles and director personas, improving context and usability. * Update agent persona description in WorldStateManagerTemplates to specify current support for director only, enhancing clarity for users. * Refine agent persona description in WorldStateManagerTemplates to clarify assignment per agent in Scene Settings, maintaining focus on current director-only support. * fix crash when attempting to delete some clients * Add TODO comments in finalize_llama3 and finalize_YI methods to indicate removable cruft * Add lock_template feature to Client configuration and update related components for template management * linting * persist client template lock through model changes * There is no longer a point to enforcing creative mode when there are no characters * fix direct_narrator character argument * Update CharacterContextItem to allow 'value' to accept dict type in addition to existing types * docs * Update lock_template field in Client model to allow None type in addition to bool * Remove unused template_file field from Defaults model in Client configuration * Refactor lock_template field in Client model and ClientModal component to ensure consistent boolean handling * Add field validator for lock_template in Client model to ensure boolean value is returned * fix issue where valid data processed in extract_data_with_ai_fallback was not returned * Update default_player_character assignment in ConfigPlugin to use GamePlayerCharacter schema for improved data validation * linting * add heiku 4.5 model and make default * opse 4.5 isnt a thing * fix issue where fork / restore would restore duplicate messages * improve autocomplete handling when prefill isn't available * prompt tweaks * linting * gracefully handle removed attributes * Refactor scene reference handling in delete_changelog_files to prevent incorrect deletions. Added a test to verify proper scene reference construction and ensure changelog files are deleted correctly. * forked scenes reset memory id and are not immutable * emit_status export rev * Update RequestInput.vue to handle extra_params more robustly, ensuring defaults are set correctly for input. * only allow forking on saved messages * linting * tweak defaults * summarizer fire off of push_history.after * docs * : in world entry titles will now load correctly * linting * docs * removing base attrib ute or detail also clears it from shared list * fix issue where cancelling some generations would cause errors * increase font size * formatting fixes * unhandled errors at the loop level should not crash the entire scene * separate message processing from main loop * linting * remove debug cruft * enhance error logging in background processing to include traceback information * linting * nothing to detemrine of no model is sent * fix some errors during kcpp client deletion * improve configuration issue alert visibility * restore input focus after autocomplete * linting |
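The 0.33.0 entry above adds a Jinja2Format node for template rendering via jinja2. As a minimal sketch of what such a formatting step amounts to, the function name and wiring below are illustrative assumptions; only the jinja2 calls themselves are the library's real API.

```python
# Generic illustration of a "render a template string against variables" step.
# Not talemate's node implementation; only the jinja2 usage is standard.
from jinja2 import Environment, StrictUndefined

env = Environment(undefined=StrictUndefined)  # fail loudly on missing variables


def jinja2_format(template_str: str, variables: dict) -> str:
    """Render template_str with variables, mirroring a template-format node."""
    return env.from_string(template_str).render(**variables)


if __name__ == "__main__":
    print(jinja2_format(
        "{{ character }} enters the {{ location }}.",
        {"character": "Elmer", "location": "observatory"},
    ))  # -> Elmer enters the observatory.
```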
| 25e646c56a | 0.32.1 (#213): * GLM 4.5 templates * set 0.33 and relock * fix issues with character creation * relock * prompt tweaks * fix lmstudio * fix issue with npm on Windows failing on paths; set 0.32.1 * linting * update what's new * #214 (#215) * max-height and overflow * max-height and overflow * v-tabs to list and offset new scrollbar at the top so it doesn't overlap into the divider * tweaks * tweaks * prompt tweaks --------- Co-authored-by: Iceman Oakenbear <89090218+IcemanOakenbear@users.noreply.github.com> |
| ce4c302d73 | 0.32.0 (#208):
* separate other tts apis and improve chunking * move old tts config to voice agent config and implement config widget ux elements for table editing * elevenlabs updated to use their client and expose model selection * linting * separate character class into character.pt and start on voice routing * linting * tts hot swapping and chunking improvements * linting * add support for piper-tts * update gitignore * linting * support google tts fix issue where quick_toggle agent config didnt work on standard config items * linting * only show agent quick toggles if the agent is enabled * change elevenlabs to use a locally maintained voice list * tts generate before / after events * voice library refactor * linting * update openai model and voices * tweak configs * voice library ux * linting * add support for kokoro tts * fix add / remove voice * voice library tags * linting * linting * tts api status * api infos and add more kokoro voices * allow voice testing before saving a new voice * tweaks to voice library ux and some api info text * linting * voice mixer * polish * voice files go into /tts instead of templates/voice * change default narrator voice * xtts confirmation note * character voice select * koboldai format template * polish * skip empty chunks * change default voice * replace em-dash with normal dash * adjust limit * replace libebreaks * chunk cleanup for whitespace * info updated * remove invalid endif tag * sort voices by ready api * Character hashable type * clarify set_simulated_environment use to avoid unwanted character deactivated * allow manual generation of tts and fix assorted issues with tts * tts websocket handler router renamed * voice mixer: when there are only 2 voices auto adjust the other weight as needed * separate persist character functions into own mixin * auto assign voices * fix chara load and auto assign voice during chara load * smart speaker separation * tts speaker separation config * generate tts for intro text * fix prompting issues with anthropic, google and openrouter clients * decensor flag off again * only to ai assisted voice markup on narrator messages * openrouter provider configuration * linting * improved sound controls * add support for chatterbox * fix info * chatterbox dependencies * remove piper and xtts2 * linting * voice params * linting * tts model overrides and move tts info to tab * reorg toolbar * allow overriding of test text * more tts fixes, apply intensity, chatterbox voices * confirm voice delete * lintinG * groq updates * reorg decorators * tts fixes * cancelable audio queue * voice library uploads * scene voice library * Config refactor (#13) * config refactor progres * config nuke continues * fix system prompts * linting * client fun * client config refactor * fix kcpp auto embedding selection * linting * fix proxy config * remove cruft * fix remaining client bugs from config refactor always use get_config(), dont keep an instance reference * support for reasoning models * more reasoning tweaks * only allow one frontend to connect at a time * fix tests * relock * relock * more client adjustments * pattern prefill * some tts agent fixes * fix ai assist cond * tts nodes * fix config retrieval * assign voice node and fixes * sim suite char gen assign voice * fix voice assign template to consider used voices * get rid of auto break repetition which wasn't working right for a while anyhow * linting * generate tts node as string node * linting * voice change on character event * tweak chatterbox max length * koboldai default template 
* linting * fix saving of existing voice * relock * adjust params of eva default voice * f5tts support * f5tts samples * f5tts support * f5tts tweaks * chunk size per tts api and reorg defaul f5tts voices * chatterbox default voice reog to match f5-tts default voices * voice library ux polish pass * cleanup * f5-tts tweaks * missing samples * get rid of old save cmd * add chatterbox and f5tts * housekeeping * fix some issues with world entry editing * remove cruft * replace exclamation marks * fix save immutable check * fix replace_exclamation_marks * better error handling in websocket plugins and fix issue with saves * agent config save on dialog close * ctrl click to disable / enable agents * fix quick config * allow modifying response size of focal requests * sim suite set goal always sets story intent, encourage calling of set goal during simulation start * allow setting of model * voice param tweaks * tts tweaks * fix character card load * fix note_on_value * add mixed speaker_separation mode * indicate which message the audio is for and provide way to stop audio from the message * fix issue with some tts generation failing * linting * fix speaker separate modes * bad idea * linting * refactor speaker separation prompt * add kimi think pattern * fix issue with unwanted cover image replacemenT * no scene analysis for visual promp generation (for now) * linting * tts for context investigation messages * prompt tweaks * tweak intro * fix intro text tts not auto playing sometimes * consider narrator voice when assigning voice tro a character * allow director log messages to go only into the director console * linting * startup performance fixes * init time * linting * only show audio control for messagews taht can have it * always create story intent and dont override existing saves during character card load * fix history check in dynamic story line node add HasHistory node * linting * fix intro message not having speaker separation * voice library character manager * sequantial and cancelable auto assign all * linting * fix generation cancel handling * tooltips * fix auto assign voice from scene voices * polish * kokoro does not like lazy import * update info text * complete scene export / import * linting * wording * remove cruft * fix story intent generation during character card import * fix generation cancelled emit status inf loop * prompt tweak * reasoning quick toggle, reasoning token slider, tooltips * improved reasoning pattern handling * fix indirect coercion response parsing * fix streaming issue * response length instructions * more robust streaming * adjust default * adjust formatting * litning * remove debug output * director console log function calls * install cuda script updated * linting * add another step * adjust default * update dialogue examples * fix voice selection issues * what's happening here * third time's the charm? 
* Vite migration (#207) * add vite config * replace babel, webpack, vue-cli deps with vite, switch to esm modules, separate eslint config * change process.env to import.meta.env * update index.html for vite and move to root * update docs for vite * remove vue cli config * update example env with vite * bump frontend deps after rebase to 32.0 --------- Co-authored-by: pax-co <Pax_801@proton.me> * properly referencer data type * what's new * better indication of dialogue example supporting multiple lines, improve dialogue example display * fix potential issue with cached scene anlysis being reused when it shouldn't * fix character creation issues with player character toggle * fix issue where editing a message would sometimes lose parts of the message * fix slider ux thumb labels (vuetify update) * relock * narrative conversation format * remove planning step * linting * tweaks * don't overthink * update dialogue examples and intro * dont dictate response length instructions when data structures are expected * prompt tweaks * prompt tweaks * linting * fix edit message not handling : well * prompt tweaks * fix tests * fix manual revision when character message was generated in new narrative mode * fix issue with message editing * Docker packages relese (#204) * add CI workflow for Docker image build and MkDocs deployment * rename CI workflow from 'ci' to 'package' * refactor CI workflow: consolidate container build and documentation deployment into a single file * fix: correct indentation for permissions in CI workflow * fix: correct indentation for steps in deploy-docs job in CI workflow * build both cpu and cuda image * docs * docs * expose writing style during state reinforcement * prompt tweaks * test container build * test container image * update docker compose * docs * test-container-build * test container build * test container build * update docker build workflows * fix guidance prompt prefix not being dropped * mount tts dir * add gpt-5 * remove debug output * docs * openai auto toggle reasoning based on model selection * linting --------- Co-authored-by: pax-co <123330830+pax-co@users.noreply.github.com> Co-authored-by: pax-co <Pax_801@proton.me> Co-authored-by: Luis Alexandre Deschamps Brandão <brandao_luis@yahoo.com> |
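The 0.32.0 entry above describes TTS chunking improvements: skipping empty chunks, whitespace cleanup, replacing em-dashes with normal dashes, and replacing linebreaks. The sketch below is an assumption about what that kind of pre-processing looks like, not talemate's actual code.

```python
# Rough sketch of TTS pre-processing: normalize dashes/linebreaks, split into
# sentence-sized chunks under a length cap, and drop empty chunks.
import re


def chunk_for_tts(text: str, max_chars: int = 250) -> list[str]:
    text = text.replace("\u2014", "-").replace("\n", " ")  # em-dash, linebreaks
    text = re.sub(r"\s+", " ", text).strip()               # collapse whitespace
    sentences = re.split(r"(?<=[.!?])\s+", text)           # naive sentence split

    chunks, current = [], ""
    for sentence in sentences:
        if not sentence:
            continue  # skip empty chunks
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```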
| 61d01984ba | 0.30.0 (#184):
* pytorch update * github workflow for tests * tests set up config * tests download nltk punkt * punkt_tab * fix world state not updating on new initial scene load * fix new character creation from scene tools * py 312 py 313 support remove unreliant cuda detection and just lock poetry with cuda * fix tests * dont auto install cuda * remove unused StrEnum import * separate cuda install * fix auto progress OFF no longer working * fix debug logging options not sticking * disable revision during image prompt generation * prompt tweaks * prompt tweaks * fix some issues with directed character creation * tweak the continue generation button so its less prone to pop into a new line on itsown * fix context db filter field layout * handle error when trying to regnerate passthrough narrator message * prompt tweaks * increase auto direct evaluation length * fix node library on windows * auto direct eval tweaks * prompt tweaks * prompt tweaks * allow updationg of scene phase through director console * add generate action to director console phase intent text fields * prompt tweaks * track iteration count in scene loop always yield to user at initial start / load of a scene * fix issue with Split and Join nodes when passed \n as delimiter * sim suite only generate auto direct requirements if auto direct is enabled sim suite fix issues with title generation * autocomplete button disable until there is text to autcomplete * update installation docs * update scenario tools docs * docs * writing style phrase detection * typo * docs * fix issue where deleting an applied preset group would prevent selection of a different preset group in the affected client * fix @-Instruction is broken when using apostrophes * editor never attempt to fix exposition on user input if input starts with command characters @, ! or / * prompt tweaks * editor revision: automatic revision setting, prompt tweaks, docs * missing files * fix issue where narration responses starting with # would result in empty messages * prompt tweaks * fix issue with autocomplete not working at the beginning of a scene * fix issues where cached guidance would result in no guidance * editor revision analysis now has access to scene analysis and director guidance if it exists * fix issue where all nodes in the node editor would be locked on winsows OS * add `scene/SetIntroducation` node * fix issue where generating narration in a scene with zero characters would always come back blank * SceneIntroduction node state output fixed * node editor docs progress * fix issue with loading scene from file upload no longer working * better handling of what to do when there are no characters in a scene and no default character is defined * typo * silence trigger game loop debug message * docs * GenerateThematicList node * docs * docs * stubs * allow creation of module from existing nodes * move function into plugin * separate graph export functions into own .js file * group from selected nodes * remove debug output * tweak create module dialog * docs * docs * graph tests need to assume auto_progres True * add ModuleProperty node * fix some issues in the module creation modal when extending or copying a module * include module name in module deletion confirm dialog * fix node module copy not setting updated registry * module property name output * docs * docs * initializing a scene from a character card will no longer break the node editor * docs * when greeting text and intro do not match, do the greeting text first. 
* intro is set during card import, there is no need to ever emit character greetings in addition to the intro, its not really something that matches talemate's design philsophy at this point * docs * dynamic premise modules * tweaks * propagate module level property defaults to node * docs * fix issue where the default character would get added to scenes during load even though there already was a player character * prompt tweaks * tweaks to generate-premise module * docs * infinity quest dynamic story v2 * tweaks limits * fix line endings * prompt tweaks * fix some issues with node property editing * formatting * prompt tweaks * group and comment * add talemate tint node style fix gap in nodes when there are no sockets * node style fixes * docs * docs * icon for swtich nodes * conditional emit status node * don't reset dynamic premise * dynamic premise tweaks * dynamic premis status messages * fix issue with Input value error node * validate that topic is specified * fix issue where cancelling a generation triggered during scene loop init would cause the loop to reinitialize * docs * node error handling improvements * docs * better error handling in generate premise module * a connected socket should always override a property of the same value even if the socket connection is not resolved * dynamic premise expose intro length property * fix some issues with node module creation and add registry path validation * correctly title creative loop module so it can be copied * remove debug message * rename to dynamic storyline for clarity and so it doesn't collide with tutorial * import dynanimc storyline node * docs * gracefully handle a node missing during graph load * docs * make agent list a bit more compact * disable node property editing in inherited nodes * rename editor revision events so they are inline with the other agent events * support instruction injection for director.generate_choices action * normalize dynamic instructions * fix director guidance dynamic instructions * docs * generate choices event add choices property * prompt tweaks * add dynamic instruction node * prompt tweak * fix issue where some events could not be hooked into through event node modules * docs * clean response node * docs * docs * docs * module library tweaks * fix some issues with act-as selection * dont allow creation of new modules until scene has been saved at least once * public node modules dir * sim suite module styles * remove debug messages * fix default node titles * fix number input vlaidation in math nodes * context awareness in contextual generate now includes character info * fix dupe id warnings * alt drag to clone multiple nodes * alt drag to clone multiple nodes * docs * docs * fix issue where some scene modules could leak across scenes * dynamic instructions already included through extra context * prompt tweaks * update defaults * docs * make list init from json * socs * fix issue where not all field definitions would get sent * docs * fix issue causing deep analysis to loop * case insentive regex matching * prompt tweaks * fix node title * fix size issue in scene view when node editor was open on smaller resolutions * fix issue with autocomplete sometime loosing markup * add claude 4 * make director guidance and actor instructions available to autocomplete prompt * fix trim node handling of \n * extract node * extract node trim property * remover cruft * charactermessage node only set character if not already part of message prefix * editor revision unslop mode * fix search scenes 
showing node module files * prompt tweaks * unslop support unwanted phrase detection * define bad_prose * </FIX> seems to get ommitted a lot so lets handle it * cleanup * return on no issues * fix some issues with character creation * fix some character creation issues * prompt tweaks * contewxtual generate and autocomplete signals added * prefix revision_ * use uuidv4 which is already installed and doesnt come with compatibility issues * editor revision support for contextual generations normalize some event objects * add template picker to worldstate character creation interface * prompt tweaks * dont unslop scene intent or story intent generation * prompt tweaks * prompt tweaks * prompt tweaks * prompt tweaks * prompt tweaks * prompt tweaks * prompt tweaks * prompt tweaks * prompt tweaks * prompt tweaks * fix issue of conversation losing edits through signals * support revisions for world state entry generation * task_instructions * dont show rewrite specific options if unslop method is selected * docs * fix issue with setting locked save file when auto saving was turned on * don't trigger player_turn_start in creative mode * better check for when to fire player_turn_start and when not * node editor crash course fixes * docs * fix issue where sometimes wrong system prompt was selected * add world context to contextual generate node * fix node dragging issue when cloning multiple nodes * support editor revision for summarization generation * summarization support dynamic instructions * dedicated template for summary unslop * pass summarization_history to template vars * prompt tweaks * prompt tweaks * not sure how this got blanked * wording * wording * only display debug log if TALEMATE_DEBUG=1 * fix syntax * fix syntax * remove unused cruft * start-debug * log message cleanup * docs * tweak defaults * sim suite polish * remove unused images * add writing style instructions to revision prompts * missing endif tag * prompt tweaks * prompt tweaks * prompt tweaks * use app-busy * prompt tweaks * update readme and screenshots wording wording * add discord link |
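The 0.30.0 entry above mentions fixing the Split and Join nodes when passed \n as a delimiter. One plausible reading, which is an assumption and not confirmed by the log, is that the delimiter arrives from a UI text field as the literal two characters backslash and n rather than a real newline; a small normalization step illustrates the idea.

```python
# Illustrative fix for a delimiter typed as "\n" in a UI field: map the escape
# sequence to the real character before splitting/joining. Names are assumed.
_ESCAPES = {"\\n": "\n", "\\t": "\t", "\\r": "\r"}


def normalize_delimiter(delimiter: str) -> str:
    """Turn escape sequences typed into a UI field into their real characters."""
    return _ESCAPES.get(delimiter, delimiter)


def split_node(value: str, delimiter: str) -> list[str]:
    return value.split(normalize_delimiter(delimiter))


def join_node(values: list[str], delimiter: str) -> str:
    return normalize_delimiter(delimiter).join(values)


assert split_node("a\nb", "\\n") == ["a", "b"]
assert join_node(["a", "b"], "\\n") == "a\nb"
```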
| 113553c306 | 0.29.0 (#167):
* set 0.29.0 * tweaks for dig layered history (wip) * move director agent to directory * relock * remove "none" from dig_layered_history response * determine character development * update character sheet from character development (wip) * org imports * alert outdated template overrides during startup * editor controls normalization of exposition * dialogue formatting refactor * fix narrator.clean_result forcing * regardless of editor fix exposition setting * move more of the dialogue cleanup logic into the editor fix exposition handlers * remove cruft * change ot normal selects and add some margin * move formatting option up * always strip partial sentences * separates exposition fixes from other dialogue cleanup operations, since we still want those * add novel formatting style * honor formatting config when no markers are supplied * fix issue where sometimes character message formatting would miss character name * director can now guide actors through scene analysis * style fixes * typo * select correct system message on direction type * prompt tweaks * disable by default * add support for dynamic instruction injection and include missing guide for internal note usage * change favicon and also indicate business through favicon * img * support xtc, dry and smoothing in text gen webui * prompt tweaks * support xtc, dry, smoothing in koboldcpp client * reorder * dry, xtc and smoothing factor exposed to tabby api client * urls to third party API documentation * remove bos token * add missing preset * focal * focal progress * focal progress and generated suggestions progress * fix issue with discard all suggestions * apply suggestions * move suggestion ux into the world state manager * support generation options for suggestion generation * unused import * refactor focal to json based approach * focal and character suggestion tweaks * rmeove cruft * remove cruft * relock * prompt tweaks * layout spacing updates * ux elements for removal of scenes from quick load menu * context investigation refactor WIP * context investigation refactor * context investigation refactor * context investigation refactor * cleanup * move scene analysis to summarizer agent * remove deprecated context investigation logic * context investigation refactor continued - split into separate file for easier maint * allow direct specification of response context length * context investigation and scene analyzation progress * change analysis length config to number * remove old dig-layered-history templates * summarizer - deep analysis is only available if there is layered history * move world_state agent to dedicated directory * remove unused imports * automatic character progression WIP * character suggestions progress * app busy flag based on agent business * indicate suggestions in world state overview * fix issue with user input cleanup * move conversation agent to a dedicated submodule * Response in action analyze_text_and_extract_context is too short #162 * move narrator agent to its own submodule * narrator improvements WIP * narration improvements WIP * fix issue with regen of character exit narration * narration improvements WIP * prompt tweaks * last_message_of_type can set max iterations * fix multiline parsing * prompt tweaks * director guide actors based of scene analysis * director guidance for actors * prompt tweaks * prompt tweaks * prompt tweaks * fix automatic character proposals not propagating to the ux * fix analysis length * support director guidance in legacy chat format * typo * prompt tweaks * 
prompt tweaks * error handling * length config * prompt tweaks * typo * remove cruft * prompt tweak * prompt tweak * time passage style changes * remove cruft * deep analysis context investigations honor call limit * refactor conversation agent long term memory to use new memory rag mixin - also streamline prompts * tweaks to RAG mixin agent config * fix narration highlighting * context investgiation fixes director narration guidance summarization tweaks * direactor guide narration progress context investigation fixes that would cause looping of investigations and failure to dig into the correct layers * prompt tweaks * summarization improvements * separate deep analysis chapter selection from analysis into its own prompt * character entry and exit * cache analysis per subtype and some narrator prompt tweaks * separate layered history logic into its own summarizer mixin and expose some additional options * scene can now set an overral writing style using writing style templates narrator option to enable writing style * narrate query writing style support * scene tools - narrator actions refactor to handler and own component * narrator query / look at narrations emitted as context investigation messages refactor context investigation messaage display scene message meta data object * include narrative direction * improve context investigation message prompt insert * reorg supported parameters * fix bug when no message history exists * WIP make regenerate work nicely with director guidance * WIP make regenerate work nicely with director guidance * regenerate conversation fixes * help text * ux tweaks * relock * turn off deep analysis and context investigations by default * long term memory options for director and summarizer * long term memory caching * fix summarization cache toggle not showing up in ux * ux tweaks * layered history summarization includes character information for mentioned characters * deepseek client added * Add fork button to narrator message * analyze and guidance support for time passage narration * cache based on message fingerprint instead of id * configurable system prompts WIP * configurable system prompts WIP * client overrides for system prompts wired to ux * system prompt overhaul * fix issue with unknown system prompt kind * add button to manually request dynamic choices from the director move the generate choices logic of the director agent to its own submodule * remove cruft * 30 may be too long and is causing the client to disappear temporarly * suppoert dynamic choice generate for non player characters * enable `actor` tab for player characters * creator agent now has access to rag tools improve acting instruction generation * client timeout fixes * fix issue where scene removal menu stayed open after remove * expose scene restore functionality to ux * create initial restore point * fix creator extra-context template * didn't mean to remove this * intro scene should be edited through world editor * fix alert * fix partial quotes regardless of editor setting director guidance for conversation reminds to put speech in quotes * fix @ instructions not being passed through to director guidance prompt * anthropic mode list updated * default off * cohere model list updated * reset actAs on next scene load * prompt tweaks * prompt tweaks * prompt tweaks * prompt tweaks * prompt tweaks * remove debug cruft * relock * docs on changing host / port * fix issue with narrator / director actiosn not available on fresh install * fix issue with long content classification 
determination result * take this reminder to put speech into quotes out for now, it seems to do more harm than good * fix some remaining issues with auto expositon fixes * prompt tweaks * prompt tweaks * fix issue during reload * expensive and warning ux passthrough for agent config * layered sumamry analysation defaults to on * what's new info block added * docs * what's new updated * remove old images * old img cleanup script * prompt tweaks * improve auto prompt template detection via huggingface * add gpt-4o-realtime-preview add gpt-4o-mini-realtime-preview * add o1 and o3-mini * fix o1 and o3 * fix o1 and o3 * more o1 / o3 fixes * o3 fixes |
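The 0.29.0 entry above switches analysis caching from the message id to a message fingerprint, cached per subtype. A minimal sketch of the general idea follows, assuming the fingerprint is a content hash; the function names and cache shape are illustrative, not talemate's implementation.

```python
# Content-based cache key: edited or regenerated messages no longer hit stale
# cache entries keyed by id, because the text itself changes the fingerprint.
import hashlib

_analysis_cache: dict[str, str] = {}


def fingerprint(message_text: str, subtype: str = "") -> str:
    """Stable key derived from content and analysis subtype."""
    return hashlib.sha256(f"{subtype}:{message_text}".encode("utf-8")).hexdigest()


def cached_analysis(message_text: str, subtype: str, analyze) -> str:
    key = fingerprint(message_text, subtype)
    if key not in _analysis_cache:
        _analysis_cache[key] = analyze(message_text)
    return _analysis_cache[key]
```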
| 95a17197ba | 0.26.0 (#133):
* implement manually disabling and enabling clients * relock * fix warning spam * start moving stuff around * move more stuff * start separating world state manager into more managable submodules * character title * scroll home to top always * finish separating character state editor into components * fix defered nav to character sections * separate components for pin and contextdb managing * fix issue with context character filter search * fix world state manage ux state reset issues * wsm menu refactor allow updating character image from wsm cover image layout fixes * remove debug spam * fix client deletion / disabling rubber banding issue * deactivate / activate / delete characters through wsm * reload character instead * fix koboldcpp client jiggle arguments * save scene title * fix deferred nav * fix issue where blanking a character detail would bug out * some layout changes * character import copies cover image * remove debug message * character import via wsm * deactivate imported characters * images nav option placeholder * start move towards new world state templating system * prompt tweak * add templates/world-state/*.yaml * switch to new world state template system in manager * template editor progress * more wsm template changes * template applicator component * template applicate added to attributes and details * selective template application * fix issue with template editing * attribute and detail templates dont require instructions * adjust character attributes and details template applicator integration * add gpt-4o add gpt-4o-2024-05-13 * autocomplete prompt and postprocessing tweaks * prompt tweaks * fix issue where saving a new scene could cause recent config changes to revert * only download punkt if its not downloaded yet * working character attribute templates * character detail generate working move template generate logic to worldstate.templates * character creator first steps * support contextual generate when character doesn't exist * move talemate wsm templates to their own dir, add supports_spice and supports_style flags * wsm character creator progress * character creator progress * character creator progress and wire up image creation in character editor * templating progress * contextual generate generation options * ux tweaks * wirte up writing style and spice to generation * wire spice / writing style to detail generation * notify when spice is applied * tweaks to generation spice notifications * add some help / information to template editor * fix some issues with detail and attribute generation * some context gen tweaks * character gen tweaks * character color changer * link to templates form gen option ux * gen options for dialogue example genrate * ctrl click to max spice level * unify spice application notification into a component for reuse * improvements to example dialogue generation * some refinements to character editor * remove some old cruft from scene schema * wsm scene editor progress * relock * relock * debug message cleanup * fix issue with tab selection when loading a scene * scene editor progress * centralized generation options * pass generation settings through to character creator * save changes from wsm view * scene settings save copy * refactor world entry / states editor * fix issue with applying non-character world state templates * layout fixes * allow updating of scene cover image * move history manager to world editor * add phi-3 base template * dialogue cleanup improvements * refactor scoped game-engine api * separate legacy 
creator functions to own file * remove cruft * some cleanup and fixes * add photo style * remove noisy log message * better handling of active scene * some fixes to pin editor * don't enforce height * active scene context fixes * fix intro and scene description generration * tweak preset for scene direction and summarization tasks * ensure memory db is open * update frontend dependencies * update frontend dependencies * fix issue with prompt query_memory function returning None * typo * default world state templates * new scene creation fixes remove legacy creator ux * scene export * fix scene loading from upload * add claude 3.5 sonnet * fix automatic client selection when the current client is disabled * remove cruft * agent modal extended to support multiple config panels visual agent prompt prefixes and suffixes addeed * fix issue with world state template group saving * resolve attribute name issue `copy` * RequestInput: fix form validation and keystroke submit * support chara load from json files also refactor character loading to load.py * implement simple act-as feature using tab to cycle through active characters in the scene * docs progress * tts settings tweaks * fix issue with loading older talemate scenes * docs progress * fix issue with config validation on new installs * some tweaks for agent setting modals * default template changed to alpaca * docs dependencies * gemma2 template * nemotron4 template * docs * docs * docs * change prompt template section to autocomplete * fix agent config not loading for some agents * allow deletion of player character * fix some oddities with scene outline commit * automatically active player characters and create player characters with the correct actor class * also set the first npc created as immediately acitve * add has_active_npcs property and re-emit message history when scene outline is updated. * indicate when visualizer is busy in the scene tools * check for busy instead * prompt tweaks for movie script type dialogue format * gemma2 prompt fixed * scene message colors updated * act as narrator * move to _old * scene message appearance tweaks * fix rubberbanding when editing text field in agent configs * fix autocompletionm when acting as different character or narrator * disable autocomplete during command execution * remove autocomplete button from scene tools * docs * relock * docs * docs * improve context pins in dialogue context * better approximate token count * fix pin condition editing * fix issue where scene save as would lose long term memory entries * immediately clean message history when loading a new scene * docs * ensure intro text has formatting markers * narrator messages written by the player can now be deleted. * scene editor * move docs around * start character editor docs * more character editor docs: * fix some ux bugs * fix template group deletrion not removing the file * docs * typos * docs * relock * docs * notify image generation errors * linting * gh pages workflow * use poetry * dont use poetry * link to docs site * set site_url * add trailing slash * fix image paths * re-add tabbyai link * fix image generation error triggering incorrectly * fix intro formatting incosistencies * remove cruft * add time passed label to history view * date adjustments * tests * add gpt-4o-mini * fix links * remove hard ntlk requirement for voice generation chunking ntlk error handling fix typo * docs * fix issdue with dupe character card intro text * disable character forms while templates are being applied. 
* failure during context generate no longer locks ux * refactor client and agent status display in system bar * llama 3.1 8b claude * fix format * adjustments to automcomplete dialogue instructions * add mistral nemo * debug info * fix system agent status getting stuck * readme * readme * fix autocomplete responses when they are framed by quotes |
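The 0.26.0 entry above mentions a better approximate token count. The log does not show how it is computed; the heuristic below, blending character and word counts, is given purely as an assumption for illustration.

```python
# Common rough token estimate for English text; not talemate's actual method.
def approximate_token_count(text: str) -> int:
    by_chars = len(text) / 4             # ~4 characters per token
    by_words = len(text.split()) / 0.75  # ~0.75 words per token
    return int((by_chars + by_words) / 2)


print(approximate_token_count("The quick brown fox jumps over the lazy dog."))
```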
| ddfbd6891b | 0.25.5 (#121): * OpenAI compat client to /completions instead of chat/completions; OpenAI compat client passes frequency penalty * 0.25.5 * fix version * remove debug message * fix OpenAI compat client not saving coercion settings * OpenAI compatible client: when the API handles the prompt template, switch over to the chat/completions API * wording * Mistral std template * fix error when setting LLM prompt template if model name contained / * lock sentence-transformers to 2.2.2 since >=2.3.0 breaks Instructor model loading * support PNG tEXt * OpenAI compat client: fix repetition_penalty KeyError issue * presence_penalty is not equal to repetition_penalty and needs its own dedicated definition * round presence penalty randomization to one decimal place * fix filename * same fixes for presence_penalty ported to koboldcpp client * kcpp client: remove a1111 setup spam; kcpp client: fixes to presence_penalty jiggle * mistral.ai: default model 8x22b; mistral.ai: 7b and 8x7b taken out of JSON_OBJECT_RESPONSE_MODELS |
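The 0.25.5 entry above adds support for PNG tEXt, i.e. character cards whose metadata is embedded in a PNG text chunk. Card formats in the wild commonly store base64-encoded JSON under a "chara" keyword; that keyword and the Pillow-based reader below are assumptions for illustration, not talemate's exact loader.

```python
# Read character-card metadata from a PNG tEXt/iTXt chunk (assumed "chara" key).
import base64
import json

from PIL import Image


def read_character_card(path: str, keyword: str = "chara") -> dict | None:
    image = Image.open(path)
    # Pillow exposes PNG text chunks as a dict on PNG images.
    payload = getattr(image, "text", {}).get(keyword)
    if not payload:
        return None
    return json.loads(base64.b64decode(payload))
```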
| 83027b3a0f | 0.23.0 (#91): * dockerfiles and docker-compose * containerization fixes * docker instructions * readme * readme * don't mount src by default; readme * hf template determine fixes * auto determine prompt template * script to start talemate listening only on 127.0.0.1 * prompt tweaks * auto narrate round every 3 rounds * tweaks * Add return to start screen button * Only show return to start screen button if scene is active * improvements to character creation * dedicated property for scene title separate from the save directory name * filter out negations into negative keywords * increase auto narrate delay * add character portrait keyword * summarization should ignore most recent message, as it is often regenerated. * cohere client * specify python3 * improve viable runpod text gen detection * fix formatting in template preview * cohere command-r plus template that I am not sure is correct or not * mistral client set to decensor * fix issue with parsing JSON responses * command-r prompts updated * use official mistralai python client * send max_tokens * new input autocomplete functionality * prompt tweaks * llama 3 templates * add <|eot_id|> to stopping strings * prompt tweak * tooltip * llama-3 identifier * command-r and command-r plus prompt identifiers * text-gen-webui client tweaks to make llama3 EOS tokens work correctly * better llama-3 detection * better llama-3 finalizing of parameters * streamline client prompt finalizers; reduce Yi model smoothing factor from 0.3 to 0.1 for text-generation-webui client * relock * linting * set 0.23.0 * add new gpt-4 models * set 0.23.0 * add note about connecting to text-gen-webui from docker * fix openai image generation no longer working * default to concept_art |
||
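0.23.0 adds llama-3 templates, better llama-3 detection, and `<|eot_id|>` as a stopping string. The general shape of name-based template detection can be illustrated with a naive lookup; the mapping and defaults below are invented for illustration and are not the project's detection logic:

```python
def guess_prompt_template(model_name: str) -> dict:
    """Very naive model-name based template guess (illustrative only)."""
    name = model_name.lower()
    if "llama-3" in name or "llama3" in name:
        # llama-3 chat models end turns with <|eot_id|>, so it doubles
        # as a stopping string for completion-style APIs
        return {"template": "llama-3", "stop": ["<|eot_id|>"]}
    if "command-r-plus" in name:
        return {"template": "command-r-plus", "stop": []}
    if "command-r" in name:
        return {"template": "command-r", "stop": []}
    return {"template": "alpaca", "stop": []}
```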
|
|
ba64050eab |
0.22.0 (#89)
* linux dev instance shortcuts * add voice samples to gitignore * direction mode: inner monologue * actor direction fixes * py script support for scene logic * fix end_simulation call * port sim suite logic to python * remove dupe log * fix typing * section off the text * fix end simulation command * simulation goal, prompt tweaks * prompt tweaks * dialogue format improvements * director action logged with message * call director action log and other fixes * generate character dialogue instructions, prompt fixes, director action ux * fix question / answer call * generate dialogue instructions when loading from character cards * more dialogue format improvements * set scene content context more reliably. * fix inner monologue perspective * conversation prompt should honor the client's decensor setting * fix comfyui checkpoint list not loading * more dialogue format fixes * prompt tweaks * fix sim suite group characters, prompt fixes * npm relock * handle inanimate objects, handle player name change issues * don't rename details if the original name was "You" * As the conversation goes on, dialogue instructions should be moved backwards further to have a weaker effect on immediate generations. * add more context to character creation prompt * fix select next talking actor when natural language flow is turned on and the LLM returns multiple character names * prompt fixes for dialogue generation * summarization fixes * default to script format * separate dialogue prompt by formatting style, tweak conversation system prompt * remove cruft * add gen format to agent details * relock * relock * prep 0.22.0 * add claude-3-haiku-20240307 * readme |
||
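The 0.22.0 note about moving dialogue instructions further back as the conversation goes on describes a simple positional decay: the longer the history, the deeper the instructions are buried, so they bias the most recent turns less. A sketch of that idea with made-up offsets and function name:

```python
def insert_dialogue_instructions(history: list[str], instructions: str) -> list[str]:
    """Place instructions further back in a longer history (illustrative)."""
    # push the instructions back by roughly 10% of the message count,
    # clamped between 2 and 20 messages from the end
    offset = min(20, max(2, len(history) // 10))
    position = max(0, len(history) - offset)
    return history[:position] + [instructions] + history[position:]
```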
|
|
abdfb1abbf |
WIP: Prep 0.21.0 (#83)
* cleanup * refactor clean_dialogue * prompt fixes * prompt fixes * conversation format types - movie script and chat (legacy) * stopping strings updated * mistral.ai client * prompt tweaks * mistral client return token counts * anthropic client * archive history emits whole object so we can inspect time stamps * show timestamp in history dialog * openai compat fixes to stop trying to coerce openai url path schema and to never attempt to retrieve the model name automatically, hopefully improving compatibility with the various openai api implementations across the board * openai compat client let api control prompt template via config option * fix custom client configs and implement max backscroll * fix backscroll limit * remove debug message * prep 0.21.0 * include model name in prompt template selection label * use tabs for side nav in app config modal * readme / docs * fix issue where "No API key set" could be persisted as the selected model name to the config * deepinfra example * linting |
||
|
|
2f07248211 |
Prep 0.20.0 (#77)
* fix issue where recent save cover images would sometimes not load * paraphrase prompt tweaks * action_to_narration regenerate compatibility fixes * sim suite add answer question instruction * more sim suite tweaks * refactor agent details display in agent bar * visual agent progress (a1111 support) * visual gen prompt tweaks * openai compat client pass max_tokens * world state sequential reinforcement max tokens tightened * improve item names * Improve item names * attempt to remove "changed from.." notes when altering an existing character sheet * prompt improvements for single character portraits * visual agent progress * fix issue where character.update wouldn't update long-term memory * remove experimental flag for now * add better instructions for updating existing character sheet * background processing for agents, visual and tts * fix selected voice not saving between restarts for elevenlabs * lessen timeout * clean up agent status logic * conditional agent configs * comfyui support * visualization queue * refactor visual styles, comfyui progress * regen images auto cover image assign websocket handler plugin abstraction agent websocket handler * automatic1111 fixes agent status and ready checks * tweaks to character portrait prompt * system prompt for visualize * textgenwebui use temp smoothing on yi models * comment out api key for now * fixes issues with openai compat client for retaining api key and auto fixing urls * update_reinforcment tweaks * agent status emit from one place * emit agent status as asyncio task * remove debug output * tts add openai support * openai img gen support * fix issue with comfyui checkbox list not loading * tts model selection for openai * narrate_query include character sheet if character is referenced in query improve visual character portrait generation prompt * client implementation extra field support and runpod vllm client example * relock * fix issue where changing context length would cause next generation to error * visual agent tweaks and auto gen character cover image in sim suite * fix issue with readiness lock when there werent any clients defined * load scene readiness fixes * linting * docs * notes for the runpod vllm example |
||
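0.20.0 introduces background processing for the visual and TTS agents plus a visualization queue, i.e. image jobs are queued and rendered off the main request path. A minimal asyncio sketch of that pattern, with a sleep standing in for the real a1111/comfyui call (all names here are illustrative, not talemate's API):

```python
import asyncio

async def generate_image(prompt: str) -> None:
    # stand-in for a call to an image backend such as a1111 or comfyui
    await asyncio.sleep(0.1)

async def visualization_worker(queue: asyncio.Queue) -> None:
    while True:
        prompt = await queue.get()
        try:
            await generate_image(prompt)
        finally:
            queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(visualization_worker(queue))
    await queue.put("character portrait, concept art style")
    await queue.join()   # wait for queued renders without blocking other agents
    worker.cancel()

asyncio.run(main())
```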
|
|
303ec2a139 |
Prep 0.18.0 (#58)
* vuetify update recent saves * use placeholder instead of prefilling text * fix scene loading when no cover image is set * improve summarize and pin response quality * summarization use previous entries as informative context * fixes #49: auto save indicator misleading * regenerate with instructions * allow resetting of state reinforcement * creative tools: introduce new character creative tools: introduce passive character as active character * character creation adjustments * no longer needed * activate, deactivate characters (work in progress) * worldstate manager show inactive characters * allow setting of llm prompt template from ux reorganize llm prompt template directory for easier local overriding support a more sane way to write llm prompt templates * determine prompt template from huggingface * ignore user overrides * fix issue with removing narrator messages * summarization agent config for prev entry inclusion agent config attribute notes * client code clean up to allow modularity of clients + generic openai compatible api client * more client cleanup * remove debug msg, step size for ctx upped to 1024 * wip on stepped history summarization * summarization prompt fixes * include time message for history context pushed in scene.context_history * add / remove characters toggle narration of via ctrl * fix pydantic namespace warning fix client emit after reconfig * set memory ids on character detail entries * deal with chromadb race condition (maybe) * activate / deactivate characters from creative editor switch creative editor to edit characters through world state manager * set 0.18.0 * relock dependencies * openai client shortcut to set api key if not set * set error_action to null * if scene has just started provide intro for extra context in is_present and is_leaving queries * nice error if determine template via huggingface doesn't work * fix issue where regenerate would sometimes pick the wrong npc if there are multiple characters talking * add new openai models * default to gpt-4-turbo-preview |
||
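0.18.0's `determine prompt template from huggingface` can be approximated by fetching a model's `tokenizer_config.json` from the Hub and checking for a chat template. This is only an illustration using `huggingface_hub`, not the project's implementation:

```python
import json

from huggingface_hub import hf_hub_download

def fetch_chat_template(repo_id: str) -> str | None:
    """Return the model's chat_template string, or None if it has none."""
    path = hf_hub_download(repo_id=repo_id, filename="tokenizer_config.json")
    with open(path) as fp:
        return json.load(fp).get("chat_template")
```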
|
|
d768713630 |
Prep 0.17.0 (#48)
* improve windows install script to check for compatible python versions, also work with multi version python installs * bunch of llm prompt templates * first gamestate directing impl * lower similarity threshold when checking for repetition in llm responses * tweaks to narrate after dialog prompt tweaks to extract character sheet prompt * set_context cmd * Xwin MoE * thematic generator for randomized content stimuli * add a memory query to extract character sheet * direct-scene prompt tweaks * conversation prompt tweaks * inline character creation from gameplay instruction template expose thematic generator to prompt templates * Mixtral Synthia-MoE * display prompt and response side by side * improve ensure_dialogue_format * prompt tweaks * prevent double passive narration in one round improvements to persist character logic * SlimOrca OpenBuddy * prompt tweaks * runpod status check wrapped in asyncio * generate_json_list creator agent action * limit conversation retries to 2 fix issue where REPETITION signal trigger would get sent with the prompt * smaller agent tweaks * thematic generator personality list thematic generator generate from sets of lists * adjust tests * mistral prompt adjustment * director: update content context * prompt adjustments * nous-hermes-2-yi dolphin-2.2-yo dolphin-2.6-mixtral * status messages * determine character goals generate json lists * fix error when chromadb add was called before db was ready (wait until the db is fully initialized) * only strip extra spaces off of prompt textgenwebui: half temperature on -yi- models * prompt tweaks * more thematic generators * direct scene without character should just run the scene instructions if they exist * as_question_answer for query_scene * context_history revamp * Aurora-Nights MixtgralOrochi dolphin-2.7-mixtral nous-hermes-2-solar * remove old context_history calls * mv world_state.py to subdir FlatDolphinMaid Goliath Norobara Nous-Capybara * world state manager first progress * context db manager * fix issue with some clients not remembering context length settings after talemate restart * Sensualize-Solar * improve RAG prompt * conversation agent use [ as a stopping string since the new reinforcement messages use that * new method for RAG during conversation * mixtral_11bx2_moe * option to reset context db from manager ui * fix context db cleanup if scene is closed without saving * didnt mean to commit that * hide internal meta tags * keep track of manual context entries in scene save file so it can be rebuilt.
* auto save auto progress quick settings hotbar options * manual mode actor dialogue tools refactor toolbar * narrate directed progress reorganize narration tools into one cmd module * 0.17.0 * Mixtral_34Bx2 Sensualize-Mixtral openchat * fix save-as action * fix issue where too little context was joined in via RAG * context pins implementation * show active pins in world state component * pin condition eval and world state agent action config * Open_Gpt4 * summarization prompt improvements system prompt for summarization * guidance prompt for time passage narration * fix rerun for generic / unhandled messages * prompt fixes * summarization methods * prompt adjustments * world tools to hot bar ux tweaks * bagel-dpo * context state reinforcements support different insertion methods now (sequential, all context or conversation specific context) * first progress on world state reinforcement templating * Kunoichi * tweaks to update reinforcements prompt * world state templates progress * world state templates integration into main ux * fix issue where openai client wouldn't accept context length override * dont reconfigure client if no arguments are provided * pin condition prompt fixes world state apply template command label set * world information / lore entries and reinforcement * show world entry states reinforcers in ux * gitignore * dynamic scenario generation progress * dynamic scenario experiment * gitignore * need to emit world state even if we dont run it during scene init * summarize and pin action * poetry relock * template question / attribute cannot be empty * fix issue with summarize and pin not respecting selected line * keep reinforcement messages in history, but keep the same one from stacking up * narrate query prompt more natural sounding response * manage pins from world entry editor * pin_only tag * ts aware summarize and pin pin text rendered to context with time label context reuse session id (this fixes issue of editing context entry and not saving the scene causing removal of context entry next time scene is loaded) * UX to add character state from template within the worldstate manager UX * move divider * handle agent emit error fix issue with state reinforcer validation * layout fixes in world state character panel physical health template added to example config * fix pin_only undefined error in world entry editor * laser-dolphin Noromaid-v0.4-Mixtral-Instruct * show state templates for world and players in favorite list fix applying world state template * refresh world entry list on state creation * changing a state from non-sequential to sequential should queue it as due * quicksettings to bar * fix error during memory db delete * status messages during scene load * removing a sequential state reinforcement should remove the reinforcement messages * Nous-Hermes-2-Mixtral * fix sync issue when editing character details through contextdb * immutable save property * enable director * update example config * enable director when loading a scene file that has instructions * fix more openai client funkiness with context size and losing model * iq dyn scenario prompt fixes * delay client save so that dragging the ctx slider doesnt send off a million requests default openai ctx to 8k * input disabled while clients active * declare event * narrate query prompt tweaks * fixes to dialogue cleanup that would cause messages after : to be cut off.
* init git repo if not exist * pull current branch * add 12 hours as option * world-state persist deactivated * install npm packages * fix typo * prompt tweaks * new screenshots and features updated * update screenshot |
||
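0.17.0's thematic generator produces randomized content stimuli by drawing from sets of lists. A toy sketch of the concept; the lists and function name are invented, not the generator's real data:

```python
import random

THEME_SETS = {
    "mood": ["melancholy", "hopeful", "tense", "playful"],
    "setting": ["rain-soaked street", "abandoned station", "crowded market"],
}

def generate_theme(*set_names: str) -> str:
    """Pick one random entry from each named set and join them."""
    return ", ".join(random.choice(THEME_SETS[name]) for name in set_names)

print(generate_theme("mood", "setting"))  # e.g. "tense, crowded market"
```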
|
|
611f77a730 |
Prep 0.16.0 (#40)
* remove dbg message * more work to make clients and agents modular allow conversation and narrator to attempt to auto break AI repetition * application settings refactor setup third party api keys through application settings * runpod docs * fix wording * docs * improvements to auto-break-repetition functionality * more auto-break-repetition improvements * some cleanup to narrate on dialogue chance calculations * changing api keys via ux should now reflect to ux instantly. * memory agent / chromadb agent - wrap blocking function calls in asyncio * clean up narrate progression prompt and function * turn off dedupe debug message for now * encourage the AI to break repetition as well * indicate if the current model is missing an LLM prompt template add prompt template to client modal fix a bunch of bad vue code * only show llm prompt when editing client * OpenHermes-2.5-neural-chat RpBird-Yi-34B * fix bug with auto rep break when no repetition was found * allow giving extra instructions to narrator agent * emit agents as needed, not constantly * fix a bunch of vue alerts * fix request-client-status event * remove undefined reference * log client / status emit * worldstate component track scene time * Tess Noromaid * fix narrate-character prompt context length overflow issues * disable worldstate refresh button while waiting for response * history timestamp moved to tooltip off of history button * fixes #39: using openai embeddings for chromadb tends to error * adjust conversation agent default instructions * poetry lock * remove debug message * chromadb - agent status error if openai embeddings are selected and api key isn't set * prep 0.16.0 |
||
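0.16.0's auto-break-repetition hinges on detecting when a new line is a near-duplicate of recent output. A sketch of such a check using plain string similarity; the threshold and window size are made-up values and the agent's actual comparison may differ:

```python
from difflib import SequenceMatcher

def is_repetition(candidate: str, recent: list[str], threshold: float = 0.85) -> bool:
    """True if the candidate closely matches any of the last few lines."""
    return any(
        SequenceMatcher(None, candidate.lower(), line.lower()).ratio() >= threshold
        for line in recent[-5:]
    )
```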
|
|
0738899ac9 |
Prep 0.15.0 (#38)
* send one request for assign all clients * tweak narrate-after-dialogue prompt * elevenlabs default to turbo model and make model id configurable * improve add client dialogue to be more robust * prompt for default character creation on character card loads * rename to model as to not conflict with pydantic * narrate after dialogue strip dialogue generation unless enabled via new option * starling and capybara-tess * narrate dialogue context increased * relabel tts agent to Voice, show agent label in status bar * dont expect LLM to handle * and " - most of them are not stable / consistent enough with it * starling template updated * if allow dialogue in narration is disabled just assume the entire string is a narration * reorganize the narrate after dialogue template * fix more issues with time passage calculations * move punkt download to agent init and silence * improved RAG during conversation if AI selected is enabled in conversation agent * prompt tweaks * deepseek, chromomaid-storytelling * relock * narrate-after-dialogue prompt tweaks * runpod status queries every 15 secs instead of 60 * default player character prompting when loading character card from talemate storage * better chunking during split tts generation * tweak narrate progress prompt * improvements to ensure_dialogue_format and tests * to pytest * prep 0.15.0 * update packages * dialogue cleanup fixes * fix openai default model name fix not being able to edit client due to name check * free form analyst was using wrong system prompt causing gpt-4 to actually generate json responses |
||
|
|
496eb469db |
Prep 0.14.0 (#34)
* tts agent first progress * coqui support voice lists * orca-2 * tts tweaks * switch to ux for audio gen * some tweaks for the new audio queue * fix error handling if llm fails to create a good world state on initial scene load * loading creative mode for a new scene will now ask for confirmation if the current scene has unsaved progress * local tts support * fix voice list reloading when switching tts api fix agent config ux to auto save on change, remove save / close buttons * only do a delayed save on agent config on text input changes * OrionStar * dont allow scene loading when llm agents arent correctly configured * wire summarization to game loop, summarizer agent configs * fix issues with time passage * editor fix narrator messages * 0.14.0 * poetry lock * requires_llm_client moved to cls property * add additional config stubs * tts still load voices even if the agent is disabled * fix bug that would keep losing voice selection for tts agent after backend restart * update tts install requirements * remove debug output |
||
|
|
d7e72d27c5 |
Prep 0.13.0 (#28)
* requirements.txt file * windows installs from requirements.txt because of silly permission issues * relock * narrator - narrate on dialogue agent actions * add support for new textgenwebui api * world state auto regen trigger off of gameloop * function !rename command * ensure_dialog_format error handling * Cat, Nous-Capybara, dolphin-2.2.1 * narrate after dialog rerun fixes, template fixes * LMStudio client (experimental) * dolphin yi * refactor client base * cruft * openai client to new base * more client refactor fixes * tweak context retrieval prompts * adjust nous capybara template * add Tess-Medium * 0.13.0 * switch back to poetry for windows as well * error on legacy textgenwebui api * runpod text gen api url fixed * fix windows install script * add fllow instruction template * Psyfighter2 |
||
|
|
72202dee02 |
Prep 0.12.0 (#26)
* no " or * just treat as spoken words * chromadb persist to db * collection name should contain embedding so switching between chromadb configurations doesn't brick your scenes * fix save-as long term memory transfer * add chroma * director agent refactor * tweak director command, prompt reset, ux display * tweak director message ux * allow clearing of prompt log * remove auto adding of quotes if neither quote nor * are present * command to reset long term memory for the scene * improve summarization template as it would cause some llms to add extra details * rebuilding history will now also rebuild long term memory * direct scene template * fix scene time reset * dialogue template tweaks * better dialog format fixing * some dialogue template adjustments * adjust default values of director agent * keep track of scene saved/unsaved status and confirm loading a different scene if current scene is unsaved * prompt fixes * remove the collection on recommitting the scene to memory, as the embeddings may have changed * change to the official python api for the openai client and make it async * prompt tweaks * world state prompt parsing fixes * improve handling of json responses * 0 seconds ago changed to moments ago * move memory context closer to scene * token counts for openai client * narrator agent option: narrate passage of time * gitignore * remove memory id * refactor world state with persistence to chromadb (wip) * remove world state update instructions * dont display blank emotion in world state * openai gpt-4 turbo support * conversation agent extra instructions * track prompt response times * Yi and UtopiaXL * long term memory retrieval improvements during conversations * narrate scene tweaks * conversation ltm augment tweaks * hide subconfig if parent config isnt enabled * ai assisted memory recall during conversation default to off * openai json_object coercion only on model that supports it openai client emit prompt processing time * 0.12.0 * remove prompt number from prompt debug list * add prompt number back in but shift it to the upper row * narrate time passage hard content limit restriction for now as gpt-4 would just write a whole chapter. * relock |
||
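0.12.0 notes that the collection name should contain the embedding so that switching chromadb configurations doesn't brick existing scenes: each embedding configuration gets its own collection instead of mixing incompatible vectors. A small sketch of that naming idea (the scheme shown is an example, not the project's):

```python
import hashlib

def collection_name(scene_id: str, embedding_model: str) -> str:
    """Derive a collection name that changes with the embedding config."""
    suffix = hashlib.sha1(embedding_model.encode("utf-8")).hexdigest()[:8]
    return f"{scene_id}-{suffix}"
```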
|
|
e6b21789d1 |
Prep 0.11.0 (#19)
* dolphin mistral template * remove trailing \n before attaching the model response * improve prompt and validator for generated human age * fix issue where errors during character creation process would not be communicated to the ux and the character creator would appear stuck * add dolphin mistral to list * add talemate_env * poetry relock * add json schema for talemate scene files * fix issues with pydantic after version upgrade * add json extract util functions * fix pydantic model * use extract json function * scene generator, better scene name prompt * OpenHermes-2-Mistral * alpaca base template Amethyst 20B template * character description is no longer part of the sheet and needs to be added separately * fix pydantic validation * fix issue where sometimes partial emote strings were kept at the end of dialogue * no need to commit character name to memory * dedupe prompts * clean up extra linebreaks in prompts * experimental editor agent agent signals first progress * take out hardcoded example * amethyst llm prompt template * editor agent disableable agent edit modal tweaks * world state agent agent action config schema * director agent disableable remove automatic actions config from ux (deprecated) * fix responsive update when toggling enable on or off in agent dialog * prompt adjustments fix divine intellect preset (mirostat values were way off) fix world state regenerating every turn regardless of setting * move templates for world state from summarizer to worldstate agent * conversation agent generation length setting * conversation agent jiggle attribute (randomize offset to certain inference parameters) * relabel * scene cover image set to cover as much space as it can * add character sheet to dialogue example generate prompt * character creator agent mixin use set_processing * add <|im_end|> to stopping strings * add random number gen to template functions * SynthIA and Tiefighter * create new persisted characters out of world state natural flow option for conversation agent to help guide multi character conversations * conversation agent natural flow improvements * fix bug with 1h time passage option * some templates * poetry relock * fix config validation * fix issues when determining scene history context length to stay within budget * fixes to world state json parsing fixes to conversation context length * remove unused import * update windows install scripts * zephyr * </s> stopping string * dialog cleanup utils improved * add agents and clients key to the config example |
||
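0.11.0 adds JSON extraction utilities for model output. A rough sketch of pulling the first balanced JSON object out of a free-form LLM response; it does not account for braces inside string values, so treat it as illustrative rather than as the project's util:

```python
import json

def extract_json(text: str) -> dict:
    """Parse the first {...} block found in the text (naive brace matching)."""
    start = text.index("{")
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(text[start : i + 1])
    raise ValueError("no balanced JSON object found in response")
```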
|
|
73240b5791 |
Prep 0.10.0 (#12)
* track time passage in scene using iso 8601 format * chromadb openai instructions model recommendations updated * time context passed to long term memory * add some pre-established history for testing purposes * time passage analyze dialogue to template query_text template function analyze text and answer question summarizer function llm prompt template adjustments iso8601 time utils chromadb docs adjustments * didnt mean to remove this * fix ClientContext stacking * conversation cleanup tweaks * prompt prepared response padding * fix some bugs causing conversation lines containing : to be terminated early * fixes issue with chara importing dialogue examples as a huge blob instead of splitting into lines dialogue example in conversation template randomized * llm prompt template for Speechless-Llama2-Hermes-Orca-Platypus-WizardLM * version to 0.10.0 |
||
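0.10.0 starts tracking time passage in the scene as ISO 8601. One way to accumulate elapsed scene time as ISO 8601 durations is shown below, assuming the `isodate` package; talemate's own time utils may work differently:

```python
import isodate

def advance_scene_time(elapsed: str, duration: str) -> str:
    """Add an ISO 8601 duration (e.g. 'PT8H') to the elapsed scene time."""
    total = isodate.parse_duration(elapsed) + isodate.parse_duration(duration)
    return isodate.duration_isoformat(total)

# advance_scene_time("PT0S", "PT8H") -> an ISO 8601 duration equal to 8 hours
```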
|
|
44a91094e6 |
0.9.0 (#3)
* fixes #2: character creator description generation will not honor changes to the content context * decrease output of base attribute generation from 2-3 sentences to 1-2 sentences * conversation agent tweaks set other character names as stopping strings via client context * xwin llm template * conversation template tweaks * fixes #6: agent busy status not always reflected in ux * conversation min response length requirement reduced include character base details with conversation prompt * fixes #4: Prompt log * reset prompt log on scene load openai tokens as ? for now * version to 0.9.0 |
||
|
|
6d93b041c5 | initial commit |