* add gpt-4o
add gpt-4o-2024-05-13
* fix koboldcpp client jiggle arguments
* kcpp api url default port 5001
* fix repetition breaking issues with kcpp client
* use tokencount endpoint if available
* auto configure visual agent with koboldcpp
* env var config for frontend serve
* it's not clear that gpt-4o is better than turbo, don't default to it yet
* 0.25.3
* handle kcpp being down during a1111 setup check
* only check a1111 setup if client is connected
* fix kcpp a1111 setup check
* fix issue where saving a new scene could cause recent config changes to revert
* fix typo
* possible fix for openai compat config save issue
* fix api_handles_prompt_template no longer saving changes after last fix
* koboldcpp client
* default to kobold ai api
* linting
* conversation cleanup tweak
* 0.25.2
* allowed hosts to all on dev instance
* ensure numbers on parameters when sending edited values
* fix prompt parameter issues
* remove debug message
KoboldCpp is a great fit for TaleMate: it supports fast local generation across a variety of machines, including the cloud, and is compatible with both text and image generation through the OpenAI API and the A1111 API.
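A minimal sketch of talking to a local KoboldCpp instance on its default port (5001) and using the extra token-count endpoint when available; the endpoint path and response field are assumptions based on KoboldCpp's extra API and may differ between versions.

```python
import requests

KCPP_URL = "http://localhost:5001"  # KoboldCpp default port

def count_tokens(text: str) -> int | None:
    """Ask KoboldCpp to count tokens; return None if the extra
    endpoint is unavailable (older builds), so callers can fall
    back to a rough estimate."""
    try:
        # assumed endpoint and response shape - verify against your KoboldCpp version
        resp = requests.post(
            f"{KCPP_URL}/api/extra/tokencount",
            json={"prompt": text},
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json().get("value")
    except requests.RequestException:
        return None
```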
* flip title and name in recent scenes
* fix issue where a message could not be regenerated after applying continuity error fixes
* prompt tweaks
* allow json parameters for commands
* autocomplete improvements
* dialogue cleanup fixes
* fix issue with narrate after dialogue and llama3 (and other models that don't have a line break after the user prompt in their prompt template)
* expose ability to auto generate dialogue instructions to wsm character ux
* use b64_json response type
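For context, `b64_json` is the OpenAI images response format that returns the image bytes inline instead of a temporary URL to download; a hedged sketch, presumably relevant to the OpenAI image generation path (model name and size are placeholders):

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",               # placeholder model
    prompt="a character portrait",
    size="1024x1024",
    response_format="b64_json",     # image comes back inline, no separate download
)
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("portrait.png", "wb") as fh:
    fh.write(image_bytes)
```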
* move tag checks up so they match first
* fix typo
* prompt tweak
* api key support
* prompt tweaks
* editable parameters in prompt debugger / tester
* allow resetting of prompt params
* codemirror for prompt editor
* prompt tweaks
* more prompt debug tool tweaks
* some extra control for `context_history`
* new analytical preset (testing)
* add `join` and `llm_can_be_coerced` to jinja env
* support factual list summaries
* prompt tweaks to continuity check and fix
* new summarization method `facts` exposed to ux
* clamp mistral ai temperature according to their new requirements
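The clamping itself is trivial; a sketch assuming the allowed range is 0.0-1.0 (check Mistral's current API docs for the exact bounds):

```python
def clamp_temperature(value: float, low: float = 0.0, high: float = 1.0) -> float:
    """Clamp a sampling temperature into the range the API accepts."""
    return max(low, min(high, value))

# a preset temperature of 1.3 would be sent as 1.0
assert clamp_temperature(1.3) == 1.0
```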
* prompt tweaks
* better parsing of fixed dialogue response
* prompt tweaks
* fix intermittent empty meta issue
* history regen status progression and small ux tweaks
* summary entries should always be condensed
* google gemini support
* relock to install google-cloud-aiplatform for vertex ai inference
* fix instruction link
* better error handling of google safety validation and allow disabling of safety validation
* docs
* clarify credentials path requirements
* tweak error line identification
* handle quota limit error
* autocomplete ux wired to assistant plugin instead of command
* autocomplete narrative editing and fixes to autocomplete during dialog edit
* main input autocomplete tweaks
* allow new lines in main input
* 0.25.0 and relock
* fix issue with autocomplete elsewhere locking out main input
* better way to determine remote service
* prompt tweak
* fix rubberbanding issue when editing character attributes
* add open mistral 8x22
* fix continuity error check summary inclusion of target entry
* docs
* default context length to 8192
* linting
* groq client
* adjust max token length
* more openai image download fixes
* graphic novel style
* dialogue cleanup
* fix issue where auto-break repetition would trigger on empty responses
* reduce default convo retries to 1
* prompt tweaks
* fix some clients not handling autocomplete well
* screenplay dialogue generation tweaks
* message flags
* better cleanup of redundant change_ai_character calls
* super experimental continuity error fix mode for editor agent
* clamp temperature
* tweaks to continuity error fixing and expose to ux
* expose to ux
* allow CmdFixContinuityErrors to work even if editor has check_continuity_errors disabled
* prompt tweak
* support --endofline-- as well
* double coercion client option added
* fix issue with double coercion inserting "None" if not set
* client ux refactor to make room for coercion config
* rest of -- can be treated as *
* disable double coercion when json coercion is active since it kills accuracy
* prompt tweaks
* prompt tweaks
* show coercion status in client list
* change preset for edit_fix_continuity
* interim commit of continuity error handling progress
* tag based presets
* special tokens to keep trailing whitespace if needed
* fix continuity errors finalized for now
* change double coercion formatting
* 0.24.0 and relock
* add groq and cohere to supported services
* linting
* dockerfiles and docker-compose
* containerization fixes
* docker instructions
* readme
* readme
* don't mount src by default, readme
* hf template determine fixes
* auto determine prompt template
* script to start talemate listening only to 127.0.0.1
* prompt tweaks
* auto narrate round every 3 rounds
* tweaks
* Add return to startscreen button
* Only show return to start screen button if scene is active
* improvements to character creation
* dedicated property for scene title separate from the save directory name
* filter out negations into negative keywords
* increase auto narrate delay
* add character portrait keyword
* summarization should ignore most recent message, as it is often regenerated.
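Roughly the idea, with hypothetical names (not the actual implementation): drop the newest message before building the summarization window, since it is the one most likely to be regenerated and would otherwise churn the summary.

```python
def summarization_window(messages: list[str], budget: int) -> list[str]:
    """Slice of history to summarize, excluding the newest message."""
    stable = messages[:-1] if messages else []
    return stable[-budget:]
```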
* cohere client
* specify python3
* improve viable runpod text gen detection
* fix formatting in template preview
* cohere command-r plus template (not yet verified as correct)
* mistral client set to decensor
* fix issue with parsing json responses
* command-r prompts updated
* use official mistralai python client
* send max_tokens
* new input autocomplete functionality
* prompt tweaks
* llama 3 templates
* add <|eot_id|> to stopping strings
* prompt tweak
* tooltip
* llama-3 identifier
* command-r and command-r plus prompt identifiers
* text-gen-webui client tweaks to make llama3 eos tokens work correctly
* better llama-3 detection
* better llama-3 finalizing of parameters
* streamline client prompt finalizers
reduce Yi model smoothing factor from 0.3 to 0.1 for text-generation-webui client
* relock
* linting
* set 0.23.0
* add new gpt-4 models
* set 0.23.0
* add note about connecting to text-gen-webui from docker
* fix openai image generation no longer working
* default to concept_art
* linux dev instance shortcuts
* add voice samples to gitignore
* direction mode: inner monologue
* actor direction fixes
* py script support for scene logic
* fix end_simulation call
* port sim suite logic to python
* remove dupe log
* fix typing
* section off the text
* fix end simulation command
* simulation goal, prompt tweaks
* prompt tweaks
* dialogue format improvements
* director action logged with message
* call director action log and other fixes
* generate character dialogue instructions, prompt fixes, director action ux
* fix question / answer call
* generate dialogue instructions when loading from character cards
* more dialogue format improvements
* set scene content context more reliably.
* fix innermonologue perspective
* conversation prompt should honor the client's decensor setting
* fix comfyui checkpoint list not loading
* more dialogue format fixes
* prompt tweaks
* fix sim suite group characters, prompt fixes
* npm relock
* handle inanimate objects, handle player name change issues
* don't rename details if the original name was "You"
* As the conversation goes on, dialogue instructions are moved further back in the context so they have a weaker effect on immediate generations.
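One way to express that decay, with hypothetical names and values (not the actual implementation): push the instruction block further back in the prompt as the scene accumulates messages, so early on it sits close to the generation point and later it mostly provides background.

```python
def instruction_offset(message_count: int, step: int = 10, max_offset: int = 50) -> int:
    """How many messages back from the end to insert the dialogue
    instructions; grows with scene length, capped at max_offset."""
    return min(message_count // step, max_offset)

def insert_instructions(history: list[str], instructions: str) -> list[str]:
    offset = instruction_offset(len(history))
    index = max(0, len(history) - offset)
    return history[:index] + [instructions] + history[index:]
```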
* add more context to character creation prompt
* fix select next talking actor when natural language flow is turned on and the LLM returns multiple character names
* prompt fixes for dialogue generation
* summarization fixes
* default to script format
* separate dialogue prompt by formatting style, tweak conversation system prompt
* remove cruft
* add gen format to agent details
* relock
* relock
* prep 0.22.0
* add claude-3-haiku-20240307
* readme
* cleanup
* refactor clean_dialogue
* prompt fixes
* prompt fixes
* conversation format types - movie script and chat (legacy)
* stopping strings updated
* mistral.ai client
* prompt tweaks
* mistral client return token counts
* anthropic client
* archive history emits whole object so we can inspect time stamps
* show timestamp in history dialog
* openai compat fixes: stop trying to coerce the OpenAI URL path schema and never attempt to retrieve the model name automatically, hopefully improving compatibility with the various OpenAI API implementations across the board
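In practice that means using the configured base URL as-is and whatever model name the user supplied, rather than probing the server; a sketch using the official `openai` client, which accepts an arbitrary `base_url` (URL and model name below are placeholders):

```python
from openai import AsyncOpenAI

# the URL is used exactly as configured - no path rewriting or coercion
client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

async def generate(prompt: str, model_name: str) -> str:
    # the model name comes from the client config; it is never fetched from
    # the server, since many compatible backends don't implement /models
    response = await client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=256,
    )
    return response.choices[0].message.content
```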
* openai compat client let api control prompt template via config option
* fix custom client configs and implement max backscroll
* fix backscroll limit
* remove debug message
* prep 0.21.0
* include model name in prompt template selection label
* use tabs for side nav in app config modal
* readme / docs
* fix issue where "No API key set" could be persisted as the selected model name to the config
* deepinfra example
* linting
* fix issue where recent save cover images would sometimes not load
* paraphrase prompt tweaks
* action_to_narration regenerate compatibility fixes
* sim suite add answer question instruction
* more sim suite tweaks
* refactor agent details display in agent bar
* visual agent progress (a1111 support)
* visual gen prompt tweaks
* openai compat client pass max_tokens
* world state sequential reinforcement max tokens tightened
* improve item names
* Improve item names
* attempt to remove "changed from.." notes when altering an existing character sheet
* prompt improvements for single character portraits
* visual agent progress
* fix issue where character.update wouldn't update long-term memory
* remove experimental flag for now
* add better instructions for updating existing character sheet
* background processing for agents, visual and tts
* fix selected voice not saving between restarts for elevenlabs
* lessen timeout
* clean up agent status logic
* conditional agent configs
* comfyui support
* visualization queue
* refactor visual styles, comfyui progress
* regen images
auto cover image assign
websocket handler plugin abstraction
agent websocket handler
* automatic1111 fixes
agent status and ready checks
* tweaks to character portrait prompt
* system prompt for visualize
* textgenwebui use temp smoothing on yi models
* comment out api key for now
* fixes issues with openai compat client for retaining api key and auto fixing urls
* update_reinforcment tweaks
* agent status emit from one place
* emit agent status as asyncio task
* remove debug output
* tts add openai support
* openai img gen support
* fix issue with comfyui checkbox list not loading
* tts model selection for openai
* narrate_query include character sheet if character is referenced in query
improve visual character portrait generation prompt
* client implementation extra field support and runpod vllm client example
* relock
* fix issue where changing context length would cause next generation to error
* visual agent tweaks and auto gen character cover image in sim suite
* fix issue with readiness lock when there weren't any clients defined
* load scene readiness fixes
* linting
* docs
* notes for the runpod vllm example
* linting
* improve prompt devtools: test changes, show more information
* some more polish for the new prompt devtools
* up default conversation gen length to 128
* openai client tweaks, talemate sets max_tokens on gpt-3.5 generations
* support new openai embeddings (and default to text-embedding-3-small)
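For reference, the newer embedding model is called like any other OpenAI embedding; only the model name changes (the input text below is just an example):

```python
from openai import OpenAI

client = OpenAI()
result = client.embeddings.create(
    model="text-embedding-3-small",
    input=["The innkeeper eyes the party suspiciously."],
)
vector = result.data[0].embedding  # 1536 dimensions by default
```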
* ux polish for character sheet and character state ux
* actor instructions
* experiment using # for context / instructions
* fix bug where regenerating history would mess up time stamps
* remove trailing ]
* prevent client ctx from being unset
* fix issue where sometimes you'd need to delete a client twice for it to disappear
* upgrade dependencies
* set 0.19.0
* fix performance degradation caused by circular loading animation
* remove coqui studio support
* fix issue when switching from unsaved creative mode to loading a scene
* third party client / agent support
* edit dialogue examples through character / actor editor
* remove "edit dialogue" action from editor - replaced by character actor instructions
* different icon for delete
* prompt adjustment for acting instructions
* adhoc context generation for character attributes and details
* add adhoc generation for character description
* contextual generation tweaks
* contextual generation for dialogue examples
fix some formatting issues
* contextual generation for world entries
* prepopulate initial recent scenarios with demo scenes
add experimental holodeck scenario
* scene info
scene experimental
* assortment of fixes for holodeck improvements
* more holodeck fixes
* refactor holodeck instructions
* rename holodeck to simulation suite
* better scene status messages
* add new gpt-3.5-turbo model, better json response coercion for older models
* allow exclusion of characters when persisting based on world state
* better error handling of world state response
* better error handling of world state response
* more simulation suite fixes
* progress color
* world state character name mapping support
* if neither quote nor asterisk is in message default to quotes
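Roughly: if the model returns a line with no markup at all, treat it as spoken dialogue; a hypothetical sketch of that rule:

```python
def ensure_markup(message: str) -> str:
    """If the response contains neither quotes nor asterisks,
    assume it is spoken dialogue and wrap it in quotes."""
    if '"' not in message and "*" not in message:
        return f'"{message.strip()}"'
    return message
```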
* fix rerun of new paraphrase op
* sim suite ping that ensures characters are not aware of sim
* fixes for better character name assessment
simulation suite can now give the player character a proper name
* fix bug with new status notifications
* sim suite adjustments and fixes and tuning
* sim suite tweaks
* impl scene restore from file
* prompting tweaks for reinforcement messages and acting instructions
* more tweaks
* dialogue prompt tweaks for rerun + rewrite
* fix bug with character entry / exit with narration
* linting
* simsuite screenshots
* screenshots
* vuetify update
recent saves
* use placeholder instead of prefilling text
* fix scene loading when no cover image is set
* improve summarize and pin response quality
* summarization use previous entries as informative context
* fixes #49: auto save indicator misleading
* regenerate with instructions
* allow resetting of state reinforcement
* creative tools: introduce new character
creative tools: introduce passive character as active character
* character creation adjustments
* no longer needed
* activate, deactivate characters (work in progress)
* worldstate manager show inactive characters
* allow setting of llm prompt template from ux
reorganize llm prompt template directory for easier local overriding
support a more sane way to write llm prompt templates
* determine prompt template from huggingface
* ignore user overrides
* fix issue with removing narrator messages
* summarization agent config for prev entry inclusion
agent config attribute notes
* client code clean up to allow modularity of clients + generic openai compatible api client
* more client cleanup
* remove debug msg, step size for ctx upped to 1024
* wip on stepped history summarization
* summarization prompt fixes
* include time message for history context pushed in scene.context_history
* add / remove characters: toggle narration via ctrl
* fix pydantic namespace warning
fix client emit after reconfig
* set memory ids on character detail entries
* deal with chromadb race condition (maybe)
* activate / deactivate characters from creative editor
switch creative editor to edit characters through world state manager
* set 0.18.0
* relock dependencies
* openai client shortcut to set api key if not set
* set error_action to null
* if scene has just started provide intro for extra context in is_present and is_leaving queries
* nice error if determine template via huggingface doesn't work
* fix issue where regenerate would sometimes pick the wrong npc if there are multiple characters talking
* add new openai models
* default to gpt-4-turbo-preview
* improve windows install script to check for compatible python versions, also work with multi version python installs
* bunch of llm prompt templates
* first gamestate directing impl
* lower similarity threshold when checking for repetition in llm responses
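The repetition check is a fuzzy comparison against recent messages; a sketch using `difflib` with an illustrative threshold (the actual matching method and value in Talemate may differ):

```python
from difflib import SequenceMatcher

def is_repetition(candidate: str, recent: list[str], threshold: float = 0.8) -> bool:
    """Flag a response as repetitive if it is too similar to a recent
    message; lowering the threshold makes the check more aggressive."""
    return any(
        SequenceMatcher(None, candidate.lower(), prior.lower()).ratio() >= threshold
        for prior in recent
    )
```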
* tweaks to narrate after dialog prompt
tweaks to extract character sheet prompt
* set_context cmd
* Xwin MoE
* thematic generator for randomized content stimuli
* add a memory query to extract character sheet
* direct-scene prompt tweaks
* conversation prompt tweaks
* inline character creation from gameplay instruction template
expose thematic generator to prompt templates
* Mixtral
Synthia-MoE
* display prompt and response side by side
* improve ensure_dialogue_format
* prompt tweaks
* prevent double passive narration in one round
improvements to persist character logic
* SlimOrca
OpenBuddy
* prompt tweaks
* runpod status check wrapped in asyncio
* generate_json_list creator agent action
* limit conversation retries to 2
fix issue where REPETITION signal trigger would get sent with the prompt
* smaller agent tweaks
* thematic generator personality list
thematic generator generate from sets of lists
* adjust tests
* mistral prompt adjustment
* director: update content context
* prompt adjustments
* nous-hermes-2-yi
dolphin-2.2-yi
dolphin-2.6-mixtral
* status messages
* determine character goals
generate json lists
* fix error when chromadb add was called before db was ready (wait until the db is fully initialized)
* only strip extra spaces off of prompt
textgenwebui: halve temperature on -yi- models
* prompt tweaks
* more thematic generators
* direct scene without character should just run the scene instructions if they exist
* as_question_answer for query_scene
* context_history revamp
* Aurora-Nights
MixtralOrochi
dolphin-2.7-mixtral
nous-hermes-2-solar
* remove old context_history calls
* mv world_state.py to subdir
FlatDolphinMaid
Goliath
Norobara
Nous-Capybara
* world state manager first progress
* context db manager
* fix issue with some clients not remembering context length settings after talemate restart
* Sensualize-Solar
* improve RAG prompt
* conversation agent use [ as a stopping string since the new reinforcement messages use that
* new method for RAG during conversation
* mixtral_11bx2_moe
* option to reset context db from manager ui
* fix context db cleanup if scene is closed without saving
* didn't mean to commit that
* hide internal meta tags
* keep track of manual context entries in scene save file so it can be rebuilt.
* auto save
auto progress
quick settings hotbar options
* manual mode
actor dialogue tools
refactor toolbar
* narrate directed progress
reorganize narration tools into one cmd module
* 0.17.0
* Mixtral_34Bx2
Sensualize-Mixtral
openchat
* fix save-as action
* fix issue where too little context was joined in via RAG
* context pins implementation
* show active pins in world state component
* pin condition eval and world state agent action config
* Open_Gpt4
* summarization prompt improvements
system prompt for summarization
* guidance prompt for time passage narration
* fix rerun for generic / unhandled messages
* prompt fixes
* summarization methods
* prompt adjustments
* world tools to hot bar
ux tweaks
* bagel-dpo
* context state reinforcements support different insertion methods now (sequential, all context or conversation specific context)
* first progress on world state reinforcement templating
* Kunoichi
* tweaks to update reinforcements prompt
* world state templates progress
* world state templates integration into main ux
* fix issue where openai client wouldn't accept context length override
* don't reconfigure client if no arguments are provided
* pin condition prompt fixes
world state apply template command label set
* world information / lore entries and reinforcement
* show world entry states reinforcers in ux
* gitignore
* dynamic scenario generation progress
* dynamic scenario experiment
* gitignore
* need to emit world state even if we don't run it during scene init
* summarize and pin action
* poetry relock
* template question / attribute cannot be empty
* fix issue with summarize and pin not respecting selected line
* keep reinforcement messages in history, but keep the same one from stacking up
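The dedupe idea, sketched with hypothetical message objects: reinforcement messages stay in the history, but when a new answer arrives for the same question the older copy is dropped so only one instance remains.

```python
def upsert_reinforcement(history: list[dict], new_message: dict) -> list[dict]:
    """Keep at most one reinforcement message per question in the history."""
    kept = [
        m for m in history
        if not (m.get("type") == "reinforcement"
                and m.get("question") == new_message["question"])
    ]
    kept.append(new_message)
    return kept
```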
* narrate query prompt more natural sounding response
* manage pins from world entry editor
* pin_only tag
* ts aware summarize and pin
pin text rendered to context with time label
context reuse session id (fixes an issue where editing a context entry and not saving the scene would remove the entry the next time the scene is loaded)
* UX to add character state from template within the worldstate manager UX
* move divider
* handle agent emit error
fix issue with state reinforcer validation
* layout fixes in world state character panel
physical health template added to example config
* fix pin_only undefined error in world entry editor
* laser-dolphin
Noromaid-v0.4-Mixtral-Instruct
* show state templates for world and players in favorite list
fix applying world state template
* refresh world entry list on state creation
* changing a state from non-sequential to sequential should queue it as due
* quicksettings to bar
* fix error during memory db delete
* status messages during scene load
* removing a sequential state reinforcement should remove the reinforcement messages
* Nous-Hermes-2-Mixtral
* fix sync issue when editing character details through contextdb
* immutable save property
* enable director
* update example config
* enable director when loading a scene file that has instructions
* fix more openai client funkiness with context size and losing model
* iq dyn scenario prompt fixes
* delay client save so that dragging the ctx slider doesn't send off a million requests
default openai ctx to 8k
* input disabled while clients active
* declare event
* narrate query prompt tweaks
* fixes to dialogue cleanup that was causing text after ':' to be cut off
* init git repo if not exist
* pull current branch
* add 12 hours as option
* world-state persist deactivated
* install npm packages
* fix typo
* prompt tweaks
* new screenshots and features updated
* update screenshot
* remove dbg message
* more work to make clients and agents modular
allow conversation and narrator to attempt to auto break AI repetition
* application settings refactor
setup third party api keys through application settings
* runpod docs
* fix wording
* docs
* improvements to auto-break-repetition functionality
* more auto-break-repetition improvements
* some cleanup to narrate on dialogue chance calculations
* changes to api keys via the ux are now reflected in the ux instantly
* memory agent / chromadb agent - wrap blocking functions calls in asyncio
* clean up narrate progression prompt and function
* turn off dedupe debug message for now
* encourage the AI to break repetition as well
* indicate if the current model is missing a LLM prompt template
add prompt template to client modal
fix a bunch of bad vue code
* only show llm prompt when editing client
* OpenHermes-2.5-neural-chat
RpBird-Yi-34B
* fix bug with auto rep break when no repetition was found
* allow giving extra instructions to narrator agent
* emit agents as needed, not constantly
* fix a bunch of vue alerts
* fix request-client-status event
* remove undefined reference
* log client / status emit
* worldstate component track scene time
* Tess
Noromaid
* fix narrate-character prompt context length overflow issues
* disable worldstate refresh button while waiting for response
* history timestamp moved to tooltip off of history button
* fixes #39: using openai embeddings for chromadb tends to error
* adjust conversation agent default instructions
* poetry lock
* remove debug message
* chromadb - agent status error if openai embeddings are selected and api key isn't set
* prep 0.16.0
* send one request for assign all clients
* tweak narrate-after-dialogue prompt
* elevenlabs default to turbo model and make model id configurable
* improve add client dialogue to be more robust
* prompt for default character creation on character card loads
* rename to model as to not conflict with pydantic
* narrate after dialogue strip dialogue generation unless enabled via new option
* starling and capybara-tess
* narrate dialogue context increased
* relabel tts agent to Voice, show agent label in status bar
* don't expect LLM to handle * and " - most of them are not stable / consistent enough with it
* starling template updated
* if allow dialogue in narration is disabled just assume the entire string is a narration
* reorganize the narrate after dialogue template
* fix more issues with time passage calculations
* move punkt download to agent init and silence
* improved RAG during conversation if AI selected is enabled in conversation agent
* prompt tweaks
* deepseek, chronomaid-storytelling
* relock
* narrate-after-dialogue prompt tweaks
* runpod status queries every 15 secs instead of 60
* default player character prompting when loading character card from talemate storage
* better chunking during split tts generation
* tweak narrate progress prompt
* improvements to ensure_dialogue_format and tests
* to pytest
* prep 0.15.0
* update packages
* dialogue cleanup fixes
* fix openai default model name
fix not being able to edit client due to name check
* free form analyst was using wrong system prompt causing gpt-4 to actually generate json responses