* dockerfiles and docker-compose
* containerization fixes
* docker instructions
* readme
* readme
* don't mount src by default, readme
* HF template detection fixes
* auto determine prompt template
* script to start talemate listening only to 127.0.0.1
* prompt tweaks
* auto narrate round every 3 rounds
* tweaks
* Add return to startscreen button
* Only show return to start screen button if scene is active
* improvements to character creation
* dedicated property for scene title separate from the save directory name
* filter out negations into negative keywords
* increase auto narrate delay
* add character portrait keyword
* summarization should ignore most recent message, as it is often regenerated.
* cohere client
* specify python3
* improve viable runpod text gen detection
* fix formatting in template preview
* cohere command-r plus template (not yet verified as correct)
* mistral client set to decensor
* fix issue with parsing json responses
* command-r prompts updated
* use official mistralai python client
* send max_tokens
* new input autocomplete functionality
* prompt tweaks
* llama 3 templates
* add <|eot_id|> to stopping strings
* prompt tweak
* tooltip
* llama-3 identifier
* command-r and command-r plus prompt identifiers
* text-gen-webui client tweaks to make llama3 eos tokens work correctly
* better llama-3 detection
* better llama-3 finalizing of parameters
* streamline client prompt finalizers
* reduce YY model smoothing factor from 0.3 to 0.1 for text-generation-webui client
* relock
* linting
* set 0.23.0
* add new gpt-4 models
* set 0.23.0
* add note about connecting to text-gen-webui from docker
* fix openai image generation no longer working
* default to concept_art