talemate/docker-compose.yml


version: '3.8'
services:
  talemate:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - CUDA_AVAILABLE=${CUDA_AVAILABLE:-false}
    ports:
      - "${FRONTEND_PORT:-8080}:8080"
      - "${BACKEND_PORT:-5050}:5050"
    volumes:
      - ./config.yaml:/app/config.yaml
      - ./scenes:/app/scenes
      - ./templates:/app/templates
      - ./chroma:/app/chroma
    environment:
      - PYTHONUNBUFFERED=1
      - PYTHONPATH=/app/src:$PYTHONPATH
    command: ["/bin/bash", "-c", "source /app/talemate_env/bin/activate && python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050 --frontend-host 0.0.0.0 --frontend-port 8080"]
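The `${VAR:-default}` references in the `ports` and `args` sections use Compose's variable substitution: the default applies when the variable is unset or empty, so `CUDA_AVAILABLE=true FRONTEND_PORT=8081 docker compose up --build` overrides them at startup. A minimal Python sketch of that substitution rule (illustrative only, not Compose's actual implementation):

```python
import re

def substitute(value: str, env: dict) -> str:
    """Expand Compose-style ``${VAR:-default}`` and ``${VAR}`` references."""
    def repl(match):
        name, default = match.group(1), match.group(2)
        val = env.get(name)
        if default is not None:
            # ``:-`` falls back when the variable is unset *or* empty
            return val if val else default
        return val or ""
    return re.sub(r"\$\{(\w+)(?::-([^}]*))?\}", repl, value)

# The port mappings above resolve like this:
print(substitute("${FRONTEND_PORT:-8080}:8080", {}))                       # -> 8080:8080
print(substitute("${BACKEND_PORT:-5050}:5050", {"BACKEND_PORT": "6000"}))  # -> 6000:5050
```

This only models the `:-` operator used in this file; Compose supports further forms (e.g. `:?error`) that are omitted here.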