Mirror of https://github.com/vegu-ai/talemate.git (synced 2025-12-24 15:39:34 +01:00)

Compare commits (26 commits)

| Author | SHA1 | Date |
|---|---|---|
| | c36fd3a9b0 | |
| | 5874d6f05c | |
| | 4c15ca5290 | |
| | 595b04b8dd | |
| | c7e614c01a | |
| | 626da5c551 | |
| | e5de5dad4d | |
| | ce2517dd03 | |
| | 4b26d5e410 | |
| | 73240b5791 | |
| | 44a91094e6 | |
| | 7f11b4859e | |
| | c3d932a020 | |
| | c0173523f5 | |
| | 23f26b75da | |
| | d396e9b1f5 | |
| | ee0efb86a0 | |
| | 596c4c7740 | |
| | 1ce74ada42 | |
| | 9fb78fc684 | |
| | 9ce82d110c | |
| | c73097d668 | |
| | b296956e95 | |
| | 9c790b7529 | |
| | 5bc8e05d60 | |
| | b2d7adc40e | |
.gitignore (vendored, 3 changes)

@@ -1,10 +1,7 @@
.lmer
*.pyc
problems
*.swp
*.swo
*.egg-info
tales/
*-internal*
*.internal*
*_internal*
README.md (82 changes)

@@ -1,10 +1,10 @@
# Talemate

Talemate is an experimental application that allows you to roleplay scenarios with large language models. I've worked on this on and off since early 2023 as a private project, but decided I might as well put in the extra effort and open source it.
Allows you to play roleplay scenarios with large language models.

It does not run LLMs itself but relies on existing APIs. Currently supports text-generation-webui and openai.
It does not run any large language models itself but relies on existing APIs. Currently supports **text-generation-webui** and **openai**.

This means you need to either have an openai api key or know how to set up [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (locally or remotely via gpu renting.)
This means you need to either have an openai api key or know how to set up [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (locally or remotely via gpu renting; the `--api` flag needs to be set).




@@ -12,21 +12,23 @@
## Current features

- responsive, modern UI
- multi-client (agents can be connected to separate LLMs)
- agents
  - conversation
  - narration
  - summarization
  - director
  - creative
- long term memory
- multi-client (agents can be connected to separate APIs)
- long term memory (experimental)
  - chromadb integration
- passage of time
- narrative world state
- narrative tools
- creative mode
- creative tools
- AI backed character creation with template support (jinja2)
- AI backed scenario creation
- runpod integration
- overridable templates foe all LLM prompts. (jinja2)
- overridable templates for all prompts. (jinja2)

## Planned features
@@ -34,20 +36,27 @@ Kinda making it up as I go along, but I want to lean more into gameplay through

In no particular order:

- Automatic1111 client
- Gameplay loop governed by AI
- Extension support
  - modular agents and clients
- Improved world state
- Dynamic player choice generation
- Better creative tools
  - node based scenario / character creation
- Improved long term memory (the base is there, but it's very rough at the moment)
- Improved and consistent long term memory
- Improved director agent
  - Right now this doesn't really work well on anything but GPT-4 (and even there it's debatable). It tends to steer the story in a way that introduces pacing issues. It needs a model that is creative but also reasons really well, I think.
- Gameplay loop governed by AI
  - objectives
  - quests
  - win / lose conditions
- Automatic1111 client
# Quickstart

## Installation

Post [here](https://github.com/final-wombat/talemate/issues/17) if you run into problems during installation.

### Windows

1. Download and install Python 3.10 or higher from the [official Python website](https://www.python.org/downloads/windows/).

@@ -64,7 +73,7 @@
1. `git clone git@github.com:final-wombat/talemate`
1. `cd talemate`
1. `source install.sh`
1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5001`.
1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050`.
1. Open a new terminal, navigate to the `talemate_frontend` directory, and start the frontend server by running `npm run serve`.
## Configuration

@@ -100,20 +109,45 @@ Note: this is my personal opinion while using talemate. If you find a model that
Will be updated as I test more models and over time.
| Model Name | Status | Type | Notes |
|---|---|---|---|
| [GPT-4](https://platform.openai.com/) | GOOD | Remote | Costs money and is heavily censored. While talemate will send a general "decensor" system prompt, depending on the type of content you want to roleplay, there is a chance your key will be banned. **If you do use this make sure to monitor your api usage, talemate tends to send a lot more requests than other roleplaying applications.** |
| [GPT-3.5-turbo](https://platform.openai.com/) | AVOID | Remote | Costs money and is heavily censored. While talemate will send a general "decensor" system prompt, depending on the type of content you want to roleplay, there is a chance your key will be banned. Can roleplay, but not great at consistently generating the JSON responses needed for various parts of talemate (world-state etc.) |
| [Nous Hermes LLama2](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GPTQ) | RECOMMENDED | 13B model | My go-to model for 13B parameters. It's good at roleplay and also smart enough to handle the world state and narrative tools. A 13B model loaded via exllama also allows you to run chromadb with the xl instructor embeddings off of a single 4090. |
| [MythoMax](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) | RECOMMENDED | 13B model | Similar quality to Hermes LLama2, but a bit more creative. Rarely fails on JSON responses. |
| [Synthia v1.2 34B](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GPTQ) | RECOMMENDED | 34B model | Cannot be run at full context together with chromadb instructor models on a single 4090. But a great choice if you're running chromadb with the default embeddings (or on cpu). |
| [Genz](https://huggingface.co/TheBloke/Genz-70b-GPTQ) | GOOD | 70B model | Great choice if you have the hardware to run it (or can rent it). |
| [Synthia v1.2 70B](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ) | GOOD | 70B model | Great choice if you have the hardware to run it (or can rent it). |

| Model Name | Type | Notes |
|---|---|---|
| [Nous Hermes LLama2](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GPTQ) | 13B model | My go-to model for 13B parameters. It's good at roleplay and also smart enough to handle the world state and narrative tools. A 13B model loaded via exllama also allows you to run chromadb with the xl instructor embeddings off of a single 4090. |
| [Xwin-LM-13B](https://huggingface.co/TheBloke/Xwin-LM-13B-V0.1-GPTQ) | 13B model | Really strong model; roleplaying capability still needs more testing |
| [MythoMax](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) | 13B model | Similar quality to Hermes LLama2, but a bit more creative. Rarely fails on JSON responses. |
| [Synthia v1.2 34B](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GPTQ) | 34B model | Cannot be run at full context together with chromadb instructor models on a single 4090. But a great choice if you're running chromadb with the default embeddings (or on cpu). |
| [Xwin-LM-70B](https://huggingface.co/TheBloke/Xwin-LM-70B-V0.1-GPTQ) | 70B model | Great choice if you have the hardware to run it (or can rent it). |
| [Synthia v1.2 70B](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ) | 70B model | Great choice if you have the hardware to run it (or can rent it). |
| [GPT-4](https://platform.openai.com/) | Remote | Still the best for consistency and reasoning, but is heavily censored. Talemate will send a general "decensor" system prompt, ymmv. **If you do use this make sure to monitor your api usage, talemate tends to send a lot more requests than other roleplaying applications.** |
| [GPT-3.5-turbo](https://platform.openai.com/) | Remote | It's really inconsistent with JSON responses, plus it's probably still just as heavily censored as GPT-4. If you want to run it, I'd suggest using it for the conversation agent and GPT-4 for the other agents. **If you do use this make sure to monitor your api usage, talemate tends to send a lot more requests than other roleplaying applications.** |

I have not tested with Llama 1 mnodels in a while, Lazarus was really good at roleplay, but started failing on JSON requirements.
I have not tested with Llama 1 models in a while. Lazarus was really good at roleplay, but started failing on JSON requirements.

I have not tested with anything below 13B parameters.
## Connecting to an LLM

On the right hand side click the "Add Client" button. If there is no button, you may need to toggle the client options by clicking this button:



### Text-generation-webui

In the modal, if you're planning to connect to text-generation-webui, you can likely leave everything as is and just click Save.



### OpenAI

If you want to add an OpenAI client, just change the client type and select the appropriate model.



### Ready to go

You will know you are good to go when the client and all the agents have a green dot next to them.



## Load the introductory scenario "Infinity Quest"

Generated using talemate creative tools, mostly used for testing / demoing.

@@ -130,9 +164,13 @@ Expand the "Load" menu in the top left corner and either click on "Upload a char



Once a character is uploaded, talemate may take a moment because it needs to convert it to the talemate format and will also run additional LLM prompts to generate character attributes and world state.

Make sure you save the scene after the character is loaded, as it can then be loaded as a normal talemate scenario in the future.

## Further documentation

- Creative mode (docs WIP)
- Prompt template overrides
- [ChromaDB (long term memory)](docs/chromadb.md)
- Runpod Integration
@@ -1,31 +1,61 @@
## ChromaDB
# ChromaDB

Talemate uses ChromaDB to maintain long-term memory. The default embeddings used are really fast but also not incredibly accurate. If you want to use more accurate embeddings you can use the instructor embeddings or the openai embeddings. See below for instructions on how to enable these.

In my testing so far, instructor-xl has proved to be the most accurate (even more so than openai).

## Local instructor embeddings

If you want chromaDB to use the more accurate (but much slower) instructor embeddings, add the following to `config.yaml`:

**Note**: The `xl` model takes a while to load even with cuda. Expect a minute of loading time on the first scene you load.

```yaml
chromadb:
  embeddings: instructor
  instructor_device: cpu
  instructor_model: hkunlp/instructor-xl"
  instructor_model: hkunlp/instructor-xl
```

### Instructor embedding models

- `hkunlp/instructor-base` (smallest / fastest)
- `hkunlp/instructor-large`
- `hkunlp/instructor-xl` (largest / slowest) - requires about 5GB of memory

You will need to restart the backend for this change to take effect.

**NOTE** - The first time you do this it will need to download the instructor model you selected. This may take a while, and the talemate backend will be unresponsive during that time.

Once the download is finished, if talemate is still unresponsive, try reloading the front-end to reconnect. If all else fails, restart the backend as well. I'll try to make this more robust in the future.
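For context, this is roughly what loading an instructor model looks like via the `InstructorEmbedding` package. How talemate wires it into chromadb is not shown in this doc, so treat this as an illustrative sketch:

```python
from InstructorEmbedding import INSTRUCTOR

# the first call downloads the model; instructor-xl needs roughly 5GB of memory
model = INSTRUCTOR("hkunlp/instructor-xl")

# instructor models embed (instruction, text) pairs rather than bare strings
embeddings = model.encode(
    [["Represent the story passage for retrieval:", "Elmer and Kaira met during training."]]
)
```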
### GPU support

If you want to use the instructor embeddings with GPU support, you will need to install pytorch with CUDA support.

To do this on windows, run `install-pytorch-cuda.bat` from the project root. Then change your device in the config to `cuda`:
To do this on windows, run `install-pytorch-cuda.bat` from the project directory. Then change your device in the config to `cuda`:

```yaml
chromadb:
  embeddings: instructor
  instructor_device: cuda
  instructor_model: hkunlp/instructor-xl"
  instructor_model: hkunlp/instructor-xl
```
Instructor embedding models:
## OpenAI embeddings

- `hkunlp/instructor-base` (smallest / fastest)
- `hkunlp/instructor-large`
- `hkunlp/instructor-xl` (largest / slowest) - requires about 5GB of GPU memory
First make sure your openai key is specified in the `config.yaml` file:

```yaml
openai:
  api_key: <your-key-here>
```

Then add the following to `config.yaml` for chromadb:

```yaml
chromadb:
  embeddings: openai
```

**Note**: As with everything openai, using this isn't free. It's way cheaper than their text completion though. Also: if you send super explicit content they may flag / ban your key, so keep that in mind (I hear they usually send warnings first though), and always monitor your usage on their dashboard.
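For reference, a rough sketch of how the openai option maps onto chromadb's public API. The embedding function is part of chromadb itself; the way talemate passes the key through is an assumption based on the config above:

```python
import chromadb
from chromadb.utils import embedding_functions

# the api_key presumably comes from the openai.api_key entry in config.yaml
openai_ef = embedding_functions.OpenAIEmbeddingFunction(api_key="<your-key-here>")

client = chromadb.Client()
collection = client.create_collection("memory", embedding_function=openai_ef)
```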
BIN docs/img/add-client-modal-openai.png (new file; after: 9.8 KiB)
BIN docs/img/add-client-modal.png (new file; after: 16 KiB)
BIN docs/img/client-options-toggle.png (new file; after: 1.3 KiB)
BIN docs/img/client-setup-complete.png (new file; after: 23 KiB)
@@ -14,7 +14,7 @@

1. With the virtual environment activated and dependencies installed, you can start the backend server.
2. Navigate to the `src/talemate/server` directory.
3. Run the server with `python run.py runserver --host 0.0.0.0 --port 5001`.
3. Run the server with `python run.py runserver --host 0.0.0.0 --port 5050`.

### Running the Frontend

@@ -22,4 +22,4 @@
2. If you haven't already, install npm dependencies by running `npm install`.
3. Start the server with `npm run serve`.

Please note that you may need to set environment variables or modify the host and port as per your setup. You can refer to the `runserver.sh` and `frontend.sh` files for more details.
poetry.lock (generated, 909 changes): file diff suppressed because it is too large.
@@ -4,10 +4,10 @@ build-backend = "poetry.masonry.api"

[tool.poetry]
name = "talemate"
version = "0.1.0"
description = "AI companionship and roleplay."
authors = ["FinalWombat <finalwombat@gmail.com>"]
license = "MIT"
version = "0.10.1"
description = "AI-backed roleplay and narrative tools"
authors = ["FinalWombat"]
license = "GNU Affero General Public License v3.0"

[tool.poetry.dependencies]
python = ">=3.10,<4.0"

@@ -35,6 +35,7 @@ websockets = "^11.0.3"
structlog = "^23.1.0"
runpod = "==1.2.0"
nest_asyncio = "^1.5.7"
isodate = ">=0.6.1"

# ChromaDB
chromadb = ">=0.4,<1"

@@ -74,4 +75,4 @@ include_trailing_comma = true
force_grid_wrap = 0
use_parentheses = true
ensure_newline_before_comments = true
line_length = 88
line_length = 88
@@ -4,7 +4,29 @@
    "name": "Infinity Quest",
    "history": [],
    "environment": "scene",
    "archived_history": [],
    "ts": "P1Y",
    "archived_history": [
        {
            "text": "Captain Elmer and Kaira first met during their rigorous training for the Infinity Quest mission. Their initial interactions were marked by a sense of mutual respect and curiosity.",
            "ts": "PT1S"
        },
        {
            "text": "Over the course of several months, as they trained together, Elmer and Kaira developed a strong bond. They often spent their free time discussing their dreams of exploring the cosmos.",
            "ts": "P3M"
        },
        {
            "text": "During a simulated mission, the Starlight Nomad encountered a sudden system malfunction. Elmer and Kaira worked tirelessly together to resolve the issue and avert a potential disaster. This incident strengthened their trust in each other's abilities.",
            "ts": "P6M"
        },
        {
            "text": "As they ventured further into uncharted space, the crew faced a perilous encounter with a hostile alien species. Elmer and Kaira's coordinated efforts were instrumental in negotiating a peaceful resolution and avoiding conflict.",
            "ts": "P8M"
        },
        {
            "text": "One memorable evening, while gazing at the stars through the ship's observation deck, Elmer and Kaira shared personal stories from their past. This intimate conversation deepened their connection and understanding of each other.",
            "ts": "P11M"
        }
    ],
    "character_states": {},
    "characters": [
        {
@@ -2,4 +2,4 @@ from .agents import Agent
from .client import TextGeneratorWebuiClient
from .tale_mate import *

VERSION = "0.8.0"
VERSION = "0.10.1"
@@ -12,6 +12,32 @@ import talemate.util as util
from talemate.emit import emit


__all__ = [
    "Agent",
    "set_processing",
]

def set_processing(fn):
    """
    Decorator that emits the agent status as processing while the function
    is running.

    Done via a try / finally block to ensure the status is reset even if
    the function fails.
    """

    async def wrapper(self, *args, **kwargs):
        try:
            await self.emit_status(processing=True)
            return await fn(self, *args, **kwargs)
        finally:
            await self.emit_status(processing=False)

    wrapper.__name__ = fn.__name__

    return wrapper
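A short usage sketch of the decorator (the method itself is hypothetical; any awaitable agent method can be wrapped the same way):

    class ExampleAgent(Agent):
        @set_processing
        async def think(self, text: str) -> str:
            # status was emitted as busy on entry; the finally block
            # guarantees it is reset even if send_prompt raises
            return await self.client.send_prompt(text, kind="analyze")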
class Agent(ABC):
    """
    Base agent class, defines a role

@@ -19,6 +45,8 @@ class Agent(ABC):

    agent_type = "agent"
    verbose_name = None

    set_processing = set_processing

    @property
    def agent_details(self):

@@ -51,16 +79,29 @@ class Agent(ABC):
    @property
    def status(self):
        if self.ready:
            return "idle"
            return "idle" if getattr(self, "processing", 0) == 0 else "busy"
        else:
            return "uninitialized"

    async def emit_status(self, processing: bool = None):
        if processing is not None:
            self.processing = processing

        status = "busy" if getattr(self, "processing", False) else self.status

        # keep a count of processing requests: when the count is 0 the
        # status is "idle", when it is greater than 0 the status is "busy";
        # increase / decrease based on the value of `processing`

        if getattr(self, "processing", None) is None:
            self.processing = 0

        if not processing:
            self.processing -= 1
            self.processing = max(0, self.processing)
        else:
            self.processing += 1

        status = "busy" if self.processing > 0 else "idle"

        emit(
            "agent_status",
            message=self.verbose_name or "",
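Why a counter rather than a boolean: several decorated coroutines can be in flight at once, and a boolean would flip the agent back to idle as soon as the first one returned. A sketch, reusing the hypothetical method from above:

    import asyncio

    async def demo(agent):
        # counter goes 0 -> 1 -> 2 -> 1 -> 0; "idle" is only emitted once both finish
        await asyncio.gather(agent.think("first prompt"), agent.think("second prompt"))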
@@ -11,7 +11,7 @@ from talemate.emit import emit
from talemate.scene_message import CharacterMessage, DirectorMessage
from talemate.prompts import Prompt

from .base import Agent
from .base import Agent, set_processing
from .registry import register

if TYPE_CHECKING:

@@ -29,6 +29,8 @@ class ConversationAgent(Agent):

    agent_type = "conversation"
    verbose_name = "Conversation"

    min_dialogue_length = 75

    def __init__(
        self,
@@ -156,29 +158,15 @@ class ConversationAgent(Agent):

    def clean_result(self, result, character):

        log.debug("clean result", result=result)

        if "#" in result:
            result = result.split("#")[0]

        result = result.replace("\n", " ").strip()

        # Check for the occurrence of a character name followed by a colon
        # that does NOT match the name of the current character
        if "." in result and re.search(rf"(?!{self.character.name})\w+:", result):
            result = re.sub(rf"(?!{character.name})\w+:(.*\n*)*", "", result)

        # Removes partial sentence at the end
        result = re.sub(r"[^\.\?\!\*]+(\n|$)", "", result)

        result = result.replace(" :", ":")

        result = result.strip().strip('"').strip()

        result = result.replace("[", "*").replace("]", "*")
        result = result.replace("(", "*").replace(")", "*")
        result = result.replace("**", "*")

        # if there is an uneven number of '*' add one to the end

@@ -188,13 +176,12 @@ class ConversationAgent(Agent):

        return result
    @set_processing
    async def converse(self, actor, editor=None):
        """
        Have a conversation with the AI
        """

        await self.emit_status(processing=True)

        history = actor.history
        self.current_memory_context = None

@@ -212,7 +199,7 @@ class ConversationAgent(Agent):
        empty_result_count = 0

        # Validate AI response
        while loop_count < max_loops:
        while loop_count < max_loops and len(total_result) < self.min_dialogue_length:
            log.debug("conversation agent", result=result)
            result = await self.client.send_prompt(
                await self.build_prompt(character, char_message=total_result)

@@ -227,7 +214,7 @@ class ConversationAgent(Agent):

            loop_count += 1

            if len(total_result) >= 250:
            if len(total_result) > self.min_dialogue_length:
                break

            # if result is empty, increment empty_result_count

@@ -240,13 +227,13 @@ class ConversationAgent(Agent):

        result = result.replace(" :", ":")

        # Removes any line starting with another character's name followed by a colon
        total_result = re.sub(rf"(?!{character.name})\w+:(.*\n*)*", "", total_result)

        total_result = total_result.split("#")[0]

        # Removes partial sentence at the end
        total_result = re.sub(r"[^\.\?\!\*]+(\n|$)", "", total_result)
        total_result = util.strip_partial_sentences(total_result)

        # Remove "{character.name}:" - all occurrences
        total_result = total_result.replace(f"{character.name}:", "")

        if total_result.count("*") % 2 == 1:
            total_result += "*"

@@ -277,6 +264,4 @@ class ConversationAgent(Agent):
        # Add message and response to conversation history
        actor.scene.push_history(messages)

        await self.emit_status(processing=False)

        return messages
@@ -14,6 +14,7 @@ from talemate.automated_action import AutomatedAction
import talemate.automated_action as automated_action
from .conversation import ConversationAgent
from .registry import register
from .base import set_processing

if TYPE_CHECKING:
    from talemate import Actor, Character, Player, Scene
@@ -68,37 +69,34 @@ class DirectorAgent(ConversationAgent):
        log.info("question_direction", response=response)
        return response, evaluation, prompt

    @set_processing
    async def direct(self, character: Character, goal_override: str = None):

        await self.emit_status(processing=True)
        analysis, current_goal, action = await self.decide_action(character, goal_override=goal_override)

        try:
            if action == "watch":
                return None
        if action == "watch":
            return None

            if action == "direct":
                return await self.direct_character_with_self_reflection(character, analysis, goal_override=current_goal)

            if action.startswith("narrate"):

        if action == "direct":
            return await self.direct_character_with_self_reflection(character, analysis, goal_override=current_goal)
                narration_type = action.split(":")[1]

        if action.startswith("narrate"):

            narration_type = action.split(":")[1]

                direct_narrative = await self.direct_narrative(analysis, narration_type=narration_type, goal=current_goal)
                if direct_narrative:
                    narrator = self.scene.get_helper("narrator").agent
                    narrator_response = await narrator.progress_story(direct_narrative)
                    if not narrator_response:
                        return None
                    narrator_message = NarratorMessage(narrator_response, source="progress_story")
                    self.scene.push_history(narrator_message)
                    emit("narrator", narrator_message)
                    return True
        finally:
            await self.emit_status(processing=False)
            direct_narrative = await self.direct_narrative(analysis, narration_type=narration_type, goal=current_goal)
            if direct_narrative:
                narrator = self.scene.get_helper("narrator").agent
                narrator_response = await narrator.progress_story(direct_narrative)
                if not narrator_response:
                    return None
                narrator_message = NarratorMessage(narrator_response, source="progress_story")
                self.scene.push_history(narrator_message)
                emit("narrator", narrator_message)
                return True
    @set_processing
    async def direct_narrative(self, analysis: str, narration_type: str = "progress", goal: str = None):

        if goal is None:

@@ -120,6 +118,7 @@ class DirectorAgent(ConversationAgent):

        return response

    @set_processing
    async def direct_character_with_self_reflection(self, character: Character, analysis: str, goal_override: str = None):

        max_retries = 3

@@ -162,6 +161,7 @@ class DirectorAgent(ConversationAgent):

        return response

    @set_processing
    async def transform_character_direction_to_inner_monologue(self, character: Character, direction: str):

        inner_monologue = await Prompt.request(

@@ -179,6 +179,7 @@ class DirectorAgent(ConversationAgent):
        return inner_monologue

    @set_processing
    async def direct_character(
        self,
        character: Character,

@@ -229,6 +230,7 @@ class DirectorAgent(ConversationAgent):

    @set_processing
    async def direct_character_self_reflect(self, direction: str, character: Character, goal: str, direction_prompt: Prompt) -> (bool, str):

        change_matches = ["change", "retry", "alter", "reconsider"]

@@ -253,6 +255,7 @@ class DirectorAgent(ConversationAgent):
        return keep, response

    @set_processing
    async def direct_character_analyze(self, direction: str, character: Character, goal: str, direction_prompt: Prompt):

        prompt = Prompt.get("director.direct-character-analyze", vars={

@@ -317,6 +320,7 @@ class DirectorAgent(ConversationAgent):
        else:
            return ""

    @set_processing
    async def goal_analyze(self, goal: str):

        prompt = Prompt.get("director.goal-analyze", vars={
@@ -50,14 +50,13 @@ class MemoryAgent(Agent):
    def close_db(self):
        raise NotImplementedError()

    async def add(self, text, character=None, uid=None):
    async def add(self, text, character=None, uid=None, ts: str = None, **kwargs):
        if not text:
            return

        log.debug("memory add", text=text, character=character, uid=uid)
        await self._add(text, character=character, uid=uid)
        await self._add(text, character=character, uid=uid, ts=ts, **kwargs)

    async def _add(self, text, character=None):
    async def _add(self, text, character=None, ts: str = None, **kwargs):
        raise NotImplementedError()

    async def add_many(self, objects: list[dict]):

@@ -79,7 +78,7 @@ class MemoryAgent(Agent):
        return self.db.get(id)

    def on_archive_add(self, event: events.ArchiveEvent):
        asyncio.ensure_future(self.add(event.text, uid=event.memory_id))
        asyncio.ensure_future(self.add(event.text, uid=event.memory_id, ts=event.ts, typ="history"))

    def on_character_state(self, event: events.CharacterStateEvent):
        asyncio.ensure_future(
@@ -256,7 +255,7 @@ class ChromaDBMemoryAgent(MemoryAgent):

        self.db_client = chromadb.Client(Settings(anonymized_telemetry=False))

        openai_key = self.config.get("openai").get("api_key") or os.environ.get("OPENAI_API_KEY"),
        openai_key = self.config.get("openai").get("api_key") or os.environ.get("OPENAI_API_KEY")
        # the stray trailing comma on the removed line made openai_key a
        # one-element tuple, which is always truthy

        if openai_key and self.USE_OPENAI:
            log.info(
@@ -300,25 +299,35 @@ class ChromaDBMemoryAgent(MemoryAgent):
        except ValueError:
            pass

    async def _add(self, text, character=None, uid=None):
    async def _add(self, text, character=None, uid=None, ts: str = None, **kwargs):
        metadatas = []
        ids = []

        await self.emit_status(processing=True)

        if character:
            metadatas.append({"character": character.name, "source": "talemate"})
            meta = {"character": character.name, "source": "talemate"}
            if ts:
                meta["ts"] = ts
            meta.update(kwargs)
            metadatas.append(meta)
            self.memory_tracker.setdefault(character.name, 0)
            self.memory_tracker[character.name] += 1
            id = uid or f"{character.name}-{self.memory_tracker[character.name]}"
            ids = [id]
        else:
            metadatas.append({"character": "__narrator__", "source": "talemate"})
            meta = {"character": "__narrator__", "source": "talemate"}
            if ts:
                meta["ts"] = ts
            meta.update(kwargs)
            metadatas.append(meta)
            self.memory_tracker.setdefault("__narrator__", 0)
            self.memory_tracker["__narrator__"] += 1
            id = uid or f"__narrator__-{self.memory_tracker['__narrator__']}"
            ids = [id]

        log.debug("chromadb agent add", text=text, meta=meta, id=id)

        self.db.upsert(documents=[text], metadatas=metadatas, ids=ids)

        await self.emit_status(processing=False)
@@ -341,7 +350,6 @@ class ChromaDBMemoryAgent(MemoryAgent):
            metadatas.append(meta)
            uid = obj.get("id", f"{character}-{self.memory_tracker[character]}")
            ids.append(uid)

        self.db.upsert(documents=documents, metadatas=metadatas, ids=ids)

        await self.emit_status(processing=False)
@@ -371,14 +379,27 @@ class ChromaDBMemoryAgent(MemoryAgent):
        # log.debug("chromadb agent get", text=text, where=where)

        _results = self.db.query(query_texts=[text], where=where)

        results = []

        for i in range(len(_results["distances"][0])):
            await asyncio.sleep(0.001)
            distance = _results["distances"][0][i]

            doc = _results["documents"][0][i]
            meta = _results["metadatas"][0][i]
            ts = meta.get("ts")

            if distance < 1:
                results.append(_results["documents"][0][i])

                try:
                    date_prefix = util.iso8601_diff_to_human(ts, self.scene.ts)
                except Exception:
                    log.error("chromadb agent", error="failed to get date prefix", ts=ts, scene_ts=self.scene.ts)
                    date_prefix = None

                if date_prefix:
                    doc = f"{date_prefix}: {doc}"
                results.append(doc)
            else:
                break
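For orientation, chromadb's query result is a dict of per-query lists, which is why everything above indexes through `[0]` (a sketch against a collection named `db`, as in the code):

    # query_texts is a list, so each result field is a list of lists;
    # matches come back ordered by ascending distance, hence the early `break`
    _results = db.query(query_texts=["Kaira"], where={"character": "Kaira"})
    _results["documents"][0]   # matched documents for the first query text
    _results["distances"][0]   # ascending distances; talemate keeps those < 1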
@@ -7,6 +7,7 @@ from typing import TYPE_CHECKING, Callable, List, Optional, Union
import talemate.util as util
from talemate.emit import wait_for_input
from talemate.prompts import Prompt
from talemate.agents.base import set_processing

from .conversation import ConversationAgent
from .registry import register
@@ -23,10 +24,6 @@ class NarratorAgent(ConversationAgent):
        if "#" in result:
            result = result.split("#")[0]

        # Removes partial sentence at the end
        # result = re.sub(r"[^\.\?\!]+(\n|$)", "", result)

        cleaned = []
        for line in result.split("\n"):
            if ":" in line.strip():

@@ -35,14 +32,12 @@ class NarratorAgent(ConversationAgent):

        return "\n".join(cleaned)

    @set_processing
    async def narrate_scene(self):
        """
        Narrate the scene
        """

        await self.emit_status(processing=True)

        response = await Prompt.request(
            "narrator.narrate-scene",
            self.client,

@@ -55,17 +50,14 @@ class NarratorAgent(ConversationAgent):

        response = f"*{response.strip('*')}*"

        await self.emit_status(processing=False)

        return response

    @set_processing
    async def progress_story(self, narrative_direction: str = None):
        """
        Progress the story
        """

        await self.emit_status(processing=True)

        scene = self.scene
        director = scene.get_helper("director").agent
        pc = scene.get_player_character()

@@ -113,17 +105,13 @@ class NarratorAgent(ConversationAgent):
        response = response.replace("*", "")
        response = f"*{response}*"

        await self.emit_status(processing=False)

        return response

    @set_processing
    async def narrate_query(self, query: str, at_the_end: bool = False, as_narrative: bool = True):
        """
        Narrate a specific query
        """

        await self.emit_status(processing=True)

        response = await Prompt.request(
            "narrator.narrate-query",
            self.client,

@@ -141,15 +129,14 @@ class NarratorAgent(ConversationAgent):
        if as_narrative:
            response = f"*{response}*"

        await self.emit_status(processing=False)
        return response

    @set_processing
    async def narrate_character(self, character):
        """
        Narrate a specific character
        """

        await self.emit_status(processing=True)
        budget = self.client.max_token_length - 300

        memory_budget = min(int(budget * 0.05), 200)

@@ -176,11 +163,9 @@ class NarratorAgent(ConversationAgent):
        response = self.clean_result(response.strip())
        response = f"*{response}*"

        await self.emit_status(processing=False)
        return response

    @set_processing
    async def augment_context(self):

        """
@@ -1,19 +1,21 @@
from __future__ import annotations

import asyncio
import traceback
from typing import TYPE_CHECKING, Callable, List, Optional, Union

import talemate.data_objects as data_objects
import talemate.util as util
from talemate.prompts import Prompt
from talemate.scene_message import DirectorMessage
from talemate.scene_message import DirectorMessage, TimePassageMessage

from .base import Agent
from .base import Agent, set_processing
from .registry import register

import structlog

import time
import re

log = structlog.get_logger("talemate.agents.summarize")
@@ -40,6 +42,17 @@ class SummarizeAgent(Agent):
        super().connect(scene)
        scene.signals["history_add"].connect(self.on_history_add)

    def clean_result(self, result):
        if "#" in result:
            result = result.split("#")[0]

        # Removes partial sentence at the end
        result = re.sub(r"[^\.\?\!]+(\n|$)", "", result)
        result = result.strip()

        return result

    @set_processing
    async def build_archive(self, scene):
        end = None
@@ -48,16 +61,36 @@ class SummarizeAgent(Agent):
            recent_entry = None
        else:
            recent_entry = scene.archived_history[-1]
            start = recent_entry["end"] + 1
            start = recent_entry.get("end", 0) + 1

        token_threshold = 1300
        token_threshold = 1500
        tokens = 0
        dialogue_entries = []
        ts = "PT0S"
        time_passage_termination = False

        if recent_entry:
            ts = recent_entry.get("ts", ts)

        for i in range(start, len(scene.history)):
            dialogue = scene.history[i]
            if isinstance(dialogue, DirectorMessage):
                if i == start:
                    start += 1
                continue

            if isinstance(dialogue, TimePassageMessage):
                log.debug("build_archive", time_passage_message=dialogue)
                if i == start:
                    ts = util.iso8601_add(ts, dialogue.ts)
                    log.debug("build_archive", time_passage_message=dialogue, start=start, i=i, ts=ts)
                    start += 1
                    continue
                log.debug("build_archive", time_passage_message_termination=dialogue)
                time_passage_termination = True
                end = i - 1
                break

            tokens += util.count_tokens(dialogue)
            dialogue_entries.append(dialogue)
            if tokens > token_threshold:
@@ -67,53 +100,66 @@ class SummarizeAgent(Agent):
        if end is None:
            # nothing to archive yet
            return
        await self.emit_status(processing=True)

        log.debug("build_archive", start=start, end=end, ts=ts, time_passage_termination=time_passage_termination)

        extra_context = None
        if recent_entry:
            extra_context = recent_entry["text"]

        # in order to summarize coherently, we need to determine if there is a favorable
        # cutoff point (e.g., the scene naturally ends or shifts meaningfully in the middle
        # of the dialogue)
        #
        # One way to do this is to check if the last line is a TimePassageMessage, which
        # indicates a scene change or a significant pause.
        #
        # If not, we can ask the AI to find a good point of
        # termination.

        if not time_passage_termination:
            # No TimePassageMessage, so we need to ask the AI to find a good point of termination
            terminating_line = await self.analyze_dialoge(dialogue_entries)

        terminating_line = await self.analyze_dialoge(dialogue_entries)
        if terminating_line:
            adjusted_dialogue = []
            for line in dialogue_entries:
                if str(line) in terminating_line:
                    break
                adjusted_dialogue.append(line)
            dialogue_entries = adjusted_dialogue
            end = start + len(dialogue_entries)

        if dialogue_entries:
            summarized = await self.summarize(
                "\n".join(map(str, dialogue_entries)), extra_context=extra_context
            )

        else:
            # AI has likely identified the first line as a scene change, so we can't summarize;
            # just use the first line
            summarized = str(scene.history[start])

        log.debug("summarize agent build archive", terminating_line=terminating_line)
        # determine the appropriate timestamp for the summarization

        if terminating_line:
            adjusted_dialogue = []
            for line in dialogue_entries:
                if str(line) in terminating_line:
                    break
                adjusted_dialogue.append(line)
            dialogue_entries = adjusted_dialogue
            end = start + len(dialogue_entries)

        summarized = await self.summarize(
            "\n".join(map(str, dialogue_entries)), extra_context=extra_context
        )

        scene.push_archive(data_objects.ArchiveEntry(summarized, start, end))
        await self.emit_status(processing=False)
        scene.push_archive(data_objects.ArchiveEntry(summarized, start, end, ts=ts))

        return True
|
||||
async def analyze_dialoge(self, dialogue):
|
||||
instruction = "Examine the dialogue from the beginning and find the first line that marks a scene change. Repeat the line back to me exactly as it is written"
|
||||
await self.emit_status(processing=True)
|
||||
|
||||
prepare_response = "The first line that marks a scene change is: "
|
||||
|
||||
prompt = dialogue + ["", instruction, f"<|BOT|>{prepare_response}"]
|
||||
|
||||
response = await self.client.send_prompt("\n".join(map(str, prompt)), kind="summarize")
|
||||
|
||||
if prepare_response in response:
|
||||
response = response.replace(prepare_response, "")
|
||||
|
||||
response = await Prompt.request("summarizer.analyze-dialogue", self.client, "analyze_freeform", vars={
|
||||
"dialogue": "\n".join(map(str, dialogue)),
|
||||
"scene": self.scene,
|
||||
"max_tokens": self.client.max_token_length,
|
||||
})
|
||||
|
||||
response = self.clean_result(response)
|
||||
|
||||
await self.emit_status(processing=False)
|
||||
|
||||
return response
|
||||
|
||||
|
||||
    @set_processing
    async def summarize(
        self,
        text: str,

@@ -125,24 +171,20 @@ class SummarizeAgent(Agent):
        Summarize the given text
        """

        await self.emit_status(processing=True)

        response = await Prompt.request("summarizer.summarize-dialogue", self.client, "summarize", vars={
            "dialogue": text,
            "scene": self.scene,
            "max_tokens": self.client.max_token_length,
        })

        self.scene.log.info("summarize", dialogue=text, response=response)

        await self.emit_status(processing=False)
        self.scene.log.info("summarize", dialogue_length=len(text), summarized_length=len(response))

        return self.clean_result(response)

    @set_processing
    async def simple_summary(
        self, text: str, prompt_kind: str = "summarize", instructions: str = "Summarize"
    ):
        await self.emit_status(processing=True)
        prompt = [
            text,
            "",

@@ -153,62 +195,100 @@ class SummarizeAgent(Agent):
        response = await self.client.send_prompt("\n".join(map(str, prompt)), kind=prompt_kind)
        if ":" in response:
            response = response.split(":")[1].strip()
        await self.emit_status(processing=False)
        return response
    @set_processing
    async def request_world_state(self):

        await self.emit_status(processing=True)
        try:

            t1 = time.time()
        t1 = time.time()

            _, world_state = await Prompt.request(
                "summarizer.request-world-state",
                self.client,
                "analyze",
                vars = {
                    "scene": self.scene,
                    "max_tokens": self.client.max_token_length,
                    "object_type": "character",
                    "object_type_plural": "characters",
                }
            )

            self.scene.log.debug("request_world_state", response=world_state, time=time.time() - t1)

            return world_state
        finally:
            await self.emit_status(processing=False)
        _, world_state = await Prompt.request(
            "summarizer.request-world-state",
            self.client,
            "analyze",
            vars = {
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
                "object_type": "character",
                "object_type_plural": "characters",
            }
        )

        self.scene.log.debug("request_world_state", response=world_state, time=time.time() - t1)

        return world_state
    @set_processing
    async def request_world_state_inline(self):
        """
        EXPERIMENTAL. Overall the one-shot request seems about as coherent as the inline request, but the inline request is about twice as slow and would need to run on every dialogue line.
        """

        await self.emit_status(processing=True)
        try:

            t1 = time.time()
        t1 = time.time()

            # first, we need to get the marked items (objects etc.)
        # first, we need to get the marked items (objects etc.)

            marked_items_response = await Prompt.request(
                "summarizer.request-world-state-inline-items",
                self.client,
                "analyze_freeform",
                vars = {
                    "scene": self.scene,
                    "max_tokens": self.client.max_token_length,
                }
            )

            self.scene.log.debug("request_world_state_inline", marked_items=marked_items_response, time=time.time() - t1)

            return marked_items_response
        finally:
            await self.emit_status(processing=False)

        marked_items_response = await Prompt.request(
            "summarizer.request-world-state-inline-items",
            self.client,
            "analyze_freeform",
            vars = {
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
            }
        )

        self.scene.log.debug("request_world_state_inline", marked_items=marked_items_response, time=time.time() - t1)

        return marked_items_response
    @set_processing
    async def analyze_time_passage(
        self,
        text: str,
    ):

        response = await Prompt.request(
            "summarizer.analyze-time-passage",
            self.client,
            "analyze_freeform_short",
            vars = {
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
                "text": text,
            }
        )

        duration = response.split("\n")[0].split(" ")[0].strip()

        if not duration.startswith("P"):
            duration = "P" + duration

        return duration
|
||||
async def analyze_text_and_answer_question(
|
||||
self,
|
||||
text: str,
|
||||
query: str,
|
||||
):
|
||||
|
||||
response = await Prompt.request(
|
||||
"summarizer.analyze-text-and-answer-question",
|
||||
self.client,
|
||||
"analyze_freeform",
|
||||
vars = {
|
||||
"scene": self.scene,
|
||||
"max_tokens": self.client.max_token_length,
|
||||
"text": text,
|
||||
"query": query,
|
||||
}
|
||||
)
|
||||
|
||||
log.debug("analyze_text_and_answer_question", query=query, text=text, response=response)
|
||||
|
||||
return response
|
||||
@@ -4,6 +4,9 @@ Context managers for various client-side operations.

from contextvars import ContextVar
from pydantic import BaseModel, Field
from copy import deepcopy

import structlog

__all__ = [
    'context_data',

@@ -11,11 +14,25 @@ __all__ = [
    'ContextModel',
]

log = structlog.get_logger()

def model_to_dict_without_defaults(model_instance):
    model_dict = model_instance.dict()
    for field_name, field in model_instance.__class__.__fields__.items():
        if field.default == model_dict.get(field_name):
            del model_dict[field_name]
    return model_dict
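A quick note on what the helper does: it serializes a model but drops any field still equal to its declared default, so only explicitly overridden values get layered onto the context. A sketch, using the ContextModel defined just below:

    # nuke_repetition (0.5) is kept because it differs from the default (0.0);
    # fields left at their defaults are removed from the dict
    model_to_dict_without_defaults(ContextModel(nuke_repetition=0.5))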
class ConversationContext(BaseModel):
    talking_character: str = None
    other_characters: list[str] = Field(default_factory=list)

class ContextModel(BaseModel):
    """
    Pydantic model for the context data.
    """
    nuke_repetition: float = Field(0.0, ge=0.0, le=3.0)
    conversation: ConversationContext = Field(default_factory=ConversationContext)

# Define the context variable, defaulting to the model's default values
context_data = ContextVar('context_data', default=ContextModel().dict())
@@ -41,33 +58,23 @@ class ClientContext:
        Initialize the context manager with the key-value pairs to be set.
        """
        # Validate the data with the Pydantic model
        self.values = ContextModel(**kwargs).dict()
        self.tokens = {}
        self.values = model_to_dict_without_defaults(ContextModel(**kwargs))

    def __enter__(self):
        """
        Set the key-value pairs to the context variable `context_data` when entering the context.
        """
        # Get the current context data
        data = context_data.get()
        # For each key-value pair, save the current value of the key (if it exists) and set the new value
        for key, value in self.values.items():
            self.tokens[key] = data.get(key, None)
            data[key] = value

        data = deepcopy(context_data.get()) if context_data.get() else {}
        data.update(self.values)

        # Update the context data
        context_data.set(data)
        self.token = context_data.set(data)

    def __exit__(self, exc_type, exc_val, exc_tb):
        """
        Reset the context variable `context_data` to its previous values when exiting the context.
        """
        # Get the current context data
        data = context_data.get()
        # For each key, if a previous value exists, reset it. Otherwise, remove the key
        for key in self.values.keys():
            if self.tokens[key] is not None:
                data[key] = self.tokens[key]
            else:
                data.pop(key, None)
        # Update the context data
        context_data.set(data)

        context_data.reset(self.token)
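A minimal usage sketch of the rewritten manager: `ContextVar.set` returns a token, and resetting it in `__exit__` restores exactly the previous state, which makes nesting safe:

    with ClientContext(nuke_repetition=0.5):
        # prompts sent here see nuke_repetition == 0.5
        with ClientContext(nuke_repetition=1.0):
            # the inner block layers onto a deep copy of the outer data
            ...
        # token reset: back to 0.5
    # token reset: back to the defaults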
@@ -9,7 +9,6 @@ from talemate.client.registry import register
from talemate.emit import emit
from talemate.config import load_config
import talemate.client.system_prompts as system_prompts

import structlog

__all__ = [

@@ -142,5 +141,14 @@ class OpenAIClient:

        log.debug("openai response", response=response)

        emit("prompt_sent", data={
            "kind": kind,
            "prompt": prompt,
            "response": response,
            # TODO use tiktoken
            "prompt_tokens": "?",
            "response_tokens": "?",
        })

        self.emit_status(processing=False)
        return response
@@ -417,11 +417,21 @@ class TextGeneratorWebuiClient(RESTTaleMateClient):
            prompt,
        )

        stopping_strings = ["<|end_of_turn|>"]

        conversation_context = client_context_attribute("conversation")

        stopping_strings += [
            f"{character}:" for character in conversation_context["other_characters"]
        ]

        log.debug("prompt_config_conversation", stopping_strings=stopping_strings, conversation_context=conversation_context)

        config = {
            "prompt": prompt,
            "max_new_tokens": 75,
            "chat_prompt_size": self.max_token_length,
            "stopping_strings": ["<|end_of_turn|>", "\n\n"],
            "stopping_strings": stopping_strings,
        }
        config.update(PRESET_TALEMATE_CONVERSATION)
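Concretely: if the conversation context (set via ClientContext elsewhere in this PR) carries, say, `other_characters=["Kaira", "Elmer"]` (names borrowed from the Infinity Quest scene above), the computed list becomes:

    # generation halts before the model starts speaking for another character
    stopping_strings = ["<|end_of_turn|>", "Kaira:", "Elmer:"]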
@@ -481,10 +491,15 @@ class TextGeneratorWebuiClient(RESTTaleMateClient):
            "chat_prompt_size": self.max_token_length,
        }

        config.update(PRESET_SIMPLE_1)
        config.update(PRESET_LLAMA_PRECISE)
        return config

    def prompt_config_analyze_freeform_short(self, prompt: str) -> dict:
        config = self.prompt_config_analyze_freeform(prompt)
        config["max_new_tokens"] = 10
        return config

    def prompt_config_narrate(self, prompt: str) -> dict:
        prompt = self.prompt_template(
            system_prompts.NARRATOR,
@@ -596,7 +611,7 @@ class TextGeneratorWebuiClient(RESTTaleMateClient):
        fn_prompt_config = getattr(self, f"prompt_config_{kind}")
        fn_url = self.prompt_url
        message = fn_prompt_config(prompt)

        if client_context_attribute("nuke_repetition") > 0.0:
            log.info("nuke repetition", offset=client_context_attribute("nuke_repetition"), temperature=message["temperature"], repetition_penalty=message["repetition_penalty"])
            message = jiggle_randomness(message, offset=client_context_attribute("nuke_repetition"))

@@ -611,12 +626,22 @@ class TextGeneratorWebuiClient(RESTTaleMateClient):
        log.debug("send_prompt", token_length=token_length, max_token_length=self.max_token_length)

        message["prompt"] = message["prompt"].strip()

        #print(f"prompt: |{message['prompt']}|")

        response = await self.send_message(message, fn_url())

        response = response.split("#")[0]
        self.emit_status(processing=False)
        await asyncio.sleep(0.01)

        emit("prompt_sent", data={
            "kind": kind,
            "prompt": message["prompt"],
            "response": response,
            "prompt_tokens": token_length,
            # rough estimate, assuming roughly 3.6 characters per token
            "response_tokens": int(len(response) / 3.6)
        })

        return response
@@ -22,6 +22,7 @@ from .cmd_save import CmdSave
from .cmd_save_as import CmdSaveAs
from .cmd_save_characters import CmdSaveCharacters
from .cmd_setenv import CmdSetEnvironmentToScene, CmdSetEnvironmentToCreative
from .cmd_time_util import *
from .cmd_world_state import CmdWorldState
from .cmd_run_helios_test import CmdHeliosTest
from .manager import Manager
@@ -20,8 +20,11 @@ class CmdRebuildArchive(TalemateCommand):
        if not summarizer:
            self.system_message("No summarizer found")
            return True

        self.scene.archived_history = []

        # clear out archived history, but keep pre-established history
        self.scene.archived_history = [
            ah for ah in self.scene.archived_history if ah.get("end") is None
        ]

        while True:
            more = await summarizer.agent.build_archive(self.scene)
50
src/talemate/commands/cmd_time_util.py
Normal file
50
src/talemate/commands/cmd_time_util.py
Normal file
@@ -0,0 +1,50 @@
"""
Commands to manage scene timescale
"""

import asyncio
import logging

from talemate.commands.base import TalemateCommand
from talemate.commands.manager import register
from talemate.prompts.base import set_default_sectioning_handler
from talemate.scene_message import TimePassageMessage
from talemate.util import iso8601_duration_to_human
from talemate.emit import wait_for_input, emit
import isodate

__all__ = [
    "CmdAdvanceTime",
]

@register
class CmdAdvanceTime(TalemateCommand):
    """
    Command class for the 'advance_time' command
    """

    name = "advance_time"
    description = "Advance the scene time by a given amount (expects an ISO 8601 duration)"
    aliases = ["time_a"]

    async def run(self):
        if not self.args:
            self.emit("system", "You must specify an amount of time to advance")
            return

        try:
            isodate.parse_duration(self.args[0])
        except isodate.ISO8601Error:
            self.emit("system", "Invalid duration")
            return

        try:
            msg = self.args[1]
        except IndexError:
            msg = iso8601_duration_to_human(self.args[0], suffix=" later")

        message = TimePassageMessage(ts=self.args[0], message=msg)
        emit('time', message)

        self.scene.push_history(message)
        self.scene.emit_status()
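As a quick reference (not part of the diff), the ISO 8601 duration strings this command validates look like the following; `isodate.parse_duration` returns a `datetime.timedelta` for day/time durations and an `isodate.Duration` when years or months are involved:

```python
import isodate

print(isodate.parse_duration("P3D"))      # -> 3 days, 0:00:00 (a datetime.timedelta)
print(isodate.parse_duration("PT1H30M"))  # -> 1:30:00
isodate.parse_duration("P1Y2M")           # years/months come back as isodate.Duration

try:
    isodate.parse_duration("tomorrow")
except isodate.ISO8601Error:
    print("rejected")  # the command path that emits "Invalid duration"
```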
@@ -1,8 +1,12 @@
from dataclasses import dataclass

__all__ = [
    "ArchiveEntry",
]

@dataclass
class ArchiveEntry:
    text: str
-   start: int
-   end: int
+   start: int = None
+   end: int = None
+   ts: str = None
@@ -29,6 +29,7 @@ class AbortCommand(IOError):
class Emission:
    typ: str
    message: str = None
    message_object: SceneMessage = None
    character: Character = None
    scene: Scene = None
    status: str = None

@@ -43,12 +44,16 @@ def emit(
    if typ not in handlers:
        raise ValueError(f"Unknown message type: {typ}")

    if isinstance(message, SceneMessage):
        kwargs["id"] = message.id
        message_object = message
        message = message.message
    else:
        message_object = None

    handlers[typ].send(
-       Emission(typ=typ, message=message, character=character, scene=scene, **kwargs)
+       Emission(typ=typ, message=message, character=character, scene=scene, message_object=message_object, **kwargs)
    )
@@ -5,6 +5,7 @@ NarratorMessage = signal("narrator")
CharacterMessage = signal("character")
PlayerMessage = signal("player")
DirectorMessage = signal("director")
TimePassageMessage = signal("time")

ClearScreen = signal("clear_screen")

@@ -14,6 +15,7 @@ ReceiveInput = signal("receive_input")
ClientStatus = signal("client_status")
AgentStatus = signal("agent_status")
ClientBootstraps = signal("client_bootstraps")
PromptSent = signal("prompt_sent")

RemoveMessage = signal("remove_message")

@@ -30,6 +32,7 @@ handlers = {
    "character": CharacterMessage,
    "player": PlayerMessage,
    "director": DirectorMessage,
    "time": TimePassageMessage,
    "request_input": RequestInput,
    "receive_input": ReceiveInput,
    "client_status": ClientStatus,

@@ -42,4 +45,5 @@ handlers = {
    "world_state": WorldState,
    "archived_history": ArchivedHistory,
    "message_edited": MessageEdited,
    "prompt_sent": PromptSent,
}
@@ -27,6 +27,7 @@ class HistoryEvent(Event):
class ArchiveEvent(Event):
    text: str
    memory_id: str = None
    ts: str = None


@dataclass
@@ -6,7 +6,9 @@ from dotenv import load_dotenv
import talemate.events as events
from talemate import Actor, Character, Player
from talemate.config import load_config
-from talemate.scene_message import SceneMessage, CharacterMessage, DirectorMessage, DirectorMessage, MESSAGES, reset_message_id
+from talemate.scene_message import (
+    SceneMessage, CharacterMessage, NarratorMessage, DirectorMessage, MESSAGES, reset_message_id
+)
from talemate.world_state import WorldState
import talemate.instance as instance

@@ -144,12 +146,20 @@ async def load_scene_from_data(
    )
    scene.assets.cover_image = scene_data.get("assets", {}).get("cover_image", None)
    scene.assets.load_assets(scene_data.get("assets", {}).get("assets", {}))

    scene.sync_time()
    log.debug("scene time", ts=scene.ts)

    for ah in scene.archived_history:
        if reset:
            break
        ts = ah.get("ts", "PT1S")

        if not ah.get("ts"):
            ah["ts"] = ts

        scene.signals["archive_add"].send(
-           events.ArchiveEvent(scene=scene, event_type="archive_add", text=ah["text"])
+           events.ArchiveEvent(scene=scene, event_type="archive_add", text=ah["text"], ts=ts)
        )

    for character_name, cs in scene.character_states.items():

@@ -312,7 +322,7 @@ def _prepare_legacy_history(entry):
    """

    if entry.startswith("*"):
-       cls = DirectorMessage
+       cls = NarratorMessage
    elif entry.startswith("Director instructs"):
        cls = DirectorMessage
    else:
@@ -177,6 +177,9 @@ class Prompt:
    # prompt variables
    vars: dict = dataclasses.field(default_factory=dict)

    # pad prepared response and ai response with a white-space
    pad_prepended_response: bool = True

    prepared_response: str = ""

    eval_response: bool = False

@@ -282,6 +285,7 @@ class Prompt:
        env.globals["set_question_eval"] = self.set_question_eval
        env.globals["query_scene"] = self.query_scene
        env.globals["query_memory"] = self.query_memory
        env.globals["query_text"] = self.query_text
        env.globals["uuidgen"] = lambda: str(uuid.uuid4())
        env.globals["to_int"] = lambda x: int(x)
        env.globals["config"] = self.config

@@ -336,7 +340,15 @@ class Prompt:
            f"Answer: " + loop.run_until_complete(narrator.narrate_query(query, at_the_end=at_the_end, as_narrative=as_narrative)),
        ])

    def query_text(self, query:str, text:str):
        loop = asyncio.get_event_loop()
        summarizer = instance.get_agent("summarizer")
        query = query.format(**self.vars)
        return "\n".join([
            f"Question: {query}",
            f"Answer: " + loop.run_until_complete(summarizer.analyze_text_and_answer_question(text, query)),
        ])

    def query_memory(self, query:str, as_question_answer:bool=True):
        loop = asyncio.get_event_loop()

@@ -524,7 +536,8 @@ class Prompt:
        response = await client.send_prompt(str(self), kind=kind)

        if not response.lower().startswith(self.prepared_response.lower()):
-           response = self.prepared_response.rstrip() + " " + response.strip()
+           pad = " " if self.pad_prepended_response else ""
+           response = self.prepared_response.rstrip() + pad + response.strip()

        if self.eval_response:
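A minimal sketch (illustrative only, using the names from the hunk above) of what `pad_prepended_response` changes when the prepared response is stitched onto the model output:

```python
def join_prepared(prepared: str, response: str, pad: bool = True) -> str:
    # mirrors the response-joining logic above: if the model did not echo the
    # prepared response itself, prepend it, optionally separated by a space
    if not response.lower().startswith(prepared.lower()):
        sep = " " if pad else ""
        return prepared.rstrip() + sep + response.strip()
    return response

# a prepared partial token such as "Answer (ISO8601 duration): P" presumably
# wants pad=False, so that "P" and "T10M" join with no space in between
print(join_prepared("P", "T10M", pad=False))  # -> "PT10M"
```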
@@ -5,14 +5,19 @@
<|CLOSE_SECTION|>
<|SECTION:CHARACTERS|>
{% for character in characters -%}
-{{ character.name }}: {{ character.description }}
+{{ character.name }}:
+{{ character.filtered_sheet(['name', 'age', 'gender']) }}
+{{ query_memory("what is "+character.name+"'s personality?", as_question_answer=False) }}
+
+{{ character.description }}

{% endfor %}
<|CLOSE_SECTION|>
<|SECTION:DIALOGUE EXAMPLES|>
-{% for dialogue in talking_character.example_dialogue -%}
-{{ dialogue }}
-{% endfor -%}
+{% for example in talking_character.random_dialogue_examples(num=3) -%}
+{{ example }}
+{% endfor %}
<|CLOSE_SECTION|>

<|SECTION:TASK|>

@@ -28,9 +33,7 @@ Based on {{ talking_character.name}}'s example dialogue style, create a continua

You may choose to have {{ talking_character.name}} respond to {{main_character.name}}'s last message, or you may choose to have {{ talking_character.name}} perform a new action that is in line with {{ talking_character.name}}'s character.

{% if scene.history and scene.history[-1].type == "director" -%}
Follow the instructions given to you for your next message as {{ talking_character.name}}. NEVER directly respond to the instructions, but use the direction we have given you as you perform {{ talking_character.name }}'s response to {{main_character.name}}. You can separate thoughts and actual dialogue by containing thoughts inside curly brackets. Example: "{stuff you want to keep private} stuff you want to say publicly."
{% endif -%}
Use an informal and colloquial register with a conversational tone…Overall, their dialog is Informal, conversational, natural, and spontaneous, with a sense of immediacy.
<|CLOSE_SECTION|>

<|SECTION:SCENE|>
@@ -48,12 +48,12 @@ Examples: John, Mary, Jane, Bob, Alice, etc.
Respond with a number only
{% endif -%}
{% if character_sheet.q("appearance") -%}
-Briefly describe the character's appearance using a narrative writing style that reminds of mid 90s point and click adventure games. (2 - 3 sentences). {{ spice("Make it {spice}.", spices) }}
+Briefly describe the character's appearance using a narrative writing style that reminds of mid 90s point and click adventure games. (1 - 2 sentences). {{ spice("Make it {spice}.", spices) }}
{% endif -%}
{% block generate_appearance %}
{% endblock %}
{% if character_sheet.q("personality") -%}
-Briefly describe the character's personality using a narrative writing style that reminds of mid 90s point and click adventure games. (2 - 3 sentences). {{ spice("Make it {spice}.", spices) }}
+Briefly describe the character's personality using a narrative writing style that reminds of mid 90s point and click adventure games. (1 - 2 sentences). {{ spice("Make it {spice}.", spices) }}
{% endif -%}
{% if character_sheet.q("family and fiends") %}
List close family and friends of {{ character_sheet("name") }}. Respond with a comma separated list of names. (2 - 3 names, include age)

@@ -69,7 +69,7 @@ List some things that {{ character_sheet("name") }} dislikes. Respond with a com
Examples: cats, dogs, pizza, etc.
{% endif -%}
{% if character_sheet.q("clothes and accessories") -%}
-Briefly describe the character's clothes and accessories using a narrative writing style that reminds of mid 90s point and click adventure games. (2 - 3 sentences). {{ spice("Make it {spice}.", spices) }}
+Briefly describe the character's clothes and accessories using a narrative writing style that reminds of mid 90s point and click adventure games. (1 - 2 sentences). {{ spice("Make it {spice}.", spices) }}
{% endif %}
{% block generate_misc %}{% endblock -%}
{% for custom_attribute, instructions in custom_attributes.items() -%}
@@ -4,6 +4,6 @@
<|SECTION:TASK|>
Summarize {{ character.name }} based on the character sheet above.

-Use a narrative writing style that reminds of mid 90s point and click adventure games about a {{ content_context }}
+Use a narrative writing style that reminds of mid 90s point and click adventure games about {{ content_context }}
<|CLOSE_SECTION|>
{{ set_prepared_response(character.name+ " is ") }}
@@ -0,0 +1,6 @@

{{ dialogue }}
<|SECTION:TASK|>
Examine the dialogue from the beginning and find the last line that marks a scene change. Repeat the line back to me exactly as it is written.
<|CLOSE_SECTION|>
{{ bot_token }}The first line that marks a scene change is:
@@ -0,0 +1,7 @@
{{ text }}

<|SECTION:TASK|>
Analyze the text above and answer the question.

Question: {{ query }}
{{ bot_token }}Answer:
@@ -0,0 +1,5 @@
<|SECTION:SCENE|>
{{ text }}
<|SECTION:TASK|>
Question: How much time has passed in the scene above?
{{ bot_token }}Answer (ISO8601 duration): P
@@ -5,6 +5,9 @@
<|SECTION:TASK|>
Question: What happens within the dialogue? Summarize into narrative description.
Content Context: This is a specific scene from {{ scene.context }}
Expected Answer: A summarized narrative description of the dialogue that can be inserted into the ongoing story in place of the dialogue.

Include implied time skips (for example characters plan to meet at a later date and then they meet).

<|CLOSE_SECTION|>
Narrator answers:
@@ -1,4 +1,5 @@
from dataclasses import dataclass, field
import isodate

_message_id = 0

@@ -11,11 +12,23 @@ def reset_message_id():
    global _message_id
    _message_id = 0


@dataclass
class SceneMessage:

    """
    Base class for all messages that are sent to the scene.
    """

    # the message itself
    message: str

    # the id of the message
    id: int = field(default_factory=get_message_id)

    # the source of the message (e.g. "ai", "progress_story", "director")
    source: str = ""

    typ = "scene"


@@ -83,12 +96,25 @@ class DirectorMessage(SceneMessage):

        return f"[Story progression instructions for {char_name}: {message}]"


@dataclass
class TimePassageMessage(SceneMessage):
    ts: str = "PT0S"
    source: str = "manual"
    typ = "time"

    def __dict__(self):
        return {
            "message": self.message,
            "id": self.id,
            "typ": "time",
            "source": self.source,
            "ts": self.ts,
        }


MESSAGES = {
    "scene": SceneMessage,
    "character": CharacterMessage,
    "narrator": NarratorMessage,
    "director": DirectorMessage,
    "time": TimePassageMessage,
}
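As a small usage sketch (not from the diff, and assuming the `emit()` behavior shown earlier, which unpacks a `SceneMessage` into `message` and `message_object`):

```python
from talemate.scene_message import TimePassageMessage
from talemate.emit import emit

msg = TimePassageMessage(ts="PT8H", message="8 Hours later")
emit("time", msg)  # emit() copies msg.id into kwargs and forwards msg as message_object
```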
@@ -148,13 +148,13 @@ class CharacterCreatorServerPlugin:
    async def handle_submit_step3(self, data:dict):

        creator = self.scene.get_helper("creator").agent
-       character, _ = self.apply_step_data(data)
+       character, step_data = self.apply_step_data(data)

        self.emit_step_start(3)

        description = await creator.create_character_description(
            character,
-           content_context=self.character_creation_data.scenario_context,
+           content_context=step_data.scenario_context,
        )

        character.description = description
@@ -292,6 +292,24 @@ class WebsocketHandler(Receiver):
            }
        )

    def handle_time(self, emission: Emission):
        self.queue_put(
            {
                "type": "time",
                "message": emission.message,
                "id": emission.id,
                "ts": emission.message_object.ts,
            }
        )

    def handle_prompt_sent(self, emission: Emission):
        self.queue_put(
            {
                "type": "prompt_sent",
                "data": emission.data,
            }
        )

    def handle_clear_screen(self, emission: Emission):
        self.queue_put(
            {
@@ -5,6 +5,7 @@ import os
import random
import traceback
import re
import isodate
from typing import Dict, List, Optional, Union

from blinker import signal

@@ -18,11 +19,12 @@ import talemate.util as util
import talemate.save as save
from talemate.emit import Emitter, emit, wait_for_input
from talemate.util import colored_text, count_tokens, extract_metadata, wrap_text
-from talemate.scene_message import SceneMessage, CharacterMessage, DirectorMessage, NarratorMessage
+from talemate.scene_message import SceneMessage, CharacterMessage, DirectorMessage, NarratorMessage, TimePassageMessage
from talemate.exceptions import ExitScene, RestartSceneLoop, ResetScene, TalemateError, TalemateInterrupt, LLMAccuracyError
from talemate.world_state import WorldState
from talemate.config import SceneConfig
from talemate.scene_assets import SceneAssets
from talemate.client.context import ClientContext, ConversationContext
import talemate.automated_action as automated_action
@@ -139,6 +141,46 @@ class Character:
            return ""

        return random.choice(self.example_dialogue)

    def random_dialogue_examples(self, num:int=3):
        """
        Get multiple random example dialogue lines for this character.

        Will return up to `num` examples and not have any duplicates.
        """

        if not self.example_dialogue:
            return []

        # create a copy of example_dialogue so we don't modify the original
        examples = self.example_dialogue.copy()

        # shuffle the examples so we get a random order
        random.shuffle(examples)

        # now pop examples until we have `num` examples or we run out of examples
        return [examples.pop() for _ in range(min(num, len(examples)))]

    def filtered_sheet(self, attributes: list[str]):
        """
        Same as sheet but only returns the attributes in the given list

        Attributes that don't exist will be ignored
        """

        sheet_list = []

        for key, value in self.base_attributes.items():
            if key.lower() not in attributes:
                continue
            sheet_list.append(f"{key}: {value}")

        return "\n".join(sheet_list)

    def save(self, file_path: str):
        """
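A short usage sketch (illustrative, not from the diff) of the two new helpers, assuming a `Character` instance `char` whose `example_dialogue` and `base_attributes` are populated:

```python
# char.example_dialogue = ["Hi!", "Leave me alone.", "What's that?", "Fine."]
# char.base_attributes = {"Name": "Mira", "Age": "29", "Gender": "female", "Likes": "tea"}

examples = char.random_dialogue_examples(num=3)
# -> three distinct lines in random order, e.g. ["Fine.", "Hi!", "What's that?"]

print(char.filtered_sheet(["name", "age", "gender"]))
# -> Name: Mira
#    Age: 29
#    Gender: female    (keys matched case-insensitively; "Likes" is filtered out)
```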
@@ -242,10 +284,11 @@ class Character:
        if "color" in metadata:
            self.color = metadata["color"]
        if "mes_example" in metadata:
            new_line_match = "\r\n" if "\r\n" in metadata["mes_example"] else "\n"
            for message in metadata["mes_example"].split("<START>"):
-               if message.strip("\r\n"):
+               if message.strip(new_line_match):
                    self.example_dialogue.extend(
-                       [m for m in message.split("\r\n") if m]
+                       [m for m in message.split(new_line_match) if m]
                    )
@@ -317,9 +360,9 @@ class Character:
            }
        })

-       for detail in self.details:
+       for key, detail in self.details.items():
            items.append({
-               "text": f"{self.name} details: {detail}",
+               "text": f"{self.name} - {key}: {detail}",
                "meta": {
                    "character": self.name,
                    "typ": "details",
@@ -413,8 +456,14 @@ class Actor:

        self.agent.character = self.character

-       messages = await self.agent.converse(self, editor=editor)
+       await asyncio.sleep(0)
+       conversation_context = ConversationContext(
+           talking_character=self.character.name,
+           other_characters=[actor.character.name for actor in self.scene.actors if actor != self],
+       )
+
+       with ClientContext(conversation=conversation_context):
+           messages = await self.agent.converse(self, editor=editor)

        return messages
@@ -498,6 +547,7 @@ class Scene(Emitter):
        self.environment = "scene"
        self.goal = None
        self.world_state = WorldState()
        self.ts = "PT0S"

        self.automated_actions = {}
@@ -586,6 +636,10 @@ class Scene(Emitter):

    def push_history(self, messages: list[SceneMessage]):

        """
        Adds one or more messages to the scene history
        """

        if isinstance(messages, SceneMessage):
            messages = [messages]
@@ -599,6 +653,9 @@ class Scene(Emitter):
                if isinstance(self.history[idx], DirectorMessage):
                    self.history.pop(idx)
                    break

            elif isinstance(message, TimePassageMessage):
                self.advance_time(message.ts)

        self.history.extend(messages)
        self.signals["history_add"].send(
@@ -610,19 +667,26 @@ class Scene(Emitter):
        )

    def push_archive(self, entry: data_objects.ArchiveEntry):

        """
        Adds an entry to the archive history.

        The archive history is a list of summarized history entries.
        """

        self.archived_history.append(entry.__dict__)
        self.signals["archive_add"].send(
            events.ArchiveEvent(
                scene=self,
                event_type="archive_add",
                text=entry.text,
                ts=entry.ts,
            )
        )
        emit("archived_history", data={
            "history":[archived_history["text"] for archived_history in self.archived_history]
        })

    def edit_message(self, message_id:int, message:str):
        """
        Finds the message in `history` by its id and will update its contents
@@ -804,7 +868,7 @@ class Scene(Emitter):
        # we then take the history from the end index to the end of the history

        if self.archived_history:
-           end = self.archived_history[-1]["end"]
+           end = self.archived_history[-1].get("end", 0)
        else:
            end = 0
@@ -841,8 +905,17 @@ class Scene(Emitter):
        # description at the beginning of the context history

        archive_insert_idx = 0

        # iterate backwards through archived history and count how many entries
        # there are that have an end index
        num_archived_entries = 0
        if add_archieved_history:
            for i in range(len(self.archived_history) - 1, -1, -1):
                if self.archived_history[i].get("end") is None:
                    break
                num_archived_entries += 1

-       if len(self.archived_history) <= 2 and add_archieved_history:
+       if num_archived_entries <= 2 and add_archieved_history:

        for character in self.characters:
@@ -874,6 +947,13 @@ class Scene(Emitter):
        context_history.insert(archive_insert_idx, "<|CLOSE_SECTION|>")

        while i >= 0 and limit > 0 and add_archieved_history:

            # we skip predefined history, that should be joined in through
            # long term memory queries
            if self.archived_history[i].get("end") is None:
                break

            text = self.archived_history[i]["text"]
            if count_tokens(context_history) + count_tokens(text) > budget:
                break
@@ -1014,6 +1094,11 @@ class Scene(Emitter):
                self.history.pop(i)
                log.info(f"Deleted message {message_id}")
                emit("remove_message", "", id=message_id)

                if isinstance(message, TimePassageMessage):
                    self.sync_time()
                    self.emit_status()

                break

    def emit_status(self):
@@ -1026,6 +1111,7 @@ class Scene(Emitter):
                "scene_config": self.scene_config,
                "assets": self.assets.dict(),
                "characters": [actor.character.serialize for actor in self.actors],
                "scene_time": util.iso8601_duration_to_human(self.ts, suffix="") if self.ts else None,
            },
        )
@@ -1035,7 +1121,58 @@ class Scene(Emitter):
        """
        self.environment = environment
        self.emit_status()

    def advance_time(self, ts: str):
        """
        Accepts an ISO 8601 duration string and advances the scene's world state by that amount
        """
        self.ts = isodate.duration_isoformat(
            isodate.parse_duration(self.ts) + isodate.parse_duration(ts)
        )

    def sync_time(self):
        """
        Loops through self.history looking for TimePassageMessage and will
        advance the world state by the amount of time passed for each
        """

        # reset time
        self.ts = "PT0S"

        for message in self.history:
            if isinstance(message, TimePassageMessage):
                self.advance_time(message.ts)

        self.log.info("sync_time", ts=self.ts)

        # TODO: need to adjust archived_history ts as well
        # but removal also probably means the history needs to be regenerated
        # anyway.

    def calc_time(self, start_idx:int=0, end_idx:int=None):
        """
        Loops through self.history looking for TimePassageMessage and will
        return the sum as an ISO 8601 duration string

        `start_idx` and `end_idx` bound the slice of history that is scanned
        """

        ts = "PT0S"
        found = False

        for message in self.history[start_idx:end_idx]:
            if isinstance(message, TimePassageMessage):
                ts = util.iso8601_add(ts, message.ts)
                found = True

        if not found:
            return None

        return ts
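To make the duration bookkeeping concrete, a small sketch (outside the diff) of the `isodate` arithmetic `advance_time` performs for every `TimePassageMessage`:

```python
import isodate

ts = "PT0S"  # the same zero starting point Scene uses
for step in ("PT8H", "P1D", "PT30M"):
    # parse both durations, add them, and re-serialize back to ISO 8601
    ts = isodate.duration_isoformat(
        isodate.parse_duration(ts) + isodate.parse_duration(step)
    )
print(ts)  # -> "P1DT8H30M"
```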
    async def start(self):
        """
        Start the scene
@@ -1093,7 +1230,7 @@ class Scene(Emitter):
                actor = self.get_character(char_name).actor
            except AttributeError:
                # If the character is not an actor, then it is the narrator
-               self.narrator_message(item)
+               emit(item.typ, item)
                continue
            emit("character", item, character=actor.character)
            if not actor.character.is_player:
@@ -1225,6 +1362,7 @@ class Scene(Emitter):
            "context": scene.context,
            "world_state": scene.world_state.dict(),
            "assets": scene.assets.dict(),
            "ts": scene.ts,
        }

        emit("system", "Saving scene data to: " + filepath)
@@ -4,6 +4,8 @@ import json
import re
import textwrap
import structlog
import isodate
import datetime
from typing import List

from colorama import Back, Fore, Style, init
@@ -297,6 +299,26 @@ def pronouns(gender: str) -> tuple[str, str]:
    return (pronoun, possessive_determiner)


def strip_partial_sentences(text:str) -> str:
    # Sentence ending characters
    sentence_endings = ['.', '!', '?', '"', "*"]

    # Check if the last character is already a sentence ending
    if text[-1] in sentence_endings:
        return text

    # Split the text into words
    words = text.split()

    # Iterate over the words in reverse order until a sentence ending is found
    for i in range(len(words) - 1, -1, -1):
        if words[i][-1] in sentence_endings:
            return ' '.join(words[:i+1])

    # If no sentence ending is found, return the original text
    return text
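A quick illustration (not part of the diff) of what `strip_partial_sentences` does to a truncated completion:

```python
print(strip_partial_sentences('She nodded. "Fine," she said. He opened the'))
# -> She nodded. "Fine," she said.     (the dangling clause is dropped)

print(strip_partial_sentences("All done!"))
# -> All done!                         (already ends a sentence, returned as-is)
```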
def clean_paragraph(paragraph: str) -> str:
    """
    Cleans up a paragraph of text by:
@@ -431,4 +453,145 @@ def fix_faulty_json(data: str) -> str:
    data = re.sub(r',\s*}', '}', data)
    data = re.sub(r',\s*]', ']', data)

    return data

def duration_to_timedelta(duration):
    """Convert an isodate.Duration object to a datetime.timedelta object."""
    days = int(duration.years) * 365 + int(duration.months) * 30 + int(duration.days)
    return datetime.timedelta(days=days)

def timedelta_to_duration(delta):
    """Convert a datetime.timedelta object to an isodate.Duration object."""
    days = delta.days
    years = days // 365
    days %= 365
    months = days // 30
    days %= 30
    return isodate.duration.Duration(years=years, months=months, days=days)

def parse_duration_to_isodate_duration(duration_str):
    """Parse ISO 8601 duration string and ensure the result is an isodate.Duration."""
    parsed_duration = isodate.parse_duration(duration_str)
    if isinstance(parsed_duration, datetime.timedelta):
        days = parsed_duration.days
        years = days // 365
        days %= 365
        months = days // 30
        days %= 30
        return isodate.duration.Duration(years=years, months=months, days=days)
    return parsed_duration

def iso8601_diff(duration_str1, duration_str2):
    # Parse the ISO 8601 duration strings ensuring they are isodate.Duration objects
    duration1 = parse_duration_to_isodate_duration(duration_str1)
    duration2 = parse_duration_to_isodate_duration(duration_str2)

    # Convert to timedelta
    timedelta1 = duration_to_timedelta(duration1)
    timedelta2 = duration_to_timedelta(duration2)

    # Calculate the difference
    difference_timedelta = abs(timedelta1 - timedelta2)

    # Convert back to Duration for further processing
    difference = timedelta_to_duration(difference_timedelta)

    return difference

def iso8601_duration_to_human(iso_duration, suffix:str=" ago"):
    # Parse the ISO8601 duration string into an isodate duration object
    if isinstance(iso_duration, isodate.Duration):
        duration = iso_duration
    else:
        duration = isodate.parse_duration(iso_duration)

    if isinstance(duration, isodate.Duration):
        years = duration.years
        months = duration.months
        days = duration.days
        seconds = duration.tdelta.total_seconds()
    else:
        years, months = 0, 0
        days = duration.days
        seconds = duration.total_seconds() - days * 86400  # Extract time-only part

    hours, seconds = divmod(seconds, 3600)
    minutes, seconds = divmod(seconds, 60)

    components = []
    if years:
        components.append(f"{years} Year{'s' if years > 1 else ''}")
    if months:
        components.append(f"{months} Month{'s' if months > 1 else ''}")
    if days:
        components.append(f"{days} Day{'s' if days > 1 else ''}")
    if hours:
        components.append(f"{int(hours)} Hour{'s' if hours > 1 else ''}")
    if minutes:
        components.append(f"{int(minutes)} Minute{'s' if minutes > 1 else ''}")
    if seconds:
        components.append(f"{int(seconds)} Second{'s' if seconds > 1 else ''}")

    # Construct the human-readable string
    if len(components) > 1:
        last = components.pop()
        human_str = ', '.join(components) + ' and ' + last
    elif components:
        human_str = components[0]
    else:
        human_str = "0 Seconds"

    return f"{human_str}{suffix}"

def iso8601_diff_to_human(start, end):
    if not start or not end:
        return ""

    diff = iso8601_diff(start, end)
    return iso8601_duration_to_human(diff)

def iso8601_add(date_a:str, date_b:str) -> str:
    """
    Adds two ISO 8601 durations together.
    """
    # Validate input
    if not date_a or not date_b:
        return "PT0S"

    new_ts = isodate.parse_duration(date_a.strip()) + isodate.parse_duration(date_b.strip())
    return isodate.duration_isoformat(new_ts)

def iso8601_correct_duration(duration: str) -> str:
    # Split the string into date and time components using 'T' as the delimiter
    parts = duration.split("T")

    # Handle the date component
    date_component = parts[0]
    time_component = ""

    # If there's a time component, process it
    if len(parts) > 1:
        time_component = parts[1]

        # Move misplaced date designators (Y, D) out of the time component;
        # 'M' is ambiguous (months vs minutes) and is left where it is
        for char in "YD":
            if char in time_component:
                index = time_component.index(char)
                date_component += time_component[:index+1]
                time_component = time_component[index+1:]

    # If the date component contains any time values (H, M, S), move them to the time component
    for char in "HMS":
        if char in date_component:
            index = date_component.index(char)
            time_component = date_component[index:] + time_component
            date_component = date_component[:index]

    # Combine the corrected date and time components
    corrected_duration = date_component
    if time_component:
        corrected_duration += "T" + time_component

    return corrected_duration
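A few worked examples (illustrative, not from the diff) of what these helpers produce. Note that `duration_to_timedelta` approximates a year as 365 days and a month as 30 days, so differences are approximate by design:

```python
print(iso8601_add("P1DT2H", "PT30M"))                 # -> "P1DT2H30M"
print(iso8601_duration_to_human("PT90M", suffix=""))  # -> "1 Hour and 30 Minutes"
print(iso8601_diff_to_human("P1Y", "P1Y2M"))          # -> "2 Months ago" (30-day months)
print(iso8601_correct_duration("PT3D"))               # -> "P3D" (misplaced date part moved)
```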
talemate_frontend/src/components/DebugToolPromptLog.vue (new file, +86)
@@ -0,0 +1,86 @@
<template>
    <v-list-subheader class="text-uppercase"><v-icon>mdi-post-outline</v-icon> Prompts
        <v-chip size="x-small" color="primary">{{ max_prompts }}</v-chip>
    </v-list-subheader>

    <v-list-item density="compact">
        <v-slider density="compact" v-model="max_prompts" min="1" hide-details max="250" step="1" color="primary"></v-slider>
    </v-list-item>

    <v-list-item v-for="(prompt, index) in prompts" :key="index" @click="openPromptView(prompt)">
        <v-list-item-title class="text-caption">
            {{ prompt.kind }}
        </v-list-item-title>
        <v-list-item-subtitle>
            <v-chip size="x-small"><v-icon size="14" class="mr-1">mdi-pound</v-icon>{{ prompt.num }}</v-chip>
            <v-chip size="x-small" color="primary">{{ prompt.prompt_tokens }}<v-icon size="14" class="ml-1">mdi-arrow-down-bold</v-icon></v-chip>
            <v-chip size="x-small" color="secondary">{{ prompt.response_tokens }}<v-icon size="14" class="ml-1">mdi-arrow-up-bold</v-icon></v-chip>
        </v-list-item-subtitle>
        <v-divider class="mt-1"></v-divider>
    </v-list-item>

    <DebugToolPromptView ref="promptView" />
</template>
<script>

import DebugToolPromptView from './DebugToolPromptView.vue';

export default {
    name: 'DebugToolPromptLog',
    data() {
        return {
            prompts: [],
            total: 0,
            max_prompts: 50,
        }
    },
    components: {
        DebugToolPromptView,
    },
    inject: [
        'getWebsocket',
        'registerMessageHandler',
        'setWaitingForInput',
    ],

    methods: {
        handleMessage(data) {

            if(data.type === "system" && data.id === "scene.loaded") {
                this.prompts = [];
                this.total = 0;
                return;
            }

            if(data.type === "prompt_sent") {
                // add to prompts array, and truncate if necessary (max_prompts)
                this.prompts.unshift({
                    prompt: data.data.prompt,
                    response: data.data.response,
                    kind: data.data.kind,
                    response_tokens: data.data.response_tokens,
                    prompt_tokens: data.data.prompt_tokens,
                    num: this.total++,
                })

                while(this.prompts.length > this.max_prompts) {
                    this.prompts.pop();
                }
            }
        },

        openPromptView(prompt) {
            this.$refs.promptView.open(prompt);
        }
    },

    created() {
        this.registerMessageHandler(this.handleMessage);
    },

}

</script>
talemate_frontend/src/components/DebugToolPromptView.vue (new file, +68)
@@ -0,0 +1,68 @@
<template>
    <v-dialog v-model="dialog" max-width="50%">
        <v-card>
            <v-card-title>
                #{{ prompt.num }} - {{ prompt.kind }}
            </v-card-title>
            <v-tabs color="primary" v-model="tab">
                <v-tab value="prompt">
                    Prompt
                </v-tab>
                <v-tab value="response">
                    Response
                </v-tab>
            </v-tabs>

            <v-window v-model="tab">
                <v-window-item value="prompt">
                    <v-card flat>
                        <v-card-text style="max-height:600px; overflow-y:scroll;">
                            <div class="prompt-view">{{ prompt.prompt }}</div>
                        </v-card-text>
                    </v-card>
                </v-window-item>
                <v-window-item value="response">
                    <v-card flat>
                        <v-card-text style="max-height:600px; overflow-y:scroll;">
                            <div class="prompt-view">{{ prompt.response }}</div>
                        </v-card-text>
                    </v-card>
                </v-window-item>
            </v-window>
        </v-card>
    </v-dialog>
</template>
<script>

export default {
    name: 'DebugToolPromptView',
    data() {
        return {
            prompt: null,
            dialog: false,
            tab: "prompt"
        }
    },
    methods: {
        open(prompt) {
            this.prompt = prompt;
            this.dialog = true;
        },
        close() {
            this.dialog = false;
        }
    }
}

</script>

<style scoped>

.prompt-view {
    font-family: monospace;
    font-size: 12px;
    white-space: pre-wrap;
    word-wrap: break-word;
}

</style>
talemate_frontend/src/components/DebugTools.vue (new file, +54)
@@ -0,0 +1,54 @@
<template>

    <v-list-item>
        <v-checkbox density="compact" v-model="log_socket_messages" label="Log Websocket Messages" color="primary"></v-checkbox>
        <v-text-field v-if="log_socket_messages === true" density="compact" v-model="filter_socket_messages" label="Filter Websocket Messages" color="primary"></v-text-field>
    </v-list-item>

    <DebugToolPromptLog ref="promptLog"/>
</template>
<script>

import DebugToolPromptLog from './DebugToolPromptLog.vue';

export default {
    name: 'DebugTools',
    components: {
        DebugToolPromptLog,
    },
    data() {
        return {
            expanded: false,
            log_socket_messages: false,
            filter_socket_messages: null,
        }
    },

    inject: [
        'getWebsocket',
        'registerMessageHandler',
        'setWaitingForInput',
    ],

    methods: {
        handleMessage(data) {
            if(this.log_socket_messages) {

                if(this.filter_socket_messages) {
                    if(data.type.indexOf(this.filter_socket_messages) === -1) {
                        return;
                    }
                }

                console.log(data);
            }
        }
    },

    created() {
        this.registerMessageHandler(this.handleMessage);
    },

}

</script>
@@ -40,6 +40,11 @@
            <DirectorMessage :text="message.text" :message_id="message.id" :character="message.character" />
        </div>
    </div>
    <div v-else-if="message.type === 'time'" :class="`message ${message.type}`">
        <div class="time-message" :id="`message-${message.id}`">
            <TimePassageMessage :text="message.text" :message_id="message.id" :ts="message.ts" />
        </div>
    </div>
    <div v-else :class="`message ${message.type}`">
        {{ message.text }}
    </div>
@@ -51,6 +56,7 @@
import CharacterMessage from './CharacterMessage.vue';
import NarratorMessage from './NarratorMessage.vue';
import DirectorMessage from './DirectorMessage.vue';
import TimePassageMessage from './TimePassageMessage.vue';

export default {
    name: 'SceneMessages',

@@ -58,6 +64,7 @@ export default {
        CharacterMessage,
        NarratorMessage,
        DirectorMessage,
        TimePassageMessage,
    },
    data() {
        return {

@@ -87,6 +94,7 @@ export default {
        multiSelect: data.data.multi_select,
        color: data.color,
        sent: false,
        ts: data.ts,
    };
    this.messages.push(message);
},
@@ -160,13 +168,15 @@ export default {

        return
    }

    if (data.message) {
        if (data.type === 'character') {
-           const [character, text] = data.message.split(':');
+           const parts = data.message.split(':');
+           const character = parts.shift();
+           const text = parts.join(':');
            this.messages.push({ id: data.id, type: data.type, character: character.trim(), text: text.trim(), color: data.color }); // Add color property to the message
        } else if (data.type != 'request_input' && data.type != 'client_status' && data.type != 'agent_status') {
-           this.messages.push({ id: data.id, type: data.type, text: data.message, color: data.color, character: data.character, status:data.status }); // Add color property to the message
+           this.messages.push({ id: data.id, type: data.type, text: data.message, color: data.color, character: data.character, status:data.status, ts:data.ts }); // Add color property to the message
        }
    }
@@ -86,6 +86,20 @@
            </v-btn>
        </template>
    </v-tooltip>
    <v-menu>
        <template v-slot:activator="{ props }">
            <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()" color="primary" icon>
                <v-icon>mdi-clock</v-icon>
            </v-btn>
        </template>
        <v-list>
            <v-list-subheader>Advance Time</v-list-subheader>
            <v-list-item v-for="(option, index) in advanceTimeOptions" :key="index"
                @click="sendHotButtonMessage('!advance_time:' + option.value)">
                <v-list-item-title>{{ option.title }}</v-list-item-title>
            </v-list-item>
        </v-list>
    </v-menu>
    <v-divider vertical></v-divider>
    <v-tooltip :disabled="isInputDisabled()" location="top" text="Direct a character">
        <template v-slot:activator="{ props }">

@@ -142,6 +156,7 @@
    </v-card>

</div>

</template>
@@ -154,6 +169,23 @@ export default {
    return {
        commandActive: false,
        commandName: null,

        advanceTimeOptions: [
            {"value" : "P10Y", "title": "10 years"},
            {"value" : "P5Y", "title": "5 years"},
            {"value" : "P1Y", "title": "1 year"},
            {"value" : "P6M", "title": "6 months"},
            {"value" : "P3M", "title": "3 months"},
            {"value" : "P1M", "title": "1 month"},
            {"value" : "P7D:1 Week later", "title": "1 week"},
            {"value" : "P3D", "title": "3 days"},
            {"value" : "P1D", "title": "1 day"},
            {"value" : "PT8H", "title": "8 hours"},
            {"value" : "PT4H", "title": "4 hours"},
            {"value" : "PT1H", "title": "1 hour"},
            {"value" : "PT30M", "title": "30 minutes"},
            {"value" : "PT15M", "title": "15 minutes"}
        ],
    }
},
inject: [
@@ -53,6 +53,14 @@
        </v-list>
    </v-navigation-drawer>

    <!-- debug tools navigation drawer -->
    <v-navigation-drawer v-model="debugDrawer" app location="right">
        <v-list>
            <v-list-subheader class="text-uppercase"><v-icon>mdi-bug</v-icon> Debug Tools</v-list-subheader>
            <DebugTools ref="debugTools"></DebugTools>
        </v-list>
    </v-navigation-drawer>

    <!-- system bar -->
    <v-system-bar>
        <v-icon icon="mdi-network-outline"></v-icon>
@@ -89,11 +97,17 @@
    <v-btn v-if="scene.environment === 'scene'" class="ml-1" @click="openSceneHistory()"><v-icon size="14" class="mr-1">mdi-playlist-star</v-icon>History</v-btn>

    <v-chip size="x-small" v-if="scene.scene_time !== undefined">
        <v-icon>mdi-clock</v-icon>
        {{ scene.scene_time }}
    </v-chip>

</v-toolbar-title>
<v-toolbar-title v-else>
    Talemate
</v-toolbar-title>
<v-spacer></v-spacer>
<v-app-bar-nav-icon @click="toggleNavigation('debug')"><v-icon>mdi-bug</v-icon></v-app-bar-nav-icon>
<v-app-bar-nav-icon @click="openAppConfig()"><v-icon>mdi-cog</v-icon></v-app-bar-nav-icon>
<v-app-bar-nav-icon @click="toggleNavigation('settings')" v-if="configurationRequired()"
    color="red"><v-icon>mdi-application-cog</v-icon></v-app-bar-nav-icon>
@@ -145,6 +159,7 @@ import CharacterSheet from './CharacterSheet.vue';
import SceneHistory from './SceneHistory.vue';
import CreativeEditor from './CreativeEditor.vue';
import AppConfig from './AppConfig.vue';
import DebugTools from './DebugTools.vue';

export default {
    components: {

@@ -160,6 +175,7 @@ export default {
        SceneHistory,
        CreativeEditor,
        AppConfig,
        DebugTools,
    },
    name: 'TalemateApp',
    data() {

@@ -169,6 +185,7 @@ export default {
        sceneActive: false,
        drawer: false,
        sceneDrawer: true,
        debugDrawer: false,
        websocket: null,
        inputDisabled: false,
        waitingForInput: false,

@@ -282,6 +299,7 @@ export default {
        this.scene = {
            name: data.name,
            environment: data.data.environment,
            scene_time: data.data.scene_time,
        }
        this.sceneActive = true;
        return;

@@ -369,6 +387,8 @@ export default {
        this.sceneDrawer = !this.sceneDrawer;
    else if (navigation == "settings")
        this.drawer = !this.drawer;
    else if (navigation == "debug")
        this.debugDrawer = !this.debugDrawer;
},
getClients() {
    if (!this.$refs.aiClient) {
talemate_frontend/src/components/TimePassageMessage.vue (new file, +61)
@@ -0,0 +1,61 @@
<template>
    <div class="time-container" v-if="show && minimized">
        <v-chip closable @click:close="deleteMessage()" color="deep-purple-lighten-3">
            <v-icon class="mr-2">mdi-clock-outline</v-icon>
            <span>{{ text }}</span>
        </v-chip>
    </div>
</template>

<script>
export default {
    data() {
        return {
            show: true,
            minimized: true
        }
    },
    props: ['text', 'message_id', 'ts'],
    inject: ['requestDeleteMessage'],
    methods: {
        toggle() {
            this.minimized = !this.minimized;
        },
        deleteMessage() {
            console.log('deleteMessage', this.message_id);
            this.requestDeleteMessage(this.message_id);
        }
    }
}
</script>

<style scoped>
.highlight {
    color: #9FA8DA;
    font-style: italic;
    margin-left: 2px;
    margin-right: 2px;
}

.highlight:before {
    content: "*";
}

.highlight:after {
    content: "*";
}

.time-text {
    color: #9FA8DA;
}

.time-message {
    display: flex;
    flex-direction: row;
    color: #9FA8DA;
}

.time-container {

}
</style>
@@ -1,2 +1,2 @@
SYSTEM: {{ system_message }}
-USER: {{ set_response(prompt, "\nASSISTANT:") }}
+USER: {{ set_response(prompt, "\nASSISTANT: ") }}
@@ -0,0 +1,4 @@
{{ system_message }}

### Instruction:
{{ set_response(prompt, "\n\n### Response:\n") }}
@@ -1,2 +1,2 @@
SYSTEM: {{ system_message }}
-USER: {{ set_response(prompt, "\nASSISTANT:") }}
+USER: {{ set_response(prompt, "\nASSISTANT: ") }}
templates/llm-prompt/Xwin-LM.jinja2 (new file, +2)
@@ -0,0 +1,2 @@
{{ system_message }}
USER: {{ set_response(prompt, "\nASSISTANT: ") }}