Compare commits


21 Commits

Author SHA1 Message Date
veguAI
b6f4069e8c prep 0.16.1 (#42)
* improve windows install script to check for compatible python versions, also work with multi version python installs

* prep 0.16.1
2023-12-11 21:07:23 +02:00
veguAI
1cb5869f0b Update README.md 2023-12-11 16:03:46 +02:00
veguAI
8ad794aa6c Update README.md 2023-12-11 15:55:40 +02:00
veguAI
611f77a730 Prep 0.16.0 (#40)
* remove dbg message

* more work to make clients and agents modular
allow conversation and narrator to attempt to auto break AI repetition

* application settings refactor
setup third party api keys through application settings

* runpod docs

* fix wording

* docs

* improvements to auto-break-repetition functionality

* more auto-break-repetition improvements

* some cleanup to narrate on dialogue chance calculations

* changing api keys via ux should now reflect in the ux instantly.

* memory agent / chromadb agent - wrap blocking function calls in asyncio

* clean up narrate progression prompt and function

* turn off dedupe debug message for now

* encourage the AI to break repetition as well

* indicate if the current model is missing a LLM prompt template
add prompt template to client modal
fix a bunch of bad vue code

* only show llm prompt when editing client

* OpenHermes-2.5-neural-chat
RpBird-Yi-34B

* fix bug with auto rep break when no repetition was found

* allow giving extra instructions to narrator agent

* emit agents as needed, not constantly

* fix a bunch of vue alerts

* fix request-client-status event

* remove undefined reference

* log client / status emit

* worldstate component track scene time

* Tess
Noromaid

* fix narrate-character prompt context length overflow issues

* disable worldstate refresh button while waiting for response

* history timestamp moved to tooltip off of history button

* fixes #39: using openai embeddings for chromadb tends to error

* adjust conversation agent default instructions

* poetry lock

* remove debug message

* chromadb - agent status error if openai embeddings are selected and api key isn't set

* prep 0.16.0
2023-12-08 22:57:44 +02:00
veguAI
0738899ac9 Prep 0.15.0 (#38)
* send one request to assign all clients

* tweak narrate-after-dialogue prompt

* elevenlabs default to turbo model and make model id configurable

* improve add client dialogue to be more robust

* prompt for default character creation on character card loads

* rename to model as to not conflict with pydantic

* narrate after dialogue strip dialogue generation unless enabled via new option

* starling and capybara-tess

* narrate dialogue context increased

* relabel tts agent to Voice, show agent label in status bar

* don't expect LLM to handle * and " - most of them are not stable / consistent enough with it

* starling template updated

* if allow dialogue in narration is disabled just assume the entire string is a narration

* reorganize the narrate-after-dialogue template

* fix more issues with time passage calculations

* move punkt download to agent init and silence

* improved RAG during conversation if AI selected is enabled in conversation agent

* prompt tweaks

* deepseek, chromomaid-storytelling

* relock

* narrate-after-dialogue prompt tweaks

* runpod status queries every 15 secs instead of 60

* default player character prompting when loading character card from talemate storage

* better chunking during split tts generation

* tweak narrate progress prompt

* improvements to ensure_dialogue_format and tests

* to pytest

* prep 0.15.0

* update packages

* dialogue cleanup fixes

* fix openai default model name
fix not being able to edit client due to name check

* free form analyst was using wrong system prompt causing gpt-4 to actually generate json responses
2023-12-02 00:40:14 +02:00
veguAI
76b7b5c0e0 templating overview (#37)
readme updates

readme updates
2023-11-26 16:35:09 +02:00
veguAI
cae5e8d217 Update README.md
Update textgenwebui setup picture to be in line with current api url requirements
2023-11-26 16:32:50 +02:00
veguAI
97bfd3a672 Add files via upload 2023-11-26 16:31:49 +02:00
veguAI
8fb1341b93 Update README.md
fix references to old repo
2023-11-26 16:25:46 +02:00
fiwo
cba4412f3d Update README.md 2023-11-25 01:49:44 +02:00
fiwo
2ad87f6e8a Prep 0.14.1 (#35)
* tts: don't try to play sound if agent not ready

* tts: flag agent as uninitialized if no voice is selected
tts: fix some config issues with voice selection

* 0.14.1
2023-11-25 00:13:33 +02:00
fiwo
496eb469db Prep 0.14.0 (#34)
* tts agent first progress

* coqui support
voice lists

* orca-2

* tts tweaks

* switch to ux for audio gen

* some tweaks for the new audio queue

* fix error handling if llm fails to create a good world state on initial scene load

* loading creative mode for a new scene will now ask for confirmation if the current scene has unsaved progress

* local tts support

* fix voice list reloading when switching tts api
fix agent config ux to auto save on change, remove save / close buttons

* only do a delayed save on agent config on text input changes

* OrionStar

* don't allow scene loading when llm agents aren't correctly configured

* wire summarization to game loop, summarizer agent configs

* fix issues with time passage

* editor fix narrator messages

* 0.14.0

* poetry lock

* requires_llm_client moved to cls property

* add additional config stubs

* tts still load voices even if the agent is disabled

* fix bug that would keep losing voice selection for tts agent after backend restart

* update tts install requirements

* remove debug output
2023-11-24 22:08:13 +02:00
FInalWombat
b78fec3bac Update README.md 2023-11-20 00:13:08 +02:00
FInalWombat
d250df8950 Prep 0.13.2 (#33)
* fix issue with client removal

* client type not editable after creation (keeps things simple)

* fixes issue with openai client bugging out (api_url not set)

* fix issues with edit client not reflecting changes to UX

* 0.13.2
2023-11-19 20:43:15 +02:00
FInalWombat
816f950afe Prep 0.13.1 (#29)
* narrate after dialog constrained a bit more so it doesn't create something unrelated

* fix issue where textgenwebui client would come back as status ok even though no model was loaded

* 0.13.1
2023-11-19 18:58:40 +02:00
FInalWombat
8fb72fdbe9 Update README.md 2023-11-19 14:05:09 +02:00
FInalWombat
54297a4768 Update README.md 2023-11-18 12:20:02 +02:00
FInalWombat
d7e72d27c5 Prep 0.13.0 (#28)
* requirements.txt file

* windows installs from requirements.txt because of silly permission issues

* relock

* narrator - narrate on dialogue agent actions

* add support for new textgenwebui api

* world state auto regen trigger off of gameloop

* function !rename command

* ensure_dialog_format error handling

* Cat, Nous-Capybara, dolphin-2.2.1

* narrate after dialog rerun fixes, template fixes

* LMStudio client (experimental)

* dolphin yi

* refactor client base

* cruft

* openai client to new base

* more client refactor fixes

* tweak context retrieval prompts

* adjust nous capybara template

* add Tess-Medium

* 0.13.0

* switch back to poetry for windows as well

* error on legacy textgenwebui api

* runpod text gen api url fixed

* fix windows install script

* add fllow instruction template

* Psyfighter2
2023-11-18 12:16:29 +02:00
FInalWombat
f9b23f8705 Update README.md 2023-11-14 11:06:52 +02:00
FInalWombat
37a5873330 Update README.md 2023-11-12 15:43:02 +02:00
FInalWombat
bc3f5d63c8 Add files via upload 2023-11-12 15:42:07 +02:00
77 changed files with 4333 additions and 1351 deletions

README.md

@@ -2,24 +2,32 @@
Allows you to play roleplay scenarios with large language models.
It does not run any large language models itself but relies on existing APIs. Currently supports **text-generation-webui** and **openai**.
This means you need to either have an openai api key or know how to setup [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (locally or remotely via gpu renting. `--api` flag needs to be set)
|![Screenshot 1](docs/img/Screenshot_9.png)|![Screenshot 2](docs/img/Screenshot_2.png)|
|------------------------------------------|------------------------------------------|
![Screenshot 1](docs/img/Screenshot_8.png)
![Screenshot 2](docs/img/Screenshot_2.png)
> :warning: **It does not run any large language models itself but relies on existing APIs. Currently supports OpenAI, text-generation-webui and LMStudio.**
This means you need to either have:
- an [OpenAI](https://platform.openai.com/overview) api key
- OR setup local (or remote via runpod) LLM inference via one of these options:
- [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [LMStudio](https://lmstudio.ai/)
## Current features
- responsive modern ui
- agents
- conversation
- narration
- summarization
- director
- creative
- multi-client (agents can be connected to separate APIs)
- long term memory (experimental)
- conversation: handles character dialogue
- narration: handles narrative exposition
- summarization: handles summarization to compress context while maintaining history
- director: can be used to direct the story / characters
- editor: improves AI responses (very hit and miss at the moment)
- world state: generates world snapshot and handles passage of time (objects and characters)
- creator: character / scenario creator
- tts: text to speech via elevenlabs, coqui studio, coqui local
- multi-client support (agents can be connected to separate APIs)
- long term memory
- chromadb integration
- passage of time
- narrative world state
@@ -36,6 +44,7 @@ Kinda making it up as i go along, but i want to lean more into gameplay through
In no particular order:
- Extension support
- modular agents and clients
- Improved world state
@@ -49,28 +58,31 @@ In no particular order:
- objectives
- quests
- win / lose conditions
- Automatic1111 client
- Automatic1111 client for in place visual generation
# Quickstart
## Installation
Post [here](https://github.com/final-wombat/talemate/issues/17) if you run into problems during installation.
Post [here](https://github.com/vegu-ai/talemate/issues/17) if you run into problems during installation.
### Windows
1. Download and install Python 3.10 or higher from the [official Python website](https://www.python.org/downloads/windows/).
1. Download and install Python 3.10 or Python 3.11 from the [official Python website](https://www.python.org/downloads/windows/). :warning: python3.12 is currently not supported.
1. Download and install Node.js from the [official Node.js website](https://nodejs.org/en/download/). This will also install npm.
1. Download the Talemate project to your local machine. Download from [the Releases page](https://github.com/final-wombat/talemate/releases).
1. Download the Talemate project to your local machine. Download from [the Releases page](https://github.com/vegu-ai/talemate/releases).
1. Unpack the download and run `install.bat` by double clicking it. This will set up the project on your local machine.
- :warning: If your installation errors with a notification to upgrade "Microsoft Visual C++" go to https://visualstudio.microsoft.com/visual-cpp-build-tools/ and click "Download Build Tools" and run it.
- During installation make sure you select the C++ development package (upper left corner)
- Run `reinstall.bat` inside talemate directory
1. Once the installation is complete, you can start the backend and frontend servers by running `start.bat`.
1. Navigate your browser to http://localhost:8080
### Linux
`python 3.10` or higher is required.
`python 3.10` or `python 3.11` is required. :warning: `python 3.12` not supported yet.
1. `git clone git@github.com:final-wombat/talemate`
1. `git clone git@github.com:vegu-ai/talemate`
1. `cd talemate`
1. `source install.sh`
1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050`.
@@ -117,9 +129,11 @@ On the right hand side click the "Add Client" button. If there is no button, you
### Text-generation-webui
> :warning: As of version 0.13.0 the legacy text-generation-webui API `--extension api` is no longer supported, please use their new `--extension openai` api implementation instead.
In the modal if you're planning to connect to text-generation-webui, you can likely leave everything as is and just click Save.
![Add client modal](docs/img/add-client-modal.png)
![Add client modal](docs/img/client-setup-0.13.png)
### OpenAI
@@ -155,7 +169,10 @@ Make sure you save the scene after the character is loaded as it can then be loa
## Further documentation
- Creative mode (docs WIP)
- Prompt template overrides
Please read the documents in the `docs` folder for more advanced configuration and usage.
- [Prompt template overrides](docs/templates.md)
- [Text-to-Speech (TTS)](docs/tts.md)
- [ChromaDB (long term memory)](docs/chromadb.md)
- Runpod Integration
- [Runpod Integration](docs/runpod.md)
- Creative mode

config.example.yaml

@@ -7,20 +7,34 @@ creator:
- a thrilling action story aimed at an adult audience.
- a mysterious adventure aimed at an adult audience.
- an epic sci-fi adventure aimed at an adult audience.
game:
default_player_character:
color: '#6495ed'
description: a young man with a penchant for adventure.
gender: male
name: Elmer
game: {}
## Long-term memory
#chromadb:
# embeddings: instructor
# instructor_device: cuda
# instructor_model: hkunlp/instructor-xl
## Remote LLMs
#openai:
# api_key: <API_KEY>
#runpod:
# api_key: <API_KEY>
# api_key: <API_KEY>
## TTS (Text-to-Speech)
#elevenlabs:
# api_key: <API_KEY>
#coqui:
# api_key: <API_KEY>
#tts:
# device: cuda
# model: tts_models/multilingual/multi-dataset/xtts_v2
# voices:
# - label: <name>
# value: <path to .wav for voice sample>

BIN docs/img/Screenshot_9.png (new binary file, 551 KiB, not shown)
BIN (unnamed image, new binary file, 14 KiB, not shown)
BIN docs/img/runpod-docs-1.png (new binary file, 6.6 KiB, not shown)

docs/runpod.md (new file, 52 lines)

@@ -0,0 +1,52 @@
## RunPod integration
RunPod allows you to quickly set up and run text-generation-webui instances on powerful GPUs, remotely. If you want to run the significantly larger models (like 70B parameters) with reasonable speeds, this is probably the best way to do it.
### Create / grab your RunPod API key and add it to the talemate config
You can manage your RunPod api keys at [https://www.runpod.io/console/user/settings](https://www.runpod.io/console/user/settings)
Add the key to your Talemate config file (config.yaml):
```yaml
runpod:
api_key: <your api key>
```
Then restart Talemate.
### Create a RunPod instance
#### Community Cloud
The community cloud pods are cheaper and there are generally more GPUs available. However, they do not support persistent storage, so you will have to download your model and data every time you deploy a pod.
#### Secure Cloud
The secure cloud pods are more expensive and there are generally fewer GPUs available, but they do support persistent storage.
Persistent volumes are super convenient but optional for our purposes; they are **not** free, and you will have to pay for the storage you use.
### Deploy pod
For us it does not matter which cloud you choose. The only thing that matters is that it deploys a text-generation-webui instance, and you ensure that by choosing the right template.
Pick the GPU you want to use (for 70B models you want at least 48GB of VRAM), click `Deploy`, then select a template and deploy.
When choosing the template for your pod, choose the `RunPod TheBloke LLMs` template. This template is pre-configured with all the dependencies needed to run text-generation-webui. There are other text-generation-webui templates, but they are usually out of date; I have found this one to be consistently good.
> :warning: The name of your pod is important and ensures that Talemate will be able to find it. Talemate will only be able to find pods that have the word `thebloke llms` or `textgen` in their name. (case insensitive)
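For illustration, the discovery rule above presumably amounts to a case-insensitive substring check; this helper is hypothetical, not Talemate's actual code:
```python
def pod_is_discoverable(pod_name: str) -> bool:
    # Mirrors the naming rule above: only pods whose name contains
    # "thebloke llms" or "textgen" (case insensitive) are picked up.
    name = pod_name.lower()
    return "thebloke llms" in name or "textgen" in name

print(pod_is_discoverable("TheBloke LLMs - A6000"))  # True
```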
Once your pod is deployed and has finished setup and is running, the client will automatically appear in the Talemate client list, making it available for you to use like you would use a locally hosted text-generation-webui instance.
![RunPod client](img/runpod-docs-1.png)
### Connecting to the text-generation-webui UI
To manage your text-generation-webui instance, click the `Connect` button in your RunPod pod dashboard at [https://www.runpod.io/console/pods](https://www.runpod.io/console/pods) and in the popup click on `Connect to HTTP Service [Port 7860]` to open the text-generation-webui UI. Then just download and load your model as you normally would.
## :warning: Always check your pod status on the RunPod dashboard
Talemate is not a suitable or reliable way to determine whether your pod is currently running. **Always** check the RunPod dashboard to see whether your pod is running.
While your pod is running it will be eating up your credits, so make sure to stop it when you're not using it.

docs/templates.md (new file, 82 lines)

@@ -0,0 +1,82 @@
# Template Overrides in Talemate
## Introduction to Templates
In Talemate, templates are used to generate dynamic content for various agents involved in roleplaying scenarios. These templates leverage the Jinja2 templating engine, allowing for the inclusion of variables, conditional logic, and custom functions to create rich and interactive narratives.
## Template Structure
A typical template in Talemate consists of several sections, each enclosed within special section tags (`<|SECTION:NAME|>` and `<|CLOSE_SECTION|>`). These sections can include character details, dialogue examples, scenario overviews, tasks, and additional context. Templates utilize loops and blocks to iterate over data and render content conditionally based on the task requirements.
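A minimal illustration of the section syntax (the task text here is hypothetical):
```jinja2
<|SECTION:TASK|>
Continue the scene from {{ character.name }}'s perspective. (2 - 3 sentences)
<|CLOSE_SECTION|>
```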
## Overriding Templates
Users can customize the behavior of Talemate by overriding the default templates. To override a template, create a new template file with the same name in the `./templates/prompts/{agent}/` directory. When a custom template is present, Jinja2 will prioritize it over the default template located in the `./src/talemate/prompts/templates/{agent}/` directory.
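For example, to override a hypothetical `dialogue.jinja2` template belonging to the conversation agent, the two locations would be:
```
./templates/prompts/conversation/dialogue.jinja2               (custom override, takes priority)
./src/talemate/prompts/templates/conversation/dialogue.jinja2  (shipped default)
```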
## Creator Agent Templates
The creator agent templates allow for the creation of new characters within the character creator. Following the naming convention `character-attributes-*.jinja2`, `character-details-*.jinja2`, and `character-example-dialogue-*.jinja2`, users can add new templates that will be available for selection in the character creator.
### Requirements for Creator Templates
- All three types (`attributes`, `details`, `example-dialogue`) need to be available for a choice to be valid in the character creator.
- Users can check the human templates for an understanding of how to structure these templates.
### Example Templates
- [Character Attributes Human Template](src/talemate/prompts/templates/creator/character-attributes-human.jinja2)
- [Character Details Human Template](src/talemate/prompts/templates/creator/character-details-human.jinja2)
- [Character Example Dialogue Human Template](src/talemate/prompts/templates/creator/character-example-dialogue-human.jinja2)
These example templates can serve as a guide for users to create their own custom templates for the character creator.
### Extending Existing Templates
Jinja2's template inheritance feature allows users to extend existing templates and add extra information. By using the `{% extends "template-name.jinja2" %}` tag, a new template can inherit everything from an existing template and then add or override specific blocks of content.
#### Example
To add a description of a character's hairstyle to the human character details template, you could create a new template like this:
```jinja2
{% extends "character-details-human.jinja2" %}
{% block questions %}
{% if character_details.q("what does "+character.name+"'s hair look like?") -%}
Briefly describe {{ character.name }}'s hair-style using a narrative writing style that reminds of mid 90s point and click adventure games. (2 - 3 sentences).
{% endif %}
{% endblock %}
```
This example shows how to extend the `character-details-human.jinja2` template and add a block for questions about the character's hair. The `{% block questions %}` tag is used to define a section where additional questions can be inserted or existing ones can be overridden.
## Advanced Template Topics
### Jinja2 Functions in Talemate
Talemate exposes several functions to the Jinja2 template environment, providing utilities for data manipulation, querying, and controlling content flow. Here's a list of available functions:
1. `set_prepared_response(response, prepend)`: Sets the prepared response with an optional prepend string. This function allows the template to specify the beginning of the LLM response when processing the rendered template. For example, `set_prepared_response("Certainly!")` will ensure that the LLM's response starts with "Certainly!".
2. `set_prepared_response_random(responses, prefix)`: Chooses a random response from a list and sets it as the prepared response with an optional prefix.
3. `set_eval_response(empty)`: Prepares the response for evaluation, optionally initializing a counter for an empty string.
4. `set_json_response(initial_object, instruction, cutoff)`: Prepares for a JSON response with an initial object and optional instruction and cutoff.
5. `set_question_eval(question, trigger, counter, weight)`: Sets up a question for evaluation with a trigger, counter, and weight.
6. `disable_dedupe()`: Disables deduplication of the response text.
7. `random(min, max)`: Generates a random integer between the specified minimum and maximum.
8. `query_scene(query, at_the_end, as_narrative)`: Queries the scene with a question and returns the formatted response.
9. `query_text(query, text, as_question_answer)`: Queries a text with a question and returns the formatted response.
10. `query_memory(query, as_question_answer, **kwargs)`: Queries the memory with a question and returns the formatted response.
11. `instruct_text(instruction, text)`: Instructs the text with a command and returns the result.
12. `retrieve_memories(lines, goal)`: Retrieves memories based on the provided lines and an optional goal.
13. `uuidgen()`: Generates a UUID string.
14. `to_int(x)`: Converts the given value to an integer.
15. `config`: Accesses the configuration settings.
16. `len(x)`: Returns the length of the given object.
17. `count_tokens(x)`: Counts the number of tokens in the given text.
18. `print(x)`: Prints the given object (mainly for debugging purposes).
These functions enhance the capabilities of templates, allowing for dynamic and interactive content generation.
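As a sketch of how a few of these functions might be combined in a custom template (the prompt content is illustrative, not a shipped template):
```jinja2
{% set recent = query_memory("What just happened to " + character.name + "?") %}
{{ recent }}

Describe {{ character.name }}'s reaction in {{ random(1, 3) }} sentences.
{{ set_prepared_response(character.name + " reacts") }}
```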
### Error Handling
Errors encountered during template rendering are logged and propagated to the user interface. This ensures that users are informed of any issues that may arise, allowing them to troubleshoot and resolve problems effectively.
By following these guidelines, users can create custom templates that tailor the Talemate experience to their specific storytelling needs.

docs/tts.md (new file, 84 lines)

@@ -0,0 +1,84 @@
# Talemate Text-to-Speech (TTS) Configuration
Talemate supports Text-to-Speech (TTS) functionality, allowing users to convert text into spoken audio. This document outlines the steps required to configure TTS for Talemate using different providers, including ElevenLabs, Coqui, and a local TTS API.
## Configuring ElevenLabs TTS
To use ElevenLabs TTS with Talemate, follow these steps:
1. Visit [ElevenLabs](https://elevenlabs.com) and create an account if you don't already have one.
2. Click on your profile in the upper right corner of the Eleven Labs website to access your API key.
3. In the `config.yaml` file, under the `elevenlabs` section, set the `api_key` field with your ElevenLabs API key.
Example configuration snippet:
```yaml
elevenlabs:
api_key: <YOUR_ELEVENLABS_API_KEY>
```
## Configuring Coqui TTS
To use Coqui TTS with Talemate, follow these steps:
1. Visit [Coqui](https://app.coqui.ai) and sign up for an account.
2. Go to the [account page](https://app.coqui.ai/account) and scroll to the bottom to find your API key.
3. In the `config.yaml` file, under the `coqui` section, set the `api_key` field with your Coqui API key.
Example configuration snippet:
```yaml
coqui:
api_key: <YOUR_COQUI_API_KEY>
```
## Configuring Local TTS API
For running a local TTS API, Talemate requires specific dependencies to be installed.
### Windows Installation
Run `install-local-tts.bat` to install the necessary requirements.
### Linux Installation
Execute the following command:
```bash
pip install TTS
```
### Model and Device Configuration
1. Choose a TTS model from the [Coqui TTS model list](https://github.com/coqui-ai/TTS).
2. Decide whether to use `cuda` or `cpu` for the device setting.
3. The first time you run TTS through the local API, it will download the specified model. Please note that this may take some time, and the download progress will be visible in the Talemate backend output.
Example configuration snippet:
```yaml
tts:
device: cuda # or 'cpu'
model: tts_models/multilingual/multi-dataset/xtts_v2
```
### Voice Samples Configuration
Configure voice samples by setting the `value` field to the path of a .wav file voice sample. Official samples can be downloaded from [Coqui XTTS-v2 samples](https://huggingface.co/coqui/XTTS-v2/tree/main/samples).
Example configuration snippet:
```yaml
tts:
voices:
- label: English Male
value: path/to/english_male.wav
- label: English Female
value: path/to/english_female.wav
```
## Saving the Configuration
After configuring the `config.yaml` file, save your changes. Talemate will use the updated settings the next time it starts.
For more detailed information on configuring Talemate, refer to the `config.py` file in the Talemate source code and the `config.example.yaml` file for a barebone configuration example.

install-local-tts.bat (new file, 4 lines)

@@ -0,0 +1,4 @@
REM activate the virtual environment
call talemate_env\Scripts\activate
call pip install "TTS>=0.21.1"

install.bat

@@ -1,11 +1,47 @@
@echo off
REM Check for Python version and use a supported version if available
SET PYTHON=python
python -c "import sys; sys.exit(0 if sys.version_info[:2] in [(3, 10), (3, 11)] else 1)" 2>nul
IF NOT ERRORLEVEL 1 (
echo Selected Python version: %PYTHON%
GOTO EndVersionCheck
)
SET PYTHON=python
FOR /F "tokens=*" %%i IN ('py --list') DO (
echo %%i | findstr /C:"-V:3.11 " >nul && SET PYTHON=py -3.11 && GOTO EndPythonCheck
echo %%i | findstr /C:"-V:3.10 " >nul && SET PYTHON=py -3.10 && GOTO EndPythonCheck
)
:EndPythonCheck
%PYTHON% -c "import sys; sys.exit(0 if sys.version_info[:2] in [(3, 10), (3, 11)] else 1)" 2>nul
IF ERRORLEVEL 1 (
echo Unsupported Python version. Please install Python 3.10 or 3.11.
exit /b 1
)
IF "%PYTHON%"=="python" (
echo Default Python version is being used: %PYTHON%
) ELSE (
echo Selected Python version: %PYTHON%
)
:EndVersionCheck
IF ERRORLEVEL 1 (
echo Unsupported Python version. Please install Python 3.10 or 3.11.
exit /b 1
)
REM create a virtual environment
python -m venv talemate_env
%PYTHON% -m venv talemate_env
REM activate the virtual environment
call talemate_env\Scripts\activate
REM upgrade pip and setuptools
python -m pip install --upgrade pip setuptools
REM install poetry
python -m pip install "poetry==1.7.1" "rapidfuzz>=3" -U

poetry.lock (generated file; diff suppressed because it is too large)

pyproject.toml

@@ -4,7 +4,7 @@ build-backend = "poetry.masonry.api"
[tool.poetry]
name = "talemate"
version = "0.13.0"
version = "0.16.1"
description = "AI-backed roleplay and narrative tools"
authors = ["FinalWombat"]
license = "GNU Affero General Public License v3.0"
@@ -32,11 +32,12 @@ beautifulsoup4 = "^4.12.2"
python-dotenv = "^1.0.0"
websockets = "^11.0.3"
structlog = "^23.1.0"
runpod = "==1.2.0"
runpod = "^1.2.0"
nest_asyncio = "^1.5.7"
isodate = ">=0.6.1"
thefuzz = ">=0.20.0"
tiktoken = ">=0.5.1"
nltk = ">=3.8.1"
# ChromaDB
chromadb = ">=0.4.17,<1"

src/talemate/__init__.py

@@ -2,4 +2,4 @@ from .agents import Agent
from .client import TextGeneratorWebuiClient
from .tale_mate import *
VERSION = "0.13.0"
VERSION = "0.16.1"

src/talemate/agents/__init__.py

@@ -1,6 +1,5 @@
from .base import Agent
from .creator import CreatorAgent
from .context import ContextAgent
from .conversation import ConversationAgent
from .director import DirectorAgent
from .memory import ChromaDBMemoryAgent, MemoryAgent
@@ -8,4 +7,5 @@ from .narrator import NarratorAgent
from .registry import AGENT_CLASSES, get_agent_class, register
from .summarize import SummarizeAgent
from .editor import EditorAgent
from .world_state import WorldStateAgent
from .world_state import WorldStateAgent
from .tts import TTSAgent

src/talemate/agents/base.py

@@ -9,6 +9,7 @@ from blinker import signal
import talemate.instance as instance
import talemate.util as util
from talemate.agents.context import ActiveAgent
from talemate.emit import emit
from talemate.events import GameLoopStartEvent
import talemate.emit.async_signals
@@ -23,16 +24,22 @@ __all__ = [
log = structlog.get_logger("talemate.agents.base")
class AgentActionConfig(pydantic.BaseModel):
type: str
label: str
description: str = ""
value: Union[int, float, str, bool]
value: Union[int, float, str, bool, None] = None
default_value: Union[int, float, str, bool] = None
max: Union[int, float, None] = None
min: Union[int, float, None] = None
step: Union[int, float, None] = None
scope: str = "global"
choices: Union[list[dict[str, str]], None] = None
class Config:
arbitrary_types_allowed = True
class AgentAction(pydantic.BaseModel):
enabled: bool = True
@@ -40,7 +47,6 @@ class AgentAction(pydantic.BaseModel):
description: str = ""
config: Union[dict[str, AgentActionConfig], None] = None
def set_processing(fn):
"""
decorator that emits the agent status as processing while the function
@@ -51,11 +57,12 @@ def set_processing(fn):
"""
async def wrapper(self, *args, **kwargs):
try:
await self.emit_status(processing=True)
return await fn(self, *args, **kwargs)
finally:
await self.emit_status(processing=False)
with ActiveAgent(self, fn):
try:
await self.emit_status(processing=True)
return await fn(self, *args, **kwargs)
finally:
await self.emit_status(processing=False)
wrapper.__name__ = fn.__name__
@@ -70,6 +77,8 @@ class Agent(ABC):
agent_type = "agent"
verbose_name = None
set_processing = set_processing
requires_llm_client = True
auto_break_repetition = False
@property
def agent_details(self):
@@ -89,7 +98,7 @@ class Agent(ABC):
if not getattr(self.client, "enabled", True):
return False
if self.client.current_status in ["error", "warning"]:
if self.client and self.client.current_status in ["error", "warning"]:
return False
return self.client is not None
@@ -135,6 +144,7 @@ class Agent(ABC):
"enabled": agent.enabled if agent else True,
"has_toggle": agent.has_toggle if agent else False,
"experimental": agent.experimental if agent else False,
"requires_llm_client": cls.requires_llm_client,
}
actions = getattr(agent, "actions", None)
@@ -275,6 +285,22 @@ class Agent(ABC):
current_memory_context.append(memory)
return current_memory_context
# LLM client related methods. These are called during or after the client
# sends the prompt to the API.
def inject_prompt_paramters(self, prompt_param:dict, kind:str, agent_function_name:str):
"""
Injects prompt parameters before the client sends off the prompt
Override as needed.
"""
pass
def allow_repetition_break(self, kind:str, agent_function_name:str, auto:bool=False):
"""
Returns True if repetition breaking is allowed, False otherwise.
"""
return False
@dataclasses.dataclass
class AgentEmission:

src/talemate/agents/context.py

@@ -1,54 +1,33 @@
from .base import Agent
from .registry import register
from typing import Callable, TYPE_CHECKING
import contextvars
import pydantic
@register
class ContextAgent(Agent):
"""
Agent that helps retrieve context for the continuation
of dialogue.
"""
__all__ = [
"active_agent",
]
agent_type = "context"
active_agent = contextvars.ContextVar("active_agent", default=None)
def __init__(self, client, **kwargs):
self.client = client
class ActiveAgentContext(pydantic.BaseModel):
agent: object
fn: Callable
class Config:
arbitrary_types_allowed=True
@property
def action(self):
return self.fn.__name__
def determine_questions(self, scene_text):
prompt = [
"You are tasked to continue the following dialogue in a roleplaying session, but before you can do so you can ask three questions for extra context."
"",
"What are the questions you would ask?",
"",
"Known context and dialogue:" "",
scene_text,
"",
"Questions:",
"",
]
prompt = "\n".join(prompt)
questions = self.client.send_prompt(prompt, kind="question")
questions = self.clean_result(questions)
return questions.split("\n")
def get_answer(self, question, context):
prompt = [
"Read the context and answer the question:",
"",
"Context:",
"",
context,
"",
f"Question: {question}",
"Answer:",
]
prompt = "\n".join(prompt)
answer = self.client.send_prompt(prompt, kind="answer")
answer = self.clean_result(answer)
return answer
class ActiveAgent:
def __init__(self, agent, fn):
self.agent = ActiveAgentContext(agent=agent, fn=fn)
def __enter__(self):
self.token = active_agent.set(self.agent)
def __exit__(self, *args, **kwargs):
active_agent.reset(self.token)
return False

src/talemate/agents/conversation.py

@@ -85,20 +85,25 @@ class ConversationAgent(Agent):
"instructions": AgentActionConfig(
type="text",
label="Instructions",
value="1-3 sentences.",
value="Write 1-3 sentences. Never wax poetic.",
description="Extra instructions to give the AI for dialog generatrion.",
),
"jiggle": AgentActionConfig(
type="number",
label="Jiggle",
label="Jiggle (Increased Randomness)",
description="If > 0.0 will cause certain generation parameters to have a slight random offset applied to them. The bigger the number, the higher the potential offset.",
value=0.0,
min=0.0,
max=1.0,
step=0.1,
),
)
}
),
"auto_break_repetition": AgentAction(
enabled = True,
label = "Auto Break Repetition",
description = "Will attempt to automatically break AI repetition.",
),
"natural_flow": AgentAction(
enabled = True,
label = "Natural Flow",
@@ -131,7 +136,7 @@ class ConversationAgent(Agent):
config = {
"ai_selected": AgentActionConfig(
type="bool",
label="AI Selected",
label="AI memory retrieval",
description="If enabled, the AI will select the long term memory to use. (will increase how long it takes to generate a response)",
value=False,
),
@@ -406,7 +411,7 @@ class ConversationAgent(Agent):
context = await memory.multi_query(history, max_tokens=500, iterate=5)
self.current_memory_context = "\n".join(context)
self.current_memory_context = "\n\n".join(context)
return self.current_memory_context
@@ -534,3 +539,11 @@ class ConversationAgent(Agent):
actor.scene.push_history(messages)
return messages
def allow_repetition_break(self, kind: str, agent_function_name: str, auto: bool = False):
if auto and not self.actions["auto_break_repetition"].enabled:
return False
return agent_function_name == "converse"

src/talemate/agents/editor.py

@@ -10,7 +10,7 @@ import talemate.emit.async_signals
from talemate.prompts import Prompt
from talemate.scene_message import DirectorMessage, TimePassageMessage
from .base import Agent, set_processing, AgentAction
from .base import Agent, set_processing, AgentAction, AgentActionConfig
from .registry import register
import structlog
@@ -21,6 +21,7 @@ import re
if TYPE_CHECKING:
from talemate.tale_mate import Actor, Character, Scene
from talemate.agents.conversation import ConversationAgentEmission
from talemate.agents.narrator import NarratorAgentEmission
log = structlog.get_logger("talemate.agents.editor")
@@ -40,7 +41,9 @@ class EditorAgent(Agent):
self.is_enabled = True
self.actions = {
"edit_dialogue": AgentAction(enabled=False, label="Edit dialogue", description="Will attempt to improve the quality of dialogue based on the character and scene. Runs automatically after each AI dialogue."),
"fix_exposition": AgentAction(enabled=True, label="Fix exposition", description="Will attempt to fix exposition and emotes, making sure they are displayed in italics. Runs automatically after each AI dialogue."),
"fix_exposition": AgentAction(enabled=True, label="Fix exposition", description="Will attempt to fix exposition and emotes, making sure they are displayed in italics. Runs automatically after each AI dialogue.", config={
"narrator": AgentActionConfig(type="bool", label="Fix narrator messages", description="Will attempt to fix exposition issues in narrator messages", value=True),
}),
"add_detail": AgentAction(enabled=False, label="Add detail", description="Will attempt to add extra detail and exposition to the dialogue. Runs automatically after each AI dialogue.")
}
@@ -59,6 +62,7 @@ class EditorAgent(Agent):
def connect(self, scene):
super().connect(scene)
talemate.emit.async_signals.get("agent.conversation.generated").connect(self.on_conversation_generated)
talemate.emit.async_signals.get("agent.narrator.generated").connect(self.on_narrator_generated)
async def on_conversation_generated(self, emission:ConversationAgentEmission):
"""
@@ -93,6 +97,24 @@ class EditorAgent(Agent):
emission.generation = edited
async def on_narrator_generated(self, emission:NarratorAgentEmission):
"""
Called when a narrator message is generated
"""
if not self.enabled:
return
log.info("editing narrator", emission=emission)
edited = []
for text in emission.generation:
edit = await self.fix_exposition_on_narrator(text)
edited.append(edit)
emission.generation = edited
@set_processing
async def edit_conversation(self, content:str, character:Character):
@@ -127,12 +149,19 @@ class EditorAgent(Agent):
if not self.actions["fix_exposition"].enabled:
return content
#response = await Prompt.request("editor.fix-exposition", self.client, "edit_fix_exposition", vars={
# "content": content,
# "character": character,
# "scene": self.scene,
# "max_length": self.client.max_token_length
#})
if not character.is_player:
if '"' not in content and '*' not in content:
content = util.strip_partial_sentences(content)
character_prefix = f"{character.name}: "
message = content.split(character_prefix)[1]
content = f"{character_prefix}*{message.strip('*')}*"
return content
elif '"' in content:
# if both are present we strip the * and add them back later
# through ensure_dialog_format - right now most LLMs aren't
# smart enough to do quotes and italics at the same time consistently
# especially throughout long conversations
content = content.replace('*', '')
content = util.clean_dialogue(content, main_name=character.name)
content = util.strip_partial_sentences(content)
@@ -140,6 +169,24 @@ class EditorAgent(Agent):
return content
@set_processing
async def fix_exposition_on_narrator(self, content:str):
if not self.actions["fix_exposition"].enabled:
return content
if not self.actions["fix_exposition"].config["narrator"].value:
return content
content = util.strip_partial_sentences(content)
if '"' not in content:
content = f"*{content.strip('*')}*"
else:
content = util.ensure_dialog_format(content)
return content
@set_processing
async def add_detail(self, content:str, character:Character):
"""

src/talemate/agents/memory.py

@@ -6,10 +6,14 @@ from typing import TYPE_CHECKING, Callable, List, Optional, Union
from chromadb.config import Settings
import talemate.events as events
import talemate.util as util
from talemate.emit import emit
from talemate.emit.signals import handlers
from talemate.context import scene_is_loading
from talemate.config import load_config
from talemate.agents.base import set_processing
import structlog
import shutil
import functools
try:
import chromadb
@@ -57,6 +61,15 @@ class MemoryAgent(Agent):
self.scene = scene
self.memory_tracker = {}
self.config = load_config()
handlers["config_saved"].connect(self.on_config_saved)
def on_config_saved(self, event):
openai_key = self.openai_api_key
self.config = load_config()
if openai_key != self.openai_api_key:
loop = asyncio.get_running_loop()
loop.run_until_complete(self.emit_status())
async def set_db(self):
raise NotImplementedError()
@@ -67,33 +80,43 @@ class MemoryAgent(Agent):
async def count(self):
raise NotImplementedError()
@set_processing
async def add(self, text, character=None, uid=None, ts:str=None, **kwargs):
if not text:
return
if self.readonly:
log.debug("memory agent", status="readonly")
return
await self._add(text, character=character, uid=uid, ts=ts, **kwargs)
loop = asyncio.get_running_loop()
await loop.run_in_executor(None, functools.partial(self._add, text, character, uid=uid, ts=ts, **kwargs))
async def _add(self, text, character=None, ts:str=None, **kwargs):
def _add(self, text, character=None, ts:str=None, **kwargs):
raise NotImplementedError()
@set_processing
async def add_many(self, objects: list[dict]):
if self.readonly:
log.debug("memory agent", status="readonly")
return
await self._add_many(objects)
loop = asyncio.get_running_loop()
await loop.run_in_executor(None, self._add_many, objects)
async def _add_many(self, objects: list[dict]):
def _add_many(self, objects: list[dict]):
"""
Add multiple objects to the memory
"""
raise NotImplementedError()
@set_processing
async def get(self, text, character=None, **query):
return await self._get(str(text), character, **query)
loop = asyncio.get_running_loop()
return await loop.run_in_executor(None, functools.partial(self._get, text, character, **query))
async def _get(self, text, character=None, **query):
def _get(self, text, character=None, **query):
raise NotImplementedError()
def get_document(self, id):
@@ -140,6 +163,10 @@ class MemoryAgent(Agent):
"""
memory_context = []
if not query:
return memory_context
for memory in await self.get(query):
if memory in memory_context:
continue
@@ -179,6 +206,10 @@ class MemoryAgent(Agent):
memory_context = []
for query in queries:
if not query:
continue
i = 0
for memory in await self.get(formatter(query), limit=iterate, **where):
if memory in memory_context:
@@ -206,9 +237,14 @@ from .registry import register
@register(condition=lambda: chromadb is not None)
class ChromaDBMemoryAgent(MemoryAgent):
requires_llm_client = False
@property
def ready(self):
if self.embeddings == "openai" and not self.openai_api_key:
return False
if getattr(self, "db_client", None):
return True
return False
@@ -217,12 +253,20 @@ class ChromaDBMemoryAgent(MemoryAgent):
def status(self):
if self.ready:
return "active" if not getattr(self, "processing", False) else "busy"
if self.embeddings == "openai" and not self.openai_api_key:
return "error"
return "waiting"
@property
def agent_details(self):
if self.embeddings == "openai" and not self.openai_api_key:
return "No OpenAI API key set"
return f"ChromaDB: {self.embeddings}"
@property
def embeddings(self):
"""
@@ -265,6 +309,10 @@ class ChromaDBMemoryAgent(MemoryAgent):
def db_name(self):
return getattr(self, "collection_name", "<unnamed>")
@property
def openai_api_key(self):
return self.config.get("openai",{}).get("api_key")
def make_collection_name(self, scene):
if self.USE_OPENAI:
@@ -285,17 +333,19 @@ class ChromaDBMemoryAgent(MemoryAgent):
await asyncio.sleep(0)
return self.db.count()
@set_processing
async def set_db(self):
await self.emit_status(processing=True)
loop = asyncio.get_running_loop()
await loop.run_in_executor(None, self._set_db)
def _set_db(self):
if not getattr(self, "db_client", None):
log.info("chromadb agent", status="setting up db client to persistent db")
self.db_client = chromadb.PersistentClient(
settings=Settings(anonymized_telemetry=False)
)
openai_key = self.config.get("openai").get("api_key") or os.environ.get("OPENAI_API_KEY")
openai_key = self.openai_api_key
self.collection_name = collection_name = self.make_collection_name(self.scene)
@@ -340,8 +390,6 @@ class ChromaDBMemoryAgent(MemoryAgent):
self.db = self.db_client.get_or_create_collection(collection_name)
self.scene._memory_never_persisted = self.db.count() == 0
await self.emit_status(processing=False)
log.info("chromadb agent", status="db ready")
def clear_db(self):
@@ -382,12 +430,10 @@ class ChromaDBMemoryAgent(MemoryAgent):
self.db = None
async def _add(self, text, character=None, uid=None, ts:str=None, **kwargs):
def _add(self, text, character=None, uid=None, ts:str=None, **kwargs):
metadatas = []
ids = []
await self.emit_status(processing=True)
if character:
meta = {"character": character.name, "source": "talemate"}
if ts:
@@ -409,20 +455,16 @@ class ChromaDBMemoryAgent(MemoryAgent):
id = uid or f"__narrator__-{self.memory_tracker['__narrator__']}"
ids = [id]
log.debug("chromadb agent add", text=text, meta=meta, id=id)
#log.debug("chromadb agent add", text=text, meta=meta, id=id)
self.db.upsert(documents=[text], metadatas=metadatas, ids=ids)
await self.emit_status(processing=False)
async def _add_many(self, objects: list[dict]):
def _add_many(self, objects: list[dict]):
documents = []
metadatas = []
ids = []
await self.emit_status(processing=True)
for obj in objects:
documents.append(obj["text"])
meta = obj.get("meta", {})
@@ -435,11 +477,7 @@ class ChromaDBMemoryAgent(MemoryAgent):
ids.append(uid)
self.db.upsert(documents=documents, metadatas=metadatas, ids=ids)
await self.emit_status(processing=False)
async def _get(self, text, character=None, limit:int=15, **kwargs):
await self.emit_status(processing=True)
def _get(self, text, character=None, limit:int=15, **kwargs):
where = {}
where.setdefault("$and", [])
@@ -479,9 +517,10 @@ class ChromaDBMemoryAgent(MemoryAgent):
if distance < 1:
try:
#log.debug("chromadb agent get", ts=ts, scene_ts=self.scene.ts)
date_prefix = util.iso8601_diff_to_human(ts, self.scene.ts)
except Exception:
log.error("chromadb agent", error="failed to get date prefix", ts=ts, scene_ts=self.scene.ts)
except Exception as e:
log.error("chromadb agent", error="failed to get date prefix", details=e, ts=ts, scene_ts=self.scene.ts)
date_prefix = None
if date_prefix:
@@ -495,6 +534,4 @@ class ChromaDBMemoryAgent(MemoryAgent):
if len(results) > limit:
break
await self.emit_status(processing=False)
return results
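
The memory agent changes above all follow the same pattern: the ChromaDB operations (`_add`, `_add_many`, `_get`) stay synchronous, and the async wrappers hand them to the default thread-pool executor so they no longer block the event loop. A minimal standalone sketch of that pattern (not Talemate code):
```python
import asyncio
import functools

def _get(text: str, limit: int = 15) -> list[str]:
    # stand-in for a synchronous (blocking) ChromaDB query
    return [f"memory matching {text!r}"][:limit]

async def get(text: str, **query) -> list[str]:
    loop = asyncio.get_running_loop()
    # functools.partial carries the keyword arguments, since
    # run_in_executor only forwards positional ones
    return await loop.run_in_executor(None, functools.partial(_get, text, **query))

print(asyncio.run(get("old tavern", limit=5)))
```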

src/talemate/agents/narrator.py

@@ -1,13 +1,14 @@
from __future__ import annotations
from typing import TYPE_CHECKING, Callable, List, Optional, Union
import dataclasses
import structlog
import random
import talemate.util as util
from talemate.emit import emit
import talemate.emit.async_signals
from talemate.prompts import Prompt
from talemate.agents.base import set_processing, Agent, AgentAction, AgentActionConfig
from talemate.agents.base import set_processing as _set_processing, Agent, AgentAction, AgentActionConfig, AgentEmission
from talemate.agents.world_state import TimePassageEmission
from talemate.scene_message import NarratorMessage
from talemate.events import GameLoopActorIterEvent
@@ -20,6 +21,33 @@ if TYPE_CHECKING:
log = structlog.get_logger("talemate.agents.narrator")
@dataclasses.dataclass
class NarratorAgentEmission(AgentEmission):
generation: list[str] = dataclasses.field(default_factory=list)
talemate.emit.async_signals.register(
"agent.narrator.generated"
)
def set_processing(fn):
"""
Custom decorator that emits the agent status as processing while the function
is running and then emits the result of the function as a NarratorAgentEmission
"""
@_set_processing
async def wrapper(self, *args, **kwargs):
response = await fn(self, *args, **kwargs)
emission = NarratorAgentEmission(
agent=self,
generation=[response],
)
await talemate.emit.async_signals.get("agent.narrator.generated").send(emission)
return emission.generation[0]
wrapper.__name__ = fn.__name__
return wrapper
@register()
class NarratorAgent(Agent):
@@ -40,6 +68,24 @@ class NarratorAgent(Agent):
# agent actions
self.actions = {
"generation_override": AgentAction(
enabled = True,
label = "Generation Override",
description = "Override generation parameters",
config = {
"instructions": AgentActionConfig(
type="text",
label="Instructions",
value="Never wax poetic.",
description="Extra instructions to give to the AI for narrative generation.",
),
}
),
"auto_break_repetition": AgentAction(
enabled = True,
label = "Auto Break Repetition",
description = "Will attempt to automatically break AI repetition.",
),
"narrate_time_passage": AgentAction(enabled=True, label="Narrate Time Passage", description="Whenever you indicate passage of time, narrate right after"),
"narrate_dialogue": AgentAction(
enabled=True,
@@ -64,10 +110,22 @@ class NarratorAgent(Agent):
max=1.0,
step=0.1,
),
"generate_dialogue": AgentActionConfig(
type="bool",
label="Allow Dialogue in Narration",
description="Allow the narrator to generate dialogue in narration",
value=False,
),
}
),
}
@property
def extra_instructions(self):
if self.actions["generation_override"].enabled:
return self.actions["generation_override"].config["instructions"].value
return ""
def clean_result(self, result):
"""
@@ -125,16 +183,22 @@ class NarratorAgent(Agent):
if not self.actions["narrate_dialogue"].enabled:
return
narrate_on_ai_chance = self.actions["narrate_dialogue"].config["ai_dialog"].value
narrate_on_player_chance = self.actions["narrate_dialogue"].config["player_dialog"].value
narrate_on_ai = random.random() < narrate_on_ai_chance
narrate_on_player = random.random() < narrate_on_player_chance
log.debug(
"narrate on dialog",
narrate_on_ai=narrate_on_ai,
narrate_on_ai_chance=narrate_on_ai_chance,
narrate_on_player=narrate_on_player,
narrate_on_player_chance=narrate_on_player_chance,
)
narrate_on_ai_chance = random.random() < self.actions["narrate_dialogue"].config["ai_dialog"].value
narrate_on_player_chance = random.random() < self.actions["narrate_dialogue"].config["player_dialog"].value
log.debug("narrate on dialog", narrate_on_ai_chance=narrate_on_ai_chance, narrate_on_player_chance=narrate_on_player_chance)
if event.actor.character.is_player and not narrate_on_player_chance:
if event.actor.character.is_player and not narrate_on_player:
return
if not event.actor.character.is_player and not narrate_on_ai_chance:
if not event.actor.character.is_player and not narrate_on_ai:
return
response = await self.narrate_after_dialogue(event.actor.character)
@@ -155,6 +219,7 @@ class NarratorAgent(Agent):
vars = {
"scene": self.scene,
"max_tokens": self.client.max_token_length,
"extra_instructions": self.extra_instructions,
}
)
@@ -172,22 +237,11 @@ class NarratorAgent(Agent):
"""
scene = self.scene
director = scene.get_helper("director").agent
pc = scene.get_player_character()
npcs = list(scene.get_npc_characters())
npc_names= ", ".join([npc.name for npc in npcs])
#summarized_history = await scene.summarized_dialogue_history(
# budget = self.client.max_token_length - 300,
# min_dialogue = 50,
#)
#augmented_context = await self.augment_context()
if narrative_direction is None:
#narrative_direction = await director.direct_narrative(
# scene.context_history(budget=self.client.max_token_length - 500, min_dialogue=20),
#)
narrative_direction = "Slightly move the current scene forward."
self.scene.log.info("narrative_direction", narrative_direction=narrative_direction)
@@ -198,13 +252,12 @@ class NarratorAgent(Agent):
"narrate",
vars = {
"scene": self.scene,
#"summarized_history": summarized_history,
#"augmented_context": augmented_context,
"max_tokens": self.client.max_token_length,
"narrative_direction": narrative_direction,
"player_character": pc,
"npcs": npcs,
"npc_names": npc_names,
"extra_instructions": self.extra_instructions,
}
)
@@ -235,6 +288,7 @@ class NarratorAgent(Agent):
"query": query,
"at_the_end": at_the_end,
"as_narrative": as_narrative,
"extra_instructions": self.extra_instructions,
}
)
log.info("narrate_query", response=response)
@@ -271,6 +325,7 @@ class NarratorAgent(Agent):
"character": character,
"max_tokens": self.client.max_token_length,
"memory": memory_context,
"extra_instructions": self.extra_instructions,
}
)
@@ -295,6 +350,7 @@ class NarratorAgent(Agent):
vars = {
"scene": self.scene,
"max_tokens": self.client.max_token_length,
"extra_instructions": self.extra_instructions,
}
)
@@ -315,6 +371,7 @@ class NarratorAgent(Agent):
"max_tokens": self.client.max_token_length,
"memory": memory_context,
"questions": questions,
"extra_instructions": self.extra_instructions,
}
)
@@ -340,6 +397,7 @@ class NarratorAgent(Agent):
"max_tokens": self.client.max_token_length,
"duration": duration,
"narrative": narrative,
"extra_instructions": self.extra_instructions,
}
)
@@ -365,7 +423,8 @@ class NarratorAgent(Agent):
"scene": self.scene,
"max_tokens": self.client.max_token_length,
"character": character,
"last_line": str(self.scene.history[-1])
"last_line": str(self.scene.history[-1]),
"extra_instructions": self.extra_instructions,
}
)
@@ -373,5 +432,28 @@ class NarratorAgent(Agent):
response = self.clean_result(response.strip().strip("*"))
response = f"*{response}*"
allow_dialogue = self.actions["narrate_dialogue"].config["generate_dialogue"].value
if not allow_dialogue:
response = response.split('"')[0].strip()
response = response.replace("*", "")
response = util.strip_partial_sentences(response)
response = f"*{response}*"
return response
return response
# LLM client related methods. These are called during or after the client
def inject_prompt_paramters(self, prompt_param: dict, kind: str, agent_function_name: str):
log.debug("inject_prompt_paramters", prompt_param=prompt_param, kind=kind, agent_function_name=agent_function_name)
character_names = [f"\n{c.name}:" for c in self.scene.get_characters()]
if prompt_param.get("extra_stopping_strings") is None:
prompt_param["extra_stopping_strings"] = []
prompt_param["extra_stopping_strings"] += character_names
def allow_repetition_break(self, kind: str, agent_function_name: str, auto:bool=False):
if auto and not self.actions["auto_break_repetition"].enabled:
return False
return True

src/talemate/agents/summarize.py

@@ -5,11 +5,13 @@ import traceback
from typing import TYPE_CHECKING, Callable, List, Optional, Union
import talemate.data_objects as data_objects
import talemate.emit.async_signals
import talemate.util as util
from talemate.prompts import Prompt
from talemate.scene_message import DirectorMessage, TimePassageMessage
from talemate.events import GameLoopEvent
from .base import Agent, set_processing
from .base import Agent, set_processing, AgentAction, AgentActionConfig
from .registry import register
import structlog
@@ -34,14 +36,40 @@ class SummarizeAgent(Agent):
def __init__(self, client, **kwargs):
self.client = client
def on_history_add(self, event):
asyncio.ensure_future(self.build_archive(event.scene))
self.actions = {
"archive": AgentAction(
enabled=True,
label="Summarize to long-term memory archive",
description="Automatically summarize scene dialogue when the number of tokens in the history exceeds a threshold. This helps keep the context history from growing too large.",
config={
"threshold": AgentActionConfig(
type="number",
label="Token Threshold",
description="Will summarize when the number of tokens in the history exceeds this threshold",
min=512,
max=8192,
step=256,
value=1536,
)
}
)
}
def connect(self, scene):
super().connect(scene)
scene.signals["history_add"].connect(self.on_history_add)
talemate.emit.async_signals.get("game_loop").connect(self.on_game_loop)
async def on_game_loop(self, emission:GameLoopEvent):
"""
Called when a conversation is generated
"""
await self.build_archive(self.scene)
def clean_result(self, result):
if "#" in result:
result = result.split("#")[0]
@@ -53,21 +81,31 @@ class SummarizeAgent(Agent):
return result
@set_processing
async def build_archive(self, scene, token_threshold:int=1500):
async def build_archive(self, scene):
end = None
if not self.actions["archive"].enabled:
return
if not scene.archived_history:
start = 0
recent_entry = None
else:
recent_entry = scene.archived_history[-1]
start = recent_entry.get("end", 0) + 1
if "end" not in recent_entry:
# permanent historical archive entry, not tied to any specific history entry
# meaning we are still at the beginning of the scene
start = 0
else:
start = recent_entry.get("end", 0)+1
tokens = 0
dialogue_entries = []
ts = "PT0S"
time_passage_termination = False
token_threshold = self.actions["archive"].config["threshold"].value
log.debug("build_archive", start=start, recent_entry=recent_entry)
if recent_entry:
@@ -75,6 +113,9 @@ class SummarizeAgent(Agent):
for i in range(start, len(scene.history)):
dialogue = scene.history[i]
#log.debug("build_archive", idx=i, content=str(dialogue)[:64]+"...")
if isinstance(dialogue, DirectorMessage):
if i == start:
start += 1
@@ -131,7 +172,7 @@ class SummarizeAgent(Agent):
break
adjusted_dialogue.append(line)
dialogue_entries = adjusted_dialogue
end = start + len(dialogue_entries)
end = start + len(dialogue_entries)-1
if dialogue_entries:
summarized = await self.summarize(

src/talemate/agents/tts.py (new file, 617 lines)

@@ -0,0 +1,617 @@
from __future__ import annotations
from typing import Union
import asyncio
import httpx
import io
import os
import pydantic
import nltk
import tempfile
import base64
import uuid
import functools
from nltk.tokenize import sent_tokenize
import talemate.config as config
import talemate.emit.async_signals
import talemate.instance as instance
from talemate.emit import emit
from talemate.emit.signals import handlers
from talemate.events import GameLoopNewMessageEvent
from talemate.scene_message import CharacterMessage, NarratorMessage
from .base import Agent, set_processing, AgentAction, AgentActionConfig
from .registry import register
import structlog
import time
try:
from TTS.api import TTS
except ImportError:
TTS = None
log = structlog.get_logger("talemate.agents.tts")
if not TTS:
# TTS installation is massive and requires a lot of dependencies
# so we don't want to require it unless the user wants to use it
log.info("TTS (local) requires the TTS package, please install with `pip install TTS` if you want to use the local api")
def parse_chunks(text):
text = text.replace("...", "__ellipsis__")
chunks = sent_tokenize(text)
cleaned_chunks = []
for chunk in chunks:
chunk = chunk.replace("*","")
if not chunk:
continue
cleaned_chunks.append(chunk)
for i, chunk in enumerate(cleaned_chunks):
chunk = chunk.replace("__ellipsis__", "...")
cleaned_chunks[i] = chunk
return cleaned_chunks
def clean_quotes(chunk:str):
# if there is an uneven number of quotes, remove the last one if it's
# at the end of the chunk. If it's in the middle, add a closing quote to the end
if chunk.count('"') % 2 == 1:
if chunk.endswith('"'):
chunk = chunk[:-1]
else:
chunk += '"'
return chunk
def rejoin_chunks(chunks:list[str], chunk_size:int=250):
"""
Will combine chunks split by punctuation into a single chunk until
max chunk size is reached
"""
joined_chunks = []
current_chunk = ""
for chunk in chunks:
if len(current_chunk) + len(chunk) > chunk_size:
joined_chunks.append(clean_quotes(current_chunk))
current_chunk = ""
current_chunk += chunk
if current_chunk:
joined_chunks.append(clean_quotes(current_chunk))
return joined_chunks
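A quick usage sketch of the chunking helpers above; it assumes the NLTK `punkt` data is available (the agent downloads it in `__init__`) and that the functions are importable from `talemate.agents.tts`:

```python
import nltk

from talemate.agents.tts import parse_chunks, rejoin_chunks

nltk.download("punkt", quiet=True)  # sentence tokenizer data used by parse_chunks

text = 'She paused... "Are you coming?" He nodded. They walked on together.'
chunks = parse_chunks(text)  # sentence-level chunks, asterisks stripped, ellipsis preserved
print(chunks)

# Pack short sentences together, up to roughly 80 characters per TTS request.
print(rejoin_chunks(chunks, chunk_size=80))
```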
class Voice(pydantic.BaseModel):
value:str
label:str
class VoiceLibrary(pydantic.BaseModel):
api: str
voices: list[Voice] = pydantic.Field(default_factory=list)
last_synced: float = None
@register()
class TTSAgent(Agent):
"""
Text to speech agent
"""
agent_type = "tts"
verbose_name = "Voice"
requires_llm_client = False
@classmethod
def config_options(cls, agent=None):
config_options = super().config_options(agent=agent)
if agent:
config_options["actions"]["_config"]["config"]["voice_id"]["choices"] = [
voice.model_dump() for voice in agent.list_voices_sync()
]
return config_options
def __init__(self, **kwargs):
self.is_enabled = False
nltk.download("punkt", quiet=True)
self.voices = {
"elevenlabs": VoiceLibrary(api="elevenlabs"),
"coqui": VoiceLibrary(api="coqui"),
"tts": VoiceLibrary(api="tts"),
}
self.config = config.load_config()
self.playback_done_event = asyncio.Event()
self.actions = {
"_config": AgentAction(
enabled=True,
label="Configure",
description="TTS agent configuration",
config={
"api": AgentActionConfig(
type="text",
choices=[
# TODO: add local TTS support
{"value": "tts", "label": "TTS (Local)"},
{"value": "elevenlabs", "label": "Eleven Labs"},
{"value": "coqui", "label": "Coqui Studio"},
],
value="tts",
label="API",
description="Which TTS API to use",
onchange="emit",
),
"voice_id": AgentActionConfig(
type="text",
value="default",
label="Narrator Voice",
description="Voice ID/Name to use for TTS",
choices=[]
),
"generate_for_player": AgentActionConfig(
type="bool",
value=False,
label="Generate for player",
description="Generate audio for player messages",
),
"generate_for_npc": AgentActionConfig(
type="bool",
value=True,
label="Generate for NPCs",
description="Generate audio for NPC messages",
),
"generate_for_narration": AgentActionConfig(
type="bool",
value=True,
label="Generate for narration",
description="Generate audio for narration messages",
),
"generate_chunks": AgentActionConfig(
type="bool",
value=False,
label="Split generation",
description="Generate audio chunks for each sentence - will be much more responsive but may loose context to inform inflection",
)
}
),
}
self.actions["_config"].model_dump()
handlers["config_saved"].connect(self.on_config_saved)
@property
def enabled(self):
return self.is_enabled
@property
def has_toggle(self):
return True
@property
def experimental(self):
return False
@property
def not_ready_reason(self) -> str:
"""
Returns a string explaining why the agent is not ready
"""
if self.ready:
return ""
if self.api == "tts":
if not TTS:
return "TTS not installed"
elif self.requires_token and not self.token:
return "No API token"
elif not self.default_voice_id:
return "No voice selected"
@property
def agent_details(self):
suffix = ""
if not self.ready:
suffix = f" - {self.not_ready_reason}"
else:
suffix = f" - {self.voice_id_to_label(self.default_voice_id)}"
api = self.api
choices = self.actions["_config"].config["api"].choices
api_label = api
for choice in choices:
if choice["value"] == api:
api_label = choice["label"]
break
return f"{api_label}{suffix}"
@property
def api(self):
return self.actions["_config"].config["api"].value
@property
def token(self):
api = self.api
return self.config.get(api,{}).get("api_key")
@property
def default_voice_id(self):
return self.actions["_config"].config["voice_id"].value
@property
def requires_token(self):
return self.api != "tts"
@property
def ready(self):
if self.api == "tts":
if not TTS:
return False
return True
return (not self.requires_token or self.token) and self.default_voice_id
@property
def status(self):
if not self.enabled:
return "disabled"
if self.ready:
return "active" if not getattr(self, "processing", False) else "busy"
if self.requires_token and not self.token:
return "error"
if self.api == "tts":
if not TTS:
return "error"
return "uninitialized"
@property
def max_generation_length(self):
if self.api == "elevenlabs":
return 1024
elif self.api == "coqui":
return 250
return 250
def apply_config(self, *args, **kwargs):
try:
api = kwargs["actions"]["_config"]["config"]["api"]["value"]
except KeyError:
api = self.api
api_changed = api != self.api
log.debug("apply_config", api=api, api_changed=api != self.api, current_api=self.api)
super().apply_config(*args, **kwargs)
if api_changed:
try:
self.actions["_config"].config["voice_id"].value = self.voices[api].voices[0].value
except IndexError:
self.actions["_config"].config["voice_id"].value = ""
def connect(self, scene):
super().connect(scene)
talemate.emit.async_signals.get("game_loop_new_message").connect(self.on_game_loop_new_message)
def on_config_saved(self, event):
config = event.data
self.config = config
instance.emit_agent_status(self.__class__, self)
async def on_game_loop_new_message(self, emission:GameLoopNewMessageEvent):
"""
Called when a new message is added during the game loop
"""
if not self.enabled or not self.ready:
return
if not isinstance(emission.message, (CharacterMessage, NarratorMessage)):
return
if isinstance(emission.message, NarratorMessage) and not self.actions["_config"].config["generate_for_narration"].value:
return
if isinstance(emission.message, CharacterMessage):
if emission.message.source == "player" and not self.actions["_config"].config["generate_for_player"].value:
return
elif emission.message.source == "ai" and not self.actions["_config"].config["generate_for_npc"].value:
return
if isinstance(emission.message, CharacterMessage):
character_prefix = emission.message.split(":", 1)[0]
else:
character_prefix = ""
log.info("reactive tts", message=emission.message, character_prefix=character_prefix)
await self.generate(str(emission.message).replace(character_prefix+": ", ""))
def voice(self, voice_id:str) -> Union[Voice, None]:
for voice in self.voices[self.api].voices:
if voice.value == voice_id:
return voice
return None
def voice_id_to_label(self, voice_id:str):
for voice in self.voices[self.api].voices:
if voice.value == voice_id:
return voice.label
return None
def list_voices_sync(self):
loop = asyncio.get_event_loop()
return loop.run_until_complete(self.list_voices())
async def list_voices(self):
if self.requires_token and not self.token:
return []
library = self.voices[self.api]
# TODO: allow re-syncing voices
if library.last_synced:
return library.voices
list_fn = getattr(self, f"_list_voices_{self.api}")
log.info("Listing voices", api=self.api)
library.voices = await list_fn()
library.last_synced = time.time()
# if the current voice cannot be found, reset it
if not self.voice(self.default_voice_id):
self.actions["_config"].config["voice_id"].value = ""
# set loading to false
return library.voices
@set_processing
async def generate(self, text: str):
if not self.enabled or not self.ready or not text:
return
self.playback_done_event.set()
generate_fn = getattr(self, f"_generate_{self.api}")
if self.actions["_config"].config["generate_chunks"].value:
chunks = parse_chunks(text)
chunks = rejoin_chunks(chunks)
else:
chunks = parse_chunks(text)
chunks = rejoin_chunks(chunks, chunk_size=self.max_generation_length)
# Start generating audio chunks in the background
generation_task = asyncio.create_task(self.generate_chunks(generate_fn, chunks))
# Wait for generation to complete
await asyncio.gather(generation_task)
async def generate_chunks(self, generate_fn, chunks):
for chunk in chunks:
chunk = chunk.replace("*","").strip()
log.info("Generating audio", api=self.api, chunk=chunk)
audio_data = await generate_fn(chunk)
self.play_audio(audio_data)
def play_audio(self, audio_data):
# play audio through the python audio player
#play(audio_data)
emit("audio_queue", data={"audio_data": base64.b64encode(audio_data).decode("utf-8")})
self.playback_done_event.set() # Signal that playback is finished
# LOCAL
async def _generate_tts(self, text: str) -> Union[bytes, None]:
if not TTS:
return
tts_config = self.config.get("tts",{})
model = tts_config.get("model")
device = tts_config.get("device", "cpu")
log.debug("tts local", model=model, device=device)
if not hasattr(self, "tts_instance"):
self.tts_instance = TTS(model).to(device)
tts = self.tts_instance
loop = asyncio.get_event_loop()
voice = self.voice(self.default_voice_id)
with tempfile.TemporaryDirectory() as temp_dir:
file_path = os.path.join(temp_dir, f"tts-{uuid.uuid4()}.wav")
await loop.run_in_executor(None, functools.partial(tts.tts_to_file, text=text, speaker_wav=voice.value, language="en", file_path=file_path))
#tts.tts_to_file(text=text, speaker_wav=voice.value, language="en", file_path=file_path)
with open(file_path, "rb") as f:
return f.read()
async def _list_voices_tts(self) -> list[Voice]:
return [Voice(**voice) for voice in self.config.get("tts",{}).get("voices",[])]
# ELEVENLABS
async def _generate_elevenlabs(self, text: str, chunk_size: int = 1024) -> Union[bytes, None]:
api_key = self.token
if not api_key:
return
async with httpx.AsyncClient() as client:
url = f"https://api.elevenlabs.io/v1/text-to-speech/{self.default_voice_id}"
headers = {
"Accept": "audio/mpeg",
"Content-Type": "application/json",
"xi-api-key": api_key,
}
data = {
"text": text,
"model_id": self.config.get("elevenlabs",{}).get("model"),
"voice_settings": {
"stability": 0.5,
"similarity_boost": 0.5
}
}
response = await client.post(url, json=data, headers=headers, timeout=300)
if response.status_code == 200:
bytes_io = io.BytesIO()
for chunk in response.iter_bytes(chunk_size=chunk_size):
if chunk:
bytes_io.write(chunk)
# Return the assembled audio; generate_chunks queues it for playback
return bytes_io.getvalue()
else:
log.error(f"Error generating audio: {response.text}")
async def _list_voices_elevenlabs(self) -> list[Voice]:
url_voices = "https://api.elevenlabs.io/v1/voices"
voices = []
async with httpx.AsyncClient() as client:
headers = {
"Accept": "application/json",
"xi-api-key": self.token,
}
response = await client.get(url_voices, headers=headers, params={"per_page":1000})
speakers = response.json()["voices"]
voices.extend([Voice(value=speaker["voice_id"], label=speaker["name"]) for speaker in speakers])
# sort by name
voices.sort(key=lambda x: x.label)
return voices
# COQUI STUDIO
async def _generate_coqui(self, text: str) -> Union[bytes, None]:
api_key = self.token
if not api_key:
return
async with httpx.AsyncClient() as client:
url = "https://app.coqui.ai/api/v2/samples/xtts/render/"
headers = {
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": f"Bearer {api_key}"
}
data = {
"voice_id": self.default_voice_id,
"text": text,
"language": "en" # Assuming English language for simplicity; this could be parameterized
}
# Make the POST request to Coqui API
response = await client.post(url, json=data, headers=headers, timeout=300)
if response.status_code in [200, 201]:
# Parse the JSON response to get the audio URL
response_data = response.json()
audio_url = response_data.get('audio_url')
if audio_url:
# Make a GET request to download the audio file
audio_response = await client.get(audio_url)
if audio_response.status_code == 200:
# delete the sample from Coqui Studio
# await self._cleanup_coqui(response_data.get('id'))
return audio_response.content
else:
log.error(f"Error downloading audio: {audio_response.text}")
else:
log.error("No audio URL in response")
else:
log.error(f"Error generating audio: {response.text}")
async def _cleanup_coqui(self, sample_id: str):
api_key = self.token
if not api_key or not sample_id:
return
async with httpx.AsyncClient() as client:
url = f"https://app.coqui.ai/api/v2/samples/xtts/{sample_id}"
headers = {
"Authorization": f"Bearer {api_key}"
}
# Make the DELETE request to Coqui API
response = await client.delete(url, headers=headers)
if response.status_code == 204:
log.info(f"Successfully deleted sample with ID: {sample_id}")
else:
log.error(f"Error deleting sample with ID: {sample_id}: {response.text}")
async def _list_voices_coqui(self) -> list[Voice]:
url_speakers = "https://app.coqui.ai/api/v2/speakers"
url_custom_voices = "https://app.coqui.ai/api/v2/voices"
voices = []
async with httpx.AsyncClient() as client:
headers = {
"Authorization": f"Bearer {self.token}"
}
response = await client.get(url_speakers, headers=headers, params={"per_page":1000})
speakers = response.json()["result"]
voices.extend([Voice(value=speaker["id"], label=speaker["name"]) for speaker in speakers])
response = await client.get(url_custom_voices, headers=headers, params={"per_page":1000})
custom_voices = response.json()["result"]
voices.extend([Voice(value=voice["id"], label=voice["name"]) for voice in custom_voices])
# sort by name
voices.sort(key=lambda x: x.label)
return voices
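`play_audio` above ships audio to the frontend as base64 on the `audio_queue` emission. A minimal, hypothetical consumer that reverses the encoding (the handler name and the mp3 file name are illustrative; ElevenLabs returns `audio/mpeg`, other APIs may differ):

```python
import base64

def handle_audio_queue(data: dict):
    # data is the payload emitted by play_audio: {"audio_data": "<base64>"}
    audio_bytes = base64.b64decode(data["audio_data"])
    with open("chunk.mp3", "wb") as f:
        f.write(audio_bytes)
```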


@@ -17,7 +17,7 @@ import talemate.client.system_prompts as system_prompts
import talemate.util as util
from talemate.client.context import client_context_attribute
from talemate.client.model_prompts import model_prompt
from talemate.agents.context import active_agent
# Set up logging level for httpx to WARNING to suppress debug logs.
logging.getLogger('httpx').setLevel(logging.WARNING)
@@ -37,17 +37,17 @@ class ClientBase:
enabled: bool = True
current_status: str = None
max_token_length: int = 4096
randomizable_inference_parameters: list[str] = ["temperature"]
processing: bool = False
connected: bool = False
conversation_retries: int = 5
auto_break_repetition_enabled: bool = True
client_type = "base"
def __init__(
self,
api_url: str,
api_url: str = None,
name = None,
**kwargs,
):
@@ -74,6 +74,17 @@ class ClientBase:
return model_prompt(self.model_name, sys_msg, prompt)
def has_prompt_template(self):
if not self.model_name:
return False
return model_prompt.exists(self.model_name)
def prompt_template_example(self):
if not self.model_name:
return None
return model_prompt(self.model_name, "sysmsg", "prompt<|BOT|>{LLM coercion}")
def reconfigure(self, **kwargs):
"""
@@ -142,6 +153,8 @@ class ClientBase:
return system_prompts.EDITOR
if "world_state" in kind:
return system_prompts.WORLD_STATE
if "analyze_freeform" in kind:
return system_prompts.ANALYST_FREEFORM
if "analyst" in kind:
return system_prompts.ANALYST
if "analyze" in kind:
@@ -181,6 +194,10 @@ class ClientBase:
id=self.name,
details=model_name,
status=status,
data={
"prompt_template_example": self.prompt_template_example(),
"has_prompt_template": self.has_prompt_template(),
}
)
if status_change:
@@ -244,6 +261,10 @@ class ClientBase:
fn_tune_kind = getattr(self, f"tune_prompt_parameters_{kind}", None)
if fn_tune_kind:
fn_tune_kind(parameters)
agent_context = active_agent.get()
if agent_context.agent:
agent_context.agent.inject_prompt_paramters(parameters, kind, agent_context.action)
def tune_prompt_parameters_conversation(self, parameters:dict):
conversation_context = client_context_attribute("conversation")
@@ -275,7 +296,7 @@ class ClientBase:
return ""
async def send_prompt(
self, prompt: str, kind: str = "conversation", finalize: Callable = lambda x: x
self, prompt: str, kind: str = "conversation", finalize: Callable = lambda x: x, retries:int=2
) -> str:
"""
Send a prompt to the AI and return its response.
@@ -298,8 +319,14 @@ class ClientBase:
time_start = time.time()
extra_stopping_strings = prompt_param.pop("extra_stopping_strings", [])
self.log.debug("send_prompt", token_length=token_length, max_token_length=self.max_token_length, parameters=prompt_param)
response = await self.generate(finalized_prompt, prompt_param, kind)
self.log.debug("send_prompt", token_length=token_length, max_token_length=self.max_token_length, parameters=prompt_param)
response = await self.generate(
self.repetition_adjustment(finalized_prompt),
prompt_param,
kind
)
response, finalized_prompt = await self.auto_break_repetition(finalized_prompt, prompt_param, response, kind, retries)
time_end = time.time()
@@ -325,6 +352,125 @@ class ClientBase:
finally:
self.emit_status(processing=False)
async def auto_break_repetition(
self,
finalized_prompt:str,
prompt_param:dict,
response:str,
kind:str,
retries:int,
pad_max_tokens:int=32,
) -> str:
"""
If repetition breaking is enabled, this will retry the prompt if its
response is too similar to other messages in the prompt
This requires the agent to have the allow_repetition_break method
and the jiggle_enabled_for method and the client to have the
auto_break_repetition_enabled attribute set to True
Arguments:
- finalized_prompt: the prompt that was sent
- prompt_param: the parameters that were used
- response: the response that was received
- kind: the kind of generation
- retries: the number of retries left
- pad_max_tokens: increase response max_tokens by this amount per iteration
Returns:
- the response
"""
if not self.auto_break_repetition_enabled:
return response, finalized_prompt
agent_context = active_agent.get()
if self.jiggle_enabled_for(kind, auto=True):
# check if the response is a repetition
# using the default similarity threshold of 95, meaning it needs
# to be really similar to be considered a repetition
is_repetition, similarity_score, matched_line = util.similarity_score(
response,
finalized_prompt.split("\n"),
)
if not is_repetition:
# not a repetition, return the response
self.log.debug("send_prompt no similarity", similarity_score=similarity_score)
return response, finalized_prompt
while is_repetition and retries > 0:
# it's a repetition, retry the prompt with adjusted parameters
self.log.warn(
"send_prompt similarity retry",
agent=agent_context.agent.agent_type,
similarity_score=similarity_score,
retries=retries
)
# first we apply the client's randomness jiggle which will adjust
# parameters like temperature and repetition_penalty, depending
# on the client
#
# this is a cumulative adjustment, so it will add to the previous
# iteration's adjustment, this also means retries should be kept low
# otherwise it will get out of hand and start generating nonsense
self.jiggle_randomness(prompt_param, offset=0.5)
# then we pad the max_tokens by the pad_max_tokens amount
prompt_param["max_tokens"] += pad_max_tokens
# send the prompt again
# we use the repetition_adjustment method to further encourage
# the AI to break the repetition on its own as well.
finalized_prompt = self.repetition_adjustment(finalized_prompt, is_repetitive=True)
response = retried_response = await self.generate(
finalized_prompt,
prompt_param,
kind
)
self.log.debug("send_prompt dedupe sentences", response=response, matched_line=matched_line)
# a lot of the times the response will now contain the repetition + something new
# so we dedupe the response to remove the repetition on sentences level
response = util.dedupe_sentences(response, matched_line, similarity_threshold=85, debug=True)
self.log.debug("send_prompt dedupe sentences (after)", response=response)
# deduping may have removed the entire response, so we check for that
if not util.strip_partial_sentences(response).strip():
# if the response is empty, we set the response to the original
# and try again next loop
response = retried_response
# check if the response is a repetition again
is_repetition, similarity_score, matched_line = util.similarity_score(
response,
finalized_prompt.split("\n"),
)
retries -= 1
return response, finalized_prompt
def count_tokens(self, content:str):
return util.count_tokens(content)
@@ -338,12 +484,35 @@ class ClientBase:
min_offset = offset * 0.3
prompt_config["temperature"] = random.uniform(temp + min_offset, temp + offset)
def jiggle_enabled_for(self, kind:str):
def jiggle_enabled_for(self, kind:str, auto:bool=False) -> bool:
if kind in ["conversation", "story"]:
return True
agent_context = active_agent.get()
agent = agent_context.agent
if kind.startswith("narrate"):
return True
if not agent:
return False
return False
return agent.allow_repetition_break(kind, agent_context.action, auto=auto)
def repetition_adjustment(self, prompt:str, is_repetitive:bool=False):
"""
Breaks the prompt into lines and checks each line for a match with
[$REPETITION|{repetition_adjustment}].
On match and if is_repetitive is True, the line is removed from the prompt and
replaced with the repetition_adjustment.
On match and if is_repetitive is False, the line is removed from the prompt.
"""
lines = prompt.split("\n")
new_lines = []
for line in lines:
if line.startswith("[$REPETITION|"):
if is_repetitive:
new_lines.append(line.split("|")[1][:-1])
else:
new_lines.append(line)
return "\n".join(new_lines)


@@ -39,6 +39,9 @@ class ModelPrompt:
"set_response" : self.set_response
})
def exists(self, model_name:str):
return bool(self.get_template(model_name))
def set_response(self, prompt:str, response_str:str):
prompt = prompt.strip("\n").strip()


@@ -6,7 +6,10 @@ from openai import AsyncOpenAI
from talemate.client.base import ClientBase
from talemate.client.registry import register
from talemate.emit import emit
from talemate.emit.signals import handlers
import talemate.emit.async_signals as async_signals
from talemate.config import load_config
import talemate.instance as instance
import talemate.client.system_prompts as system_prompts
import structlog
import tiktoken
@@ -75,39 +78,40 @@ class OpenAIClient(ClientBase):
client_type = "openai"
conversation_retries = 0
auto_break_repetition_enabled = False
def __init__(self, model="gpt-4-1106-preview", **kwargs):
self.model_name = model
self.api_key_status = None
self.config = load_config()
super().__init__(**kwargs)
# if os.environ.get("OPENAI_API_KEY") is not set, look in the config file
# and set it
if not os.environ.get("OPENAI_API_KEY"):
if self.config.get("openai", {}).get("api_key"):
os.environ["OPENAI_API_KEY"] = self.config["openai"]["api_key"]
self.set_client()
handlers["config_saved"].connect(self.on_config_saved)
@property
def openai_api_key(self):
return os.environ.get("OPENAI_API_KEY")
return self.config.get("openai",{}).get("api_key")
def emit_status(self, processing: bool = None):
if processing is not None:
self.processing = processing
if os.environ.get("OPENAI_API_KEY"):
if self.openai_api_key:
status = "busy" if self.processing else "idle"
model_name = self.model_name or "No model loaded"
model_name = self.model_name
else:
status = "error"
model_name = "No API key set"
if not self.model_name:
status = "error"
model_name = "No model loaded"
self.current_status = status
emit(
@@ -121,12 +125,17 @@ class OpenAIClient(ClientBase):
def set_client(self, max_token_length:int=None):
if not self.openai_api_key:
self.client = AsyncOpenAI(api_key="sk-1111")
log.error("No OpenAI API key set")
if self.api_key_status:
self.api_key_status = False
emit('request_client_status')
emit('request_agent_status')
return
model = self.model_name
self.client = AsyncOpenAI()
self.client = AsyncOpenAI(api_key=self.openai_api_key)
if model == "gpt-3.5-turbo":
self.max_token_length = min(max_token_length or 4096, 4096)
elif model == "gpt-4":
@@ -138,12 +147,27 @@ class OpenAIClient(ClientBase):
else:
self.max_token_length = max_token_length or 2048
if not self.api_key_status:
if self.api_key_status is False:
emit('request_client_status')
emit('request_agent_status')
self.api_key_status = True
log.info("openai set client")
def reconfigure(self, **kwargs):
if "model" in kwargs:
self.model_name = kwargs["model"]
self.set_client(kwargs.get("max_token_length"))
def on_config_saved(self, event):
config = event.data
self.config = config
self.set_client()
def count_tokens(self, content: str):
if not self.model_name:
return 0
return num_tokens_from_messages([{"content": content}], model=self.model_name)
async def status(self):
@@ -179,6 +203,9 @@ class OpenAIClient(ClientBase):
Generates text from the given prompt and parameters.
"""
if not self.openai_api_key:
raise Exception("No OpenAI API key set")
# only gpt-4-1106-preview supports json_object response coercion
supports_json_object = self.model_name in ["gpt-4-1106-preview"]
right = None
@@ -187,7 +214,7 @@ class OpenAIClient(ClientBase):
expected_response = right.strip()
if expected_response.startswith("{") and supports_json_object:
parameters["response_format"] = {"type": "json_object"}
except IndexError:
except (IndexError, ValueError):
pass
human_message = {'role': 'user', 'content': prompt.strip()}
@@ -208,5 +235,4 @@ class OpenAIClient(ClientBase):
return response
except Exception as e:
self.log.error("generate error", e=e)
return ""
raise


@@ -27,6 +27,10 @@ class TextGeneratorWebuiClient(ClientBase):
raise Exception("Could not find model info (wrong api version?)")
response_data = response.json()
model_name = response_data.get("model_name")
if model_name == "None":
model_name = None
return model_name


@@ -23,6 +23,7 @@ from .cmd_save_as import CmdSaveAs
from .cmd_save_characters import CmdSaveCharacters
from .cmd_setenv import CmdSetEnvironmentToScene, CmdSetEnvironmentToCreative
from .cmd_time_util import *
from .cmd_tts import *
from .cmd_world_state import CmdWorldState
from .cmd_run_helios_test import CmdHeliosTest
from .manager import Manager


@@ -32,4 +32,5 @@ class CmdRebuildArchive(TalemateCommand):
if not more:
break
self.scene.sync_time()
await self.scene.commit_to_memory()


@@ -0,0 +1,33 @@
import asyncio
import logging
from talemate.commands.base import TalemateCommand
from talemate.commands.manager import register
from talemate.prompts.base import set_default_sectioning_handler
from talemate.instance import get_agent
__all__ = [
"CmdTestTTS",
]
@register
class CmdTestTTS(TalemateCommand):
"""
Command class for the 'test_tts' command
"""
name = "test_tts"
description = "Test the TTS agent"
aliases = []
async def run(self):
tts_agent = get_agent("tts")
try:
last_message = str(self.scene.history[-1])
except IndexError:
last_message = "Welcome to talemate!"
await tts_agent.generate(last_message)


@@ -6,6 +6,8 @@ import os
from pydantic import BaseModel
from typing import Optional, Dict, Union
from talemate.emit import emit
log = structlog.get_logger("talemate.config")
class Client(BaseModel):
@@ -20,7 +22,7 @@ class Client(BaseModel):
class AgentActionConfig(BaseModel):
value: Union[int, float, str, bool]
value: Union[int, float, str, bool, None] = None
class AgentAction(BaseModel):
enabled: bool = True
@@ -42,17 +44,17 @@ class Agent(BaseModel):
return super().model_dump(exclude_none=True)
class GamePlayerCharacter(BaseModel):
name: str
color: str
gender: str
description: Optional[str]
name: str = ""
color: str = "#3362bb"
gender: str = ""
description: Optional[str] = ""
class Config:
extra = "ignore"
class Game(BaseModel):
default_player_character: GamePlayerCharacter
default_player_character: GamePlayerCharacter = GamePlayerCharacter()
class Config:
extra = "ignore"
@@ -65,6 +67,22 @@ class OpenAIConfig(BaseModel):
class RunPodConfig(BaseModel):
api_key: Union[str,None]=None
class ElevenLabsConfig(BaseModel):
api_key: Union[str,None]=None
model: str = "eleven_turbo_v2"
class CoquiConfig(BaseModel):
api_key: Union[str,None]=None
class TTSVoiceSamples(BaseModel):
label:str
value:str
class TTSConfig(BaseModel):
device:str = "cuda"
model:str = "tts_models/multilingual/multi-dataset/xtts_v2"
voices: list[TTSVoiceSamples] = pydantic.Field(default_factory=list)
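A sketch of wiring a local XTTS voice into this config using the models above (the sample path is hypothetical; `model_dump` is pydantic v2, consistent with the rest of the diff):

```python
from talemate.config import TTSConfig, TTSVoiceSamples

tts = TTSConfig(
    device="cpu",  # override the "cuda" default on machines without a GPU
    voices=[
        TTSVoiceSamples(label="Narrator", value="./voices/narrator-sample.wav"),
    ],
)
print(tts.model_dump())
```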
class ChromaDB(BaseModel):
instructor_device: str="cpu"
@@ -85,6 +103,12 @@ class Config(BaseModel):
chromadb: ChromaDB = ChromaDB()
elevenlabs: ElevenLabsConfig = ElevenLabsConfig()
coqui: CoquiConfig = CoquiConfig()
tts: TTSConfig = TTSConfig()
class Config:
extra = "ignore"
@@ -136,4 +160,6 @@ def save_config(config, file_path: str = "./config.yaml"):
return None
with open(file_path, "w") as file:
yaml.dump(config, file)
yaml.dump(config, file)
emit("config_saved", data=config)


@@ -13,7 +13,9 @@ RequestInput = signal("request_input")
ReceiveInput = signal("receive_input")
ClientStatus = signal("client_status")
RequestClientStatus = signal("request_client_status")
AgentStatus = signal("agent_status")
RequestAgentStatus = signal("request_agent_status")
ClientBootstraps = signal("client_bootstraps")
PromptSent = signal("prompt_sent")
@@ -24,8 +26,12 @@ CommandStatus = signal("command_status")
WorldState = signal("world_state")
ArchivedHistory = signal("archived_history")
AudioQueue = signal("audio_queue")
MessageEdited = signal("message_edited")
ConfigSaved = signal("config_saved")
handlers = {
"system": SystemMessage,
"narrator": NarratorMessage,
@@ -36,7 +42,9 @@ handlers = {
"request_input": RequestInput,
"receive_input": ReceiveInput,
"client_status": ClientStatus,
"request_client_status": RequestClientStatus,
"agent_status": AgentStatus,
"request_agent_status": RequestAgentStatus,
"client_bootstraps": ClientBootstraps,
"clear_screen": ClearScreen,
"remove_message": RemoveMessage,
@@ -46,4 +54,6 @@ handlers = {
"archived_history": ArchivedHistory,
"message_edited": MessageEdited,
"prompt_sent": PromptSent,
"audio_queue": AudioQueue,
"config_saved": ConfigSaved,
}
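These are blinker-style signals, so subscribing to the new `config_saved` signal is a single `connect` call. A minimal sketch (the listener name is illustrative), mirroring how the TTS agent consumes `event.data` above:

```python
from talemate.emit.signals import handlers

def on_config_saved(event):
    # event.data carries the freshly saved config dict (see save_config)
    print("config saved, keys:", list(event.data.keys()))

handlers["config_saved"].connect(on_config_saved)
```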


@@ -4,7 +4,7 @@ from dataclasses import dataclass
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from talemate.tale_mate import Scene, Actor
from talemate.tale_mate import Scene, Actor, SceneMessage
__all__ = [
"Event",
@@ -46,4 +46,8 @@ class GameLoopStartEvent(GameLoopEvent):
@dataclass
class GameLoopActorIterEvent(GameLoopEvent):
actor: Actor
actor: Actor
@dataclass
class GameLoopNewMessageEvent(GameLoopEvent):
message: SceneMessage


@@ -1,10 +1,11 @@
"""
Keep track of clients and agents
"""
import asyncio
import talemate.agents as agents
import talemate.client as clients
from talemate.emit import emit
from talemate.emit.signals import handlers
import talemate.client.bootstrap as bootstrap
import structlog
@@ -14,6 +15,8 @@ AGENTS = {}
CLIENTS = {}
def get_agent(typ: str, *create_args, **create_kwargs):
agent = AGENTS.get(typ)
@@ -94,11 +97,19 @@ async def emit_clients_status():
"""
Will emit status of all clients
"""
#log.debug("emit", type="client status")
for client in CLIENTS.values():
if client:
await client.status()
def _sync_emit_clients_status(*args, **kwargs):
"""
Will emit status of all clients
in synchronous mode
"""
loop = asyncio.get_event_loop()
loop.run_until_complete(emit_clients_status())
handlers["request_client_status"].connect(_sync_emit_clients_status)
def emit_client_bootstraps():
emit(
@@ -144,11 +155,13 @@ def emit_agent_status(cls, agent=None):
)
def emit_agents_status():
def emit_agents_status(*args, **kwargs):
"""
Will emit status of all agents
"""
#log.debug("emit", type="agent status")
for typ, cls in agents.AGENT_CLASSES.items():
agent = AGENTS.get(typ)
emit_agent_status(cls, agent)
handlers["request_agent_status"].connect(emit_agents_status)


@@ -190,8 +190,11 @@ async def load_scene_from_data(
await scene.add_actor(actor)
if scene.environment != "creative":
await scene.world_state.request_update(initial_only=True)
try:
await scene.world_state.request_update(initial_only=True)
except Exception as e:
log.error("world_state.request_update", error=e)
# the scene has been saved before (since we just loaded it), so we set the saved flag to True
# as long as the scene has a memory_id.
scene.saved = "memory_id" in scene_data


@@ -343,7 +343,7 @@ class Prompt:
parsed_text = env.from_string(prompt_text).render(self.vars)
if self.dedupe_enabled:
parsed_text = dedupe_string(parsed_text, debug=True)
parsed_text = dedupe_string(parsed_text, debug=False)
parsed_text = remove_extra_linebreaks(parsed_text)
@@ -395,7 +395,7 @@ class Prompt:
f"Answer: " + loop.run_until_complete(memory.query(query, **kwargs)),
])
else:
return loop.run_until_complete(memory.multi_query(query.split("\n"), **kwargs))
return loop.run_until_complete(memory.multi_query([q for q in query.split("\n") if q.strip()], **kwargs))
def instruct_text(self, instruction:str, text:str):
loop = asyncio.get_event_loop()
@@ -473,8 +473,6 @@ class Prompt:
# remove all duplicate whitespace
cleaned = re.sub(r"\s+", " ", cleaned)
print("set_json_response", cleaned)
return self.set_prepared_response(cleaned)
@@ -518,7 +516,7 @@ class Prompt:
log.warning("parse_json_response error on first attempt - sending to AI to fix", response=response, error=e)
fixed_response = await self.client.send_prompt(
f"fix the syntax errors in this JSON string, but keep the structure as is.\n\nError:{e}\n\n```json\n{response}\n```<|BOT|>"+"{",
f"fix the syntax errors in this JSON string, but keep the structure as is. Remove any comments.\n\nError:{e}\n\n```json\n{response}\n```<|BOT|>"+"{",
kind="analyze_long",
)
log.warning("parse_json_response error on first attempt - sending to AI to fix", response=response, error=e)


@@ -33,8 +33,7 @@ You may choose to have {{ talking_character.name}} respond to the conversation, o
Use an informal and colloquial register with a conversational tone. Overall, their dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy.
Spoken word should be enclosed in double quotes, e.g. "Hello, how are you?"
Narration and actions should be enclosed in asterisks, e.g. *She smiles.*
Spoken words MUST be enclosed in double quotes, e.g. {{ talking_character.name}}: "spoken words.".
{{ extra_instructions }}
<|CLOSE_SECTION|>
{% if memory -%}


@@ -8,12 +8,19 @@ Scenario Premise: {{ scene.description }}
{% endfor %}
{% endblock -%}
<|CLOSE_SECTION|>
{% for scene_context in scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context())) -%}
{% for scene_context in scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context()), min_dialogue=25) -%}
{{ scene_context }}
{% endfor %}
<|SECTION:TASK|>
Based on the previous line '{{ last_line }}', create the next line of narration. This line should focus solely on describing sensory details (like sounds, sights, smells, tactile sensations) or external actions that move the story forward. Avoid including any character's internal thoughts, feelings, or dialogue. Your narration should directly respond to '{{ last_line }}', either by elaborating on the immediate scene or by subtly advancing the plot. Generate exactly one sentence of new narration. If the character is trying to determine some state, truth or situation, try to answer as part of the narration.
Be creative and generate something new and interesting.
Be creative and generate something new and interesting, but stay true to the setting and context of the story so far.
Use an informal and colloquial register with a conversational tone. Overall, the narrative is informal, conversational, natural, and spontaneous, with a sense of immediacy.
Narration style should be that of a 90s point and click adventure game. You are omniscient and can describe the scene in detail.
Only generate new narration. {{ extra_instructions }}
[$REPETITION|Narration is getting repetitive. Try to choose different words to break up the repetitive text.]
<|CLOSE_SECTION|>
{{ set_prepared_response('*') }}


@@ -8,23 +8,23 @@ Last time we checked on {{ character.name }}:
{% endfor %}
<|CLOSE_SECTION|>
{% for scene_context in scene.context_history(budget=max_tokens-300, min_dialogue=30) -%}
{% for scene_context in scene.context_history(budget=max_tokens-300, min_dialogue=20) -%}
{{ scene_context }}
{% endfor %}
<|SECTION:INFORMATION|>
{{ query_memory("How old is {character.name}?") }}
{{ query_scene("Where is {character.name}?") }}
{{ query_scene("what is {character.name} doing?") }}
{{ query_scene("what is {character.name} wearing?") }}
{{ query_scene("Where is {character.name} and what is {character.name} doing?") }}
{{ query_scene("what is {character.name} wearing? Be explicit.") }}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Last line of dialogue: {{ scene.history[-1] }}
Questions: Where is {{ character.name}} currently and what are they doing? What is {{ character.name }}'s appearance at the end of the dialogue? What is {{ character.pronoun_2 }} wearing? What position is {{ character.pronoun_2 }} in?
Instruction: Answer the questions to describe {{ character.name }}'s appearance at the end of the dialogue and summarize into narrative description. Use the whole dialogue for context.
Instruction: Answer the questions to describe {{ character.name }}'s appearance at the end of the dialogue and summarize into narrative description. Use the whole dialogue for context. You must fill in gaps using imagination as long as it fits the existing context. You will provide a confident and decisive answer to the question.
Content Context: This is a specific scene from {{ scene.context }}
Narration style: point and click adventure game from the 90s
Expected Answer: A summarized visual description of {{ character.name }}'s appearance at the dialogue.
Expected Answer: A brief summarized visual description of {{ character.name }}'s appearance at the end of the dialogue. NEVER break the fourth wall. (2 to 3 sentences)
{{ extra_instructions }}
<|CLOSE_SECTION|>
Narrator answers: {{ bot_token }}At the end of the dialogue,
{{ bot_token }}At the end of the dialogue,


@@ -1,3 +1,4 @@
{% block extra_context -%}
<|SECTION:CONTEXT|>
Scenario Premise: {{ scene.description }}
@@ -9,19 +10,24 @@ NPCs: {{ npc_names }}
Player Character: {{ player_character.name }}
Content Context: {{ scene.context }}
<|CLOSE_SECTION|>
{% for scene_context in scene.context_history(budget=max_tokens-300, min_dialogue=30, sections=False, dialogue_negative_offset=10) -%}
{% endblock -%}
{% for scene_context in scene.context_history(budget=max_tokens-300, min_dialogue=20, sections=False) -%}
{{ scene_context }}
{% endfor %}
<|SECTION:TASK|>
Continue the current dialogue by narrating the progression of the scene
Narration style: point and click adventure game from the 90s
Continue the current dialogue by narrating the progression of the scene.
If the scene is over, narrate the beginning of the next scene.
Be creative and generate something new and interesting, but stay true to the setting and context of the story so far.
Use an informal and colloquial register with a conversational tone. Overall, the narrative is informal, conversational, natural, and spontaneous, with a sense of immediacy.
Narration style should be that of a 90s point and click adventure game. You are omniscient and can describe the scene in detail.
Only generate new narration. Avoid including any character's internal thoughts or dialogue.
Write 2 to 4 sentences. {{ extra_instructions }}
<|CLOSE_SECTION|>
{{ bot_token }}
{% for row in scene.history[-10:] -%}
{{ row }}
{% endfor %}
{{
set_prepared_response_random(
npc_names.split(", ") + [


@@ -6,15 +6,22 @@
{% endfor %}
<|SECTION:TASK|>
{% if query.endswith("?") -%}
Question: {{ query }}
Extra context: {{ query_memory(query, as_question_answer=False) }}
Instruction: Analyze Context, History and Dialogue. When evaluating both story and memory, story is more important. You can fill in gaps using imagination as long as it is based on the existing context. Respect the scene progression and answer in the context of the end of the dialogue.
Instruction: Analyze Context, History and Dialogue and then answer the question: "{{ query }}".
When evaluating both story and context, story is more important. You can fill in gaps using imagination as long as it is based on the existing context.
Respect the scene progression and answer in the context of the end of the dialogue.
Use your imagination to fill in gaps in order to answer the question in a confident and decisive manner. Avoid uncertainty and vagueness.
{% else -%}
Instruction: {{ query }}
Extra context: {{ query_memory(query, as_question_answer=False) }}
Answer based on Context, History and Dialogue. When evaluating both story and memory, story is more important. You can fill in gaps using imagination as long as it is based on the existing context.
Answer based on Context, History and Dialogue.
When evaluating both story and context, story is more important. You can fill in gaps using imagination as long as it is based on the existing context.
{% endif -%}
Content Context: This is a specific scene from {{ scene.context }}
Your answer should be in the style of short narration that fits the context of the scene.
Your answer should be in the style of short, concise narration that fits the context of the scene. (1 to 2 sentences)
{{ extra_instructions }}
<|CLOSE_SECTION|>
Narrator answers: {% if at_the_end %}{{ bot_token }}At the end of the dialogue, {% endif %}
{% if at_the_end %}{{ bot_token }}At the end of the dialogue, {% endif %}


@@ -8,5 +8,6 @@ Scenario Premise: {{ scene.description }}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Provide a visual description of what is currently happening in the scene. Don't progress the scene.
{{ extra_instructions }}
<|CLOSE_SECTION|>
{{ bot_token }}At the end of the scene we currently see:


@@ -12,5 +12,6 @@ Content Context: {{ scene.context }}
<|SECTION:TASK|>
Narrate the passage of time that just occurred, subtly move the story forward, and set up the next scene.
Write 1 to 3 sentences.
{{ extra_instructions }}
<|CLOSE_SECTION|>
{{ bot_token }}{{ narrative }}:


@@ -1,6 +1,6 @@
{% set questions = instruct_text("Ask the narrator three (3) questions to gather more context from the past for the continuation of this conversation. If a character is asking about a state, location or information about an item or another character, make sure to include question(s) that help gather context for this.", text) %}
<|SECTION:CONTEXT|>
{% for memory in query_memory(text, as_question_answer=False, max_tokens=max_tokens-500, iterate=20) -%}
{% for memory in query_memory(questions, as_question_answer=False, max_tokens=max_tokens-500, iterate=10) -%}
{{ memory }}
{% endfor -%}
@@ -8,7 +8,7 @@
<|SECTION:TASK|>
Answer the following questions:
{{ instruct_text("Ask the narrator three (3) questions to gather more context from the past for the continuation of this conversation. If a character is asking about a state, location or information about an item or another character, make sure to include question(s) that help gather context for this.", text) }}
{{ questions }}
Your answers should be precise, truthful and short. Pay close attention to timestamps when retrieving information from the context.


@@ -1 +1 @@
Instructions for a talented story analyst. The analyst will analyze parts of a story or dialogue and give truthful answers based on the dialogue or events given to him. The analyst will never make up facts or lie in his answers and will use crude, lewd and strong language if the content requires it.
Instructions for a talented story analyst. The analyst will analyze parts of a story or dialogue and give truthful answers based on the dialogue or events given to him. The analyst will never make up facts or lie in his answers and will use crude, lewd and strong language if the context requires it.


@@ -56,7 +56,7 @@ async def websocket_endpoint(websocket, path):
await instance.sync_client_bootstraps()
except Exception as e:
log.error("send_client_bootstraps", error=e, traceback=traceback.format_exc())
await asyncio.sleep(60)
await asyncio.sleep(15)
send_client_bootstraps_task = asyncio.create_task(send_client_bootstraps())
@@ -110,7 +110,6 @@ async def websocket_endpoint(websocket, path):
elif action_type == "request_scenes_list":
query = data.get("query", "")
handler.request_scenes_list(query)
elif action_type == "configure_clients":
handler.configure_clients(data.get("clients"))
elif action_type == "configure_agents":


@@ -1,5 +1,6 @@
import pydantic
import structlog
from talemate import VERSION
from talemate.config import Config as AppConfigData, load_config, save_config
@@ -8,6 +9,12 @@ log = structlog.get_logger("talemate.server.config")
class ConfigPayload(pydantic.BaseModel):
config: AppConfigData
class DefaultCharacterPayload(pydantic.BaseModel):
name: str
gender: str
description: str
color: str = "#3362bb"
class ConfigPlugin:
router = "config"
@@ -36,8 +43,38 @@ class ConfigPlugin:
save_config(current_config)
self.websocket_handler.config = current_config
self.websocket_handler.queue_put({
"type": "app_config",
"data": load_config(),
"version": VERSION
})
self.websocket_handler.queue_put({
"type": "config",
"action": "save_complete",
})
})
async def handle_save_default_character(self, data):
log.info("Saving default character", data=data["data"])
payload = DefaultCharacterPayload(**data["data"])
current_config = load_config()
current_config["game"]["default_player_character"] = payload.model_dump()
log.info("Saving default character", character=current_config["game"]["default_player_character"])
save_config(current_config)
self.websocket_handler.config = current_config
self.websocket_handler.queue_put({
"type": "app_config",
"data": load_config(),
"version": VERSION
})
self.websocket_handler.queue_put({
"type": "config",
"action": "save_default_character_complete",
})


@@ -0,0 +1,26 @@
import structlog
import talemate.instance as instance
log = structlog.get_logger("talemate.server.tts")
class TTSPlugin:
router = "tts"
def __init__(self, websocket_handler):
self.websocket_handler = websocket_handler
self.tts = None
async def handle(self, data:dict):
action = data.get("action")
if action == "test":
return await self.handle_test(data)
async def handle_test(self, data:dict):
tts_agent = instance.get_agent("tts")
await tts_agent.generate("Welcome to talemate!")


@@ -91,7 +91,7 @@ class WebsocketHandler(Receiver):
for agent_typ, agent_config in self.agents.items():
try:
client = self.llm_clients.get(agent_config.get("client"))["client"]
except TypeError:
except TypeError as e:
client = None
if not client:
@@ -167,11 +167,16 @@ class WebsocketHandler(Receiver):
log.info("Configuring clients", clients=clients)
for client in clients:
client.pop("status", None)
if client["type"] in ["textgenwebui", "lmstudio"]:
try:
max_token_length = int(client.get("max_token_length", 2048))
except ValueError:
continue
client.pop("model", None)
self.llm_clients[client["name"]] = {
"type": client["type"],
@@ -180,6 +185,10 @@ class WebsocketHandler(Receiver):
"max_token_length": max_token_length,
}
elif client["type"] == "openai":
client.pop("model_name", None)
client.pop("apiUrl", None)
self.llm_clients[client["name"]] = {
"type": "openai",
"name": client["name"],
@@ -213,16 +222,25 @@ class WebsocketHandler(Receiver):
def configure_agents(self, agents):
self.agents = {typ: {} for typ in instance.agent_types()}
log.debug("Configuring agents", agents=agents)
log.debug("Configuring agents")
for agent in agents:
name = agent["name"]
# special case for memory agent
if name == "memory":
if name == "memory" or name == "tts":
self.agents[name] = {
"name": name,
}
agent_instance = instance.get_agent(name, **self.agents[name])
if agent_instance.has_toggle:
self.agents[name]["enabled"] = agent["enabled"]
if getattr(agent_instance, "actions", None):
self.agents[name]["actions"] = agent.get("actions", {})
agent_instance.apply_config(**self.agents[name])
log.debug("Configured agent", name=name)
continue
if name not in self.agents:
@@ -419,6 +437,14 @@ class WebsocketHandler(Receiver):
}
)
def handle_audio_queue(self, emission: Emission):
self.queue_put(
{
"type": "audio_queue",
"data": emission.data,
}
)
def handle_request_input(self, emission: Emission):
self.waiting_for_input = True


@@ -46,7 +46,7 @@ log = structlog.get_logger("talemate")
async_signals.register("game_loop_start")
async_signals.register("game_loop")
async_signals.register("game_loop_actor_iter")
async_signals.register("game_loop_new_message")
class Character:
"""
@@ -578,6 +578,7 @@ class Scene(Emitter):
"game_loop": async_signals.get("game_loop"),
"game_loop_start": async_signals.get("game_loop_start"),
"game_loop_actor_iter": async_signals.get("game_loop_actor_iter"),
"game_loop_new_message": async_signals.get("game_loop_new_message"),
}
self.setup_emitter(scene=self)
@@ -704,6 +705,12 @@ class Scene(Emitter):
messages=messages,
)
)
loop = asyncio.get_event_loop()
for message in messages:
loop.run_until_complete(self.signals["game_loop_new_message"].send(
events.GameLoopNewMessageEvent(scene=self, event_type="game_loop_new_message", message=message)
))
def push_archive(self, entry: data_objects.ArchiveEntry):
@@ -1177,7 +1184,7 @@ class Scene(Emitter):
},
)
self.log.debug("scene_status", scene=self.name, scene_time=self.ts, saved=self.saved)
self.log.debug("scene_status", scene=self.name, scene_time=self.ts, human_ts=util.iso8601_duration_to_human(self.ts, suffix=""), saved=self.saved)
def set_environment(self, environment: str):
"""
@@ -1190,6 +1197,7 @@ class Scene(Emitter):
"""
Accepts an iso6801 duration string and advances the scene's world state by that amount
"""
log.debug("advance_time", ts=ts, scene_ts=self.ts, duration=isodate.parse_duration(ts), scene_duration=isodate.parse_duration(self.ts))
self.ts = isodate.duration_isoformat(
isodate.parse_duration(self.ts) + isodate.parse_duration(ts)
@@ -1212,9 +1220,12 @@ class Scene(Emitter):
if self.archived_history[i].get("ts"):
self.ts = self.archived_history[i]["ts"]
break
end = self.archived_history[-1].get("end", 0)
else:
end = 0
for message in self.history:
for message in self.history[end:]:
if isinstance(message, TimePassageMessage):
self.advance_time(message.ts)


@@ -6,10 +6,11 @@ import textwrap
import structlog
import isodate
import datetime
from typing import List
from typing import List, Union
from thefuzz import fuzz
from colorama import Back, Fore, Style, init
from PIL import Image
from nltk.tokenize import sent_tokenize
from talemate.scene_message import SceneMessage
log = structlog.get_logger("talemate.util")
@@ -490,30 +491,39 @@ def clean_attribute(attribute: str) -> str:
def duration_to_timedelta(duration):
"""Convert an isodate.Duration object to a datetime.timedelta object."""
"""Convert an isodate.Duration object or a datetime.timedelta object to a datetime.timedelta object."""
# Check if the duration is already a timedelta object
if isinstance(duration, datetime.timedelta):
return duration
# If it's an isodate.Duration object with separate year, month, day, hour, minute, second attributes
days = int(duration.years) * 365 + int(duration.months) * 30 + int(duration.days)
return datetime.timedelta(days=days)
seconds = duration.tdelta.seconds
return datetime.timedelta(days=days, seconds=seconds)
def timedelta_to_duration(delta):
"""Convert a datetime.timedelta object to an isodate.Duration object."""
# Extract days and convert to years, months, and days
days = delta.days
years = days // 365
days %= 365
months = days // 30
days %= 30
return isodate.duration.Duration(years=years, months=months, days=days)
# Convert remaining seconds to hours, minutes, and seconds
seconds = delta.seconds
hours = seconds // 3600
seconds %= 3600
minutes = seconds // 60
seconds %= 60
return isodate.Duration(years=years, months=months, days=days, hours=hours, minutes=minutes, seconds=seconds)
def parse_duration_to_isodate_duration(duration_str):
"""Parse ISO 8601 duration string and ensure the result is an isodate.Duration."""
parsed_duration = isodate.parse_duration(duration_str)
if isinstance(parsed_duration, datetime.timedelta):
days = parsed_duration.days
years = days // 365
days %= 365
months = days // 30
days %= 30
return isodate.duration.Duration(years=years, months=months, days=days)
return timedelta_to_duration(parsed_duration)
return parsed_duration
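A round-trip through the converters above; months and years use the same 30/365-day approximations in both directions, so this particular value survives intact (import path inferred from the `talemate.util` logger):

```python
import datetime

from talemate.util import duration_to_timedelta, timedelta_to_duration

delta = datetime.timedelta(days=400, seconds=3723)  # 1:02:03 on top of 400 days
dur = timedelta_to_duration(delta)
print(dur.years, dur.months, dur.days)              # 1 1 5  (400 days -> 1y + 1mo + 5d)

back = duration_to_timedelta(dur)
print(back)                                         # 400 days, 1:02:03
```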
def iso8601_diff(duration_str1, duration_str2):
@@ -533,40 +543,50 @@ def iso8601_diff(duration_str1, duration_str2):
return difference
def iso8601_duration_to_human(iso_duration, suffix:str=" ago"):
# Parse the ISO8601 duration string into an isodate duration object
def iso8601_duration_to_human(iso_duration, suffix: str = " ago"):
if isinstance(iso_duration, isodate.Duration):
duration = iso_duration
else:
# Parse the ISO8601 duration string into an isodate duration object
if not isinstance(iso_duration, isodate.Duration):
duration = isodate.parse_duration(iso_duration)
else:
duration = iso_duration
# Extract years, months, days, and the time part as seconds
years, months, days, hours, minutes, seconds = 0, 0, 0, 0, 0, 0
if isinstance(duration, isodate.Duration):
years = duration.years
months = duration.months
days = duration.days
seconds = duration.tdelta.total_seconds() - duration.tdelta.days * 86400  # time-only part; whole days counted above
else:
years, months = 0, 0
hours = duration.tdelta.seconds // 3600
minutes = (duration.tdelta.seconds % 3600) // 60
seconds = duration.tdelta.seconds % 60
elif isinstance(duration, datetime.timedelta):
days = duration.days
seconds = duration.total_seconds() - days * 86400 # Extract time-only part
hours = duration.seconds // 3600
minutes = (duration.seconds % 3600) // 60
seconds = duration.seconds % 60
hours, seconds = divmod(int(seconds), 3600)
minutes, seconds = divmod(seconds, 60)
# Adjust for cases where duration is a timedelta object
# Convert days to weeks and days if applicable
weeks, days = divmod(days, 7)
# Build the human-readable components
components = []
if years:
components.append(f"{years} Year{'s' if years > 1 else ''}")
if months:
components.append(f"{months} Month{'s' if months > 1 else ''}")
if weeks:
components.append(f"{weeks} Week{'s' if weeks > 1 else ''}")
if days:
components.append(f"{days} Day{'s' if days > 1 else ''}")
if hours:
components.append(f"{int(hours)} Hour{'s' if hours > 1 else ''}")
components.append(f"{hours} Hour{'s' if hours > 1 else ''}")
if minutes:
components.append(f"{int(minutes)} Minute{'s' if minutes > 1 else ''}")
components.append(f"{minutes} Minute{'s' if minutes > 1 else ''}")
if seconds:
components.append(f"{int(seconds)} Second{'s' if seconds > 1 else ''}")
components.append(f"{seconds} Second{'s' if seconds > 1 else ''}")
# Construct the human-readable string
if len(components) > 1:
@@ -576,7 +596,7 @@ def iso8601_duration_to_human(iso_duration, suffix:str=" ago"):
human_str = components[0]
else:
human_str = "Moments"
return f"{human_str}{suffix}"
def iso8601_diff_to_human(start, end):
@@ -584,6 +604,7 @@ def iso8601_diff_to_human(start, end):
return ""
diff = iso8601_diff(start, end)
return iso8601_duration_to_human(diff)
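Indicative outputs for the formatter, using only the code paths shown above and assuming the integer-seconds handling:

```python
from talemate.util import iso8601_duration_to_human

print(iso8601_duration_to_human("PT5M"))            # 5 Minutes ago
print(iso8601_duration_to_human("P2W", suffix=""))  # 2 Weeks
print(iso8601_duration_to_human("PT0S"))            # Moments ago
```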
@@ -713,12 +734,91 @@ def extract_json(s):
json_object = json.loads(json_string)
return json_string, json_object
def similarity_score(line: str, lines: list[str], similarity_threshold: int = 95) -> tuple[bool, int, str]:
"""
Checks if a line is similar to any of the lines in the list of lines.
Arguments:
line (str): The line to check.
lines (list): The list of lines to check against.
similarity_threshold (int): The similarity threshold to use when comparing lines.
Returns:
bool: Whether a similar line was found.
int: The similarity score of the line. If no similar line was found, the highest similarity score is returned.
str: The similar line that was found. If no similar line was found, None is returned.
"""
highest_similarity = 0
for existing_line in lines:
similarity = fuzz.ratio(line, existing_line)
highest_similarity = max(highest_similarity, similarity)
#print("SIMILARITY", similarity, existing_line[:32]+"...")
if similarity >= similarity_threshold:
return True, similarity, existing_line
return False, highest_similarity, None
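A hedged usage sketch for similarity_score (not part of the diff; assumes fuzz is the ratio scorer from the thefuzz/rapidfuzz family, as imported elsewhere in this module):

found, score, match = similarity_score(
    "She walks to the door.",
    ["She walked to the door.", "He sits down."],
    similarity_threshold=95,
)
# fuzz.ratio for the closest pair is roughly 93, below the threshold, so:
# found == False, score ~ 93, match is None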
def dedupe_sentences(line_a:str, line_b:str, similarity_threshold:int=95, debug:bool=False, split_on_comma:bool=True) -> str:
"""
Will split both lines into sentences and then compare each sentence in line_a
against the sentences in line_b. If a similar sentence is found, it will be
removed from line_a.
The similarity threshold is used to determine if two sentences are similar.
Arguments:
line_a (str): The first line.
line_b (str): The second line.
similarity_threshold (int): The similarity threshold to use when comparing sentences.
debug (bool): Whether to log debug messages.
split_on_comma (bool): Whether to split line_b sentences on commas as well.
Returns:
str: the cleaned line_a.
"""
line_a_sentences = sent_tokenize(line_a)
line_b_sentences = sent_tokenize(line_b)
cleaned_line_a_sentences = []
if split_on_comma:
# collect all sentences from line_b that contain a comma
line_b_sentences_with_comma = []
for line_b_sentence in line_b_sentences:
if "," in line_b_sentence:
line_b_sentences_with_comma.append(line_b_sentence)
# then split all sentences in line_b_sentences_with_comma on the comma
# and extend line_b_sentences with the split sentences, making sure
# to strip whitespace from the beginning and end of each sentence
for line_b_sentence in line_b_sentences_with_comma:
line_b_sentences.extend([s.strip() for s in line_b_sentence.split(",")])
for line_a_sentence in line_a_sentences:
similar_found = False
for line_b_sentence in line_b_sentences:
similarity = fuzz.ratio(line_a_sentence, line_b_sentence)
if similarity >= similarity_threshold:
if debug:
log.debug("DEDUPE SENTENCE", similarity=similarity, line_a_sentence=line_a_sentence, line_b_sentence=line_b_sentence)
similar_found = True
break
if not similar_found:
cleaned_line_a_sentences.append(line_a_sentence)
return " ".join(cleaned_line_a_sentences)
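A usage sketch (not part of the diff; assumes NLTK's sent_tokenize with the punkt data available, as used above). Note how split_on_comma lets a clause of a longer line_b sentence knock out a whole sentence of line_a:

line_a = "The rain stops. A dog barks in the distance."
line_b = "Somewhere, a dog barks in the distance."
print(dedupe_sentences(line_a, line_b))
# -> "The rain stops."
# "A dog barks in the distance." scores ~96 against the comma-split
# fragment "a dog barks in the distance." and is dropped; against the
# full line_b sentence alone it would only score ~78 and survive.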
def dedupe_string(s: str, min_length: int = 32, similarity_threshold: int = 95, debug: bool = False) -> str:
"""
Removes duplicate lines from a string.
Parameters:
Arguments:
s (str): The input string.
min_length (int): The minimum length of a line to be checked for duplicates.
similarity_threshold (int): The similarity threshold to use when comparing lines.
@@ -837,6 +937,15 @@ def ensure_dialog_line_format(line:str):
elif segment_open is not None and segment_open != c:
# open segment is not the same as the current character
# opening - close the current segment and open a new one
# if we are at the last character we append the segment
if i == len(line)-1 and segment.strip():
segment += c
segments += [segment.strip()]
segment_open = None
segment = None
continue
segments += [segment.strip()]
segment_open = c
segment = c
@@ -852,14 +961,15 @@ def ensure_dialog_line_format(line:str):
segment += c
if segment is not None:
segments += [segment.strip()]
if segment.strip().strip("*").strip('"'):
segments += [segment.strip()]
for i in range(len(segments)):
segment = segments[i]
if segment in ['"', '*']:
if i > 0:
prev_segment = segments[i-1]
if prev_segment[-1] not in ['"', '*']:
if prev_segment and prev_segment[-1] not in ['"', '*']:
segments[i-1] = f"{prev_segment}{segment}"
segments[i] = ""
continue
@@ -900,4 +1010,27 @@ def ensure_dialog_line_format(line:str):
elif next_segment and next_segment[0] == '*':
segments[i] = f"\"{segment}\""
return " ".join(segment for segment in segments if segment)
for i in range(len(segments)):
segments[i] = clean_uneven_markers(segments[i], '"')
segments[i] = clean_uneven_markers(segments[i], '*')
return " ".join(segment for segment in segments if segment).strip()
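For orientation (a simplified illustration, not the repo's implementation): ensure_dialog_line_format scans the line character by character, opening a segment on a " or * marker and closing it on the matching one, then balances stray markers via clean_uneven_markers below. The core splitting idea can be sketched with a regex:

import re

def split_dialog_segments(line: str) -> list[str]:
    # quoted speech, *narration*, or runs of bare text (hypothetical helper)
    return [s.strip() for s in re.findall(r'"[^"]*"|\*[^*]*\*|[^"*]+', line) if s.strip()]

print(split_dialog_segments('"Hello." *waves* How are you?'))
# -> ['"Hello."', '*waves*', 'How are you?']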
def clean_uneven_markers(chunk:str, marker:str):
# if there is an uneven number of quotes, remove the last one if it's
# at the end of the chunk. If it's in the middle, add a quote to the end
count = chunk.count(marker)
if count % 2 == 1:
if chunk.endswith(marker):
chunk = chunk[:-1]
elif chunk.startswith(marker):
chunk = chunk[1:]
elif count == 1:
chunk = chunk.replace(marker, "")
else:
chunk += marker
return chunk
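Two quick checks of the balancing behavior (not part of the diff):

print(clean_uneven_markers('"Hello there.', '"'))  # -> 'Hello there.' (lone opening quote stripped)
print(clean_uneven_markers('Wait."', '"'))         # -> 'Wait.' (lone closing quote stripped)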


@@ -1,6 +1,7 @@
from pydantic import BaseModel
from talemate.emit import emit
import structlog
import traceback
from typing import Union
import talemate.instance as instance
@@ -59,7 +60,8 @@ class WorldState(BaseModel):
world_state = await self.agent.request_world_state()
except Exception as e:
self.emit()
raise e
log.error("world_state.request_update", error=e, traceback=traceback.format_exc())
return
previous_characters = self.characters
previous_items = self.items
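The behavioral change here: a failed world-state request is now logged with its traceback and the update is skipped, instead of re-raising and tearing down the caller. The pattern in isolation (a sketch, not the project's actual API):

import traceback
import structlog

log = structlog.get_logger()

async def request_update_safely(agent):
    try:
        return await agent.request_world_state()
    except Exception as e:
        # log and bail out instead of propagating the failure
        log.error("world_state.request_update", error=e, traceback=traceback.format_exc())
        return None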


@@ -64,12 +64,13 @@
}
},
"node_modules/@babel/code-frame": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/code-frame/-/code-frame-7.22.5.tgz",
"integrity": "sha512-Xmwn266vad+6DAqEB2A6V/CcZVp62BbwVmcOJc2RPuwih1kw02TjQvWVWlcKGbBPd+8/0V5DEkOcizRGYsspYQ==",
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.23.5.tgz",
"integrity": "sha512-CgH3s1a96LipHCmSUmYFPwY7MNx8C3avkq7i4Wl3cfa662ldtUe4VM1TPXX70pfmrlWTb6jLqTYrZyT2ZTJBgA==",
"dev": true,
"dependencies": {
"@babel/highlight": "^7.22.5"
"@babel/highlight": "^7.23.4",
"chalk": "^2.4.2"
},
"engines": {
"node": ">=6.9.0"
@@ -129,12 +130,12 @@
}
},
"node_modules/@babel/generator": {
"version": "7.22.7",
"resolved": "https://registry.npmmirror.com/@babel/generator/-/generator-7.22.7.tgz",
"integrity": "sha512-p+jPjMG+SI8yvIaxGgeW24u7q9+5+TGpZh8/CuB7RhBKd7RCy8FayNEFNNKrNK/eUcY/4ExQqLmyrvBXKsIcwQ==",
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.23.5.tgz",
"integrity": "sha512-BPssCHrBD+0YrxviOa3QzpqwhNIXKEtOa2jQrm4FlmkC2apYgRnQcmPWiGZDlGxiNtltnUFolMe8497Esry+jA==",
"dev": true,
"dependencies": {
"@babel/types": "^7.22.5",
"@babel/types": "^7.23.5",
"@jridgewell/gen-mapping": "^0.3.2",
"@jridgewell/trace-mapping": "^0.3.17",
"jsesc": "^2.5.1"
@@ -243,22 +244,22 @@
}
},
"node_modules/@babel/helper-environment-visitor": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.5.tgz",
"integrity": "sha512-XGmhECfVA/5sAt+H+xpSg0mfrHq6FzNr9Oxh7PSEBBRUb/mL7Kz3NICXb194rCqAEdxkhPT1a88teizAFyvk8Q==",
"version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.20.tgz",
"integrity": "sha512-zfedSIzFhat/gFhWfHtgWvlec0nqB9YEIVrpuwjruLlXfUSnA8cJB0miHKwqDnQ7d32aKo2xt88/xZptwxbfhA==",
"dev": true,
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/helper-function-name": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-function-name/-/helper-function-name-7.22.5.tgz",
"integrity": "sha512-wtHSq6jMRE3uF2otvfuD3DIvVhOsSNshQl0Qrd7qC9oQJzHvOL4qQXlQn2916+CXGywIjpGuIkoyZRRxHPiNQQ==",
"version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.23.0.tgz",
"integrity": "sha512-OErEqsrxjZTJciZ4Oo+eoZqeW9UIiOcuYKRJA4ZAgV9myA+pOXhhmpfNCKjEH/auVfEYVFJ6y1Tc4r0eIApqiw==",
"dev": true,
"dependencies": {
"@babel/template": "^7.22.5",
"@babel/types": "^7.22.5"
"@babel/template": "^7.22.15",
"@babel/types": "^7.23.0"
},
"engines": {
"node": ">=6.9.0"
@@ -412,18 +413,18 @@
}
},
"node_modules/@babel/helper-string-parser": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-string-parser/-/helper-string-parser-7.22.5.tgz",
"integrity": "sha512-mM4COjgZox8U+JcXQwPijIZLElkgEpO5rsERVDJTc2qfCDfERyob6k5WegS14SX18IIjv+XD+GrqNumY5JRCDw==",
"version": "7.23.4",
"resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.23.4.tgz",
"integrity": "sha512-803gmbQdqwdf4olxrX4AJyFBV/RTr3rSmOj0rKwesmzlfhYNDEs+/iOcznzpNWlJlIlTJC2QfPFcHB6DlzdVLQ==",
"dev": true,
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/helper-validator-identifier": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.5.tgz",
"integrity": "sha512-aJXu+6lErq8ltp+JhkJUfk1MTGyuA4v7f3pA+BJ5HLfNC6nAQ0Cpi9uOquUj8Hehg0aUiHzWQbOVJGao6ztBAQ==",
"version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.20.tgz",
"integrity": "sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==",
"dev": true,
"engines": {
"node": ">=6.9.0"
@@ -468,13 +469,13 @@
}
},
"node_modules/@babel/highlight": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/highlight/-/highlight-7.22.5.tgz",
"integrity": "sha512-BSKlD1hgnedS5XRnGOljZawtag7H1yPfQp0tdNJCHoH6AZ+Pcm9VvkrK59/Yy593Ypg0zMxH2BxD1VPYUQ7UIw==",
"version": "7.23.4",
"resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.23.4.tgz",
"integrity": "sha512-acGdbYSfp2WheJoJm/EBBBLh/ID8KDc64ISZ9DYtBmC8/Q204PZJLHyzeB5qMzJ5trcOkybd78M4x2KWsUq++A==",
"dev": true,
"dependencies": {
"@babel/helper-validator-identifier": "^7.22.5",
"chalk": "^2.0.0",
"@babel/helper-validator-identifier": "^7.22.20",
"chalk": "^2.4.2",
"js-tokens": "^4.0.0"
},
"engines": {
@@ -482,9 +483,9 @@
}
},
"node_modules/@babel/parser": {
"version": "7.22.7",
"resolved": "https://registry.npmmirror.com/@babel/parser/-/parser-7.22.7.tgz",
"integrity": "sha512-7NF8pOkHP5o2vpmGgNGcfAeCvOYhGLyA3Z4eBQkT1RJlWu47n63bCs93QfJ2hIAFCil7L5P2IWhs1oToVgrL0Q==",
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.23.5.tgz",
"integrity": "sha512-hOOqoiNXrmGdFbhgCzu6GiURxUgM27Xwd/aPuu8RfHEZPBzL1Z54okAHAQjXfcQNwvrlkAmAp4SlRTZ45vlthQ==",
"bin": {
"parser": "bin/babel-parser.js"
},
@@ -1773,33 +1774,33 @@
}
},
"node_modules/@babel/template": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/template/-/template-7.22.5.tgz",
"integrity": "sha512-X7yV7eiwAxdj9k94NEylvbVHLiVG1nvzCV2EAowhxLTwODV1jl9UzZ48leOC0sH7OnuHrIkllaBgneUykIcZaw==",
"version": "7.22.15",
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.22.15.tgz",
"integrity": "sha512-QPErUVm4uyJa60rkI73qneDacvdvzxshT3kksGqlGWYdOTIUOwJ7RDUL8sGqslY1uXWSL6xMFKEXDS3ox2uF0w==",
"dev": true,
"dependencies": {
"@babel/code-frame": "^7.22.5",
"@babel/parser": "^7.22.5",
"@babel/types": "^7.22.5"
"@babel/code-frame": "^7.22.13",
"@babel/parser": "^7.22.15",
"@babel/types": "^7.22.15"
},
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/traverse": {
"version": "7.22.8",
"resolved": "https://registry.npmmirror.com/@babel/traverse/-/traverse-7.22.8.tgz",
"integrity": "sha512-y6LPR+wpM2I3qJrsheCTwhIinzkETbplIgPBbwvqPKc+uljeA5gP+3nP8irdYt1mjQaDnlIcG+dw8OjAco4GXw==",
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.23.5.tgz",
"integrity": "sha512-czx7Xy5a6sapWWRx61m1Ke1Ra4vczu1mCTtJam5zRTBOonfdJ+S/B6HYmGYu3fJtr8GGET3si6IhgWVBhJ/m8w==",
"dev": true,
"dependencies": {
"@babel/code-frame": "^7.22.5",
"@babel/generator": "^7.22.7",
"@babel/helper-environment-visitor": "^7.22.5",
"@babel/helper-function-name": "^7.22.5",
"@babel/code-frame": "^7.23.5",
"@babel/generator": "^7.23.5",
"@babel/helper-environment-visitor": "^7.22.20",
"@babel/helper-function-name": "^7.23.0",
"@babel/helper-hoist-variables": "^7.22.5",
"@babel/helper-split-export-declaration": "^7.22.6",
"@babel/parser": "^7.22.7",
"@babel/types": "^7.22.5",
"@babel/parser": "^7.23.5",
"@babel/types": "^7.23.5",
"debug": "^4.1.0",
"globals": "^11.1.0"
},
@@ -1808,13 +1809,13 @@
}
},
"node_modules/@babel/types": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/types/-/types-7.22.5.tgz",
"integrity": "sha512-zo3MIHGOkPOfoRXitsgHLjEXmlDaD/5KU1Uzuc9GNiZPhSqVxVRtxuPaSBZDsYZ9qV88AjtMtWW7ww98loJ9KA==",
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/types/-/types-7.23.5.tgz",
"integrity": "sha512-ON5kSOJwVO6xXVRTvOI0eOnWe7VdUcIpsovGo9U/Br4Ie4UVFQTboO2cYnDhAGU6Fp+UxSiT+pMft0SMHfuq6w==",
"dev": true,
"dependencies": {
"@babel/helper-string-parser": "^7.22.5",
"@babel/helper-validator-identifier": "^7.22.5",
"@babel/helper-string-parser": "^7.23.4",
"@babel/helper-validator-identifier": "^7.22.20",
"to-fast-properties": "^2.0.0"
},
"engines": {
@@ -3041,9 +3042,9 @@
},
"node_modules/@vue/vue-loader-v15": {
"name": "vue-loader",
"version": "15.10.1",
"resolved": "https://registry.npmmirror.com/vue-loader/-/vue-loader-15.10.1.tgz",
"integrity": "sha512-SaPHK1A01VrNthlix6h1hq4uJu7S/z0kdLUb6klubo738NeQoLbS6V9/d8Pv19tU0XdQKju3D1HSKuI8wJ5wMA==",
"version": "15.11.1",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-15.11.1.tgz",
"integrity": "sha512-0iw4VchYLePqJfJu9s62ACWUXeSqM30SQqlIftbYWM3C+jpPcEHKSPUZBLjSF9au4HTHQ/naF6OGnO3Q/qGR3Q==",
"dev": true,
"dependencies": {
"@vue/component-compiler-utils": "^3.1.0",
@@ -3060,6 +3061,9 @@
"cache-loader": {
"optional": true
},
"prettier": {
"optional": true
},
"vue-template-compiler": {
"optional": true
}
@@ -8158,9 +8162,23 @@
}
},
"node_modules/postcss": {
"version": "8.4.25",
"resolved": "https://registry.npmmirror.com/postcss/-/postcss-8.4.25.tgz",
"integrity": "sha512-7taJ/8t2av0Z+sQEvNzCkpDynl0tX3uJMCODi6nT3PfASC7dYCWV9aQ+uiCf+KBD4SEFcu+GvJdGdwzQ6OSjCw==",
"version": "8.4.31",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
"integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
"funding": [
{
"type": "opencollective",
"url": "https://opencollective.com/postcss/"
},
{
"type": "tidelift",
"url": "https://tidelift.com/funding/github/npm/postcss"
},
{
"type": "github",
"url": "https://github.com/sponsors/ai"
}
],
"dependencies": {
"nanoid": "^3.3.6",
"picocolors": "^1.0.0",
@@ -11210,12 +11228,13 @@
}
},
"@babel/code-frame": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/code-frame/-/code-frame-7.22.5.tgz",
"integrity": "sha512-Xmwn266vad+6DAqEB2A6V/CcZVp62BbwVmcOJc2RPuwih1kw02TjQvWVWlcKGbBPd+8/0V5DEkOcizRGYsspYQ==",
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.23.5.tgz",
"integrity": "sha512-CgH3s1a96LipHCmSUmYFPwY7MNx8C3avkq7i4Wl3cfa662ldtUe4VM1TPXX70pfmrlWTb6jLqTYrZyT2ZTJBgA==",
"dev": true,
"requires": {
"@babel/highlight": "^7.22.5"
"@babel/highlight": "^7.23.4",
"chalk": "^2.4.2"
}
},
"@babel/compat-data": {
@@ -11259,12 +11278,12 @@
}
},
"@babel/generator": {
"version": "7.22.7",
"resolved": "https://registry.npmmirror.com/@babel/generator/-/generator-7.22.7.tgz",
"integrity": "sha512-p+jPjMG+SI8yvIaxGgeW24u7q9+5+TGpZh8/CuB7RhBKd7RCy8FayNEFNNKrNK/eUcY/4ExQqLmyrvBXKsIcwQ==",
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.23.5.tgz",
"integrity": "sha512-BPssCHrBD+0YrxviOa3QzpqwhNIXKEtOa2jQrm4FlmkC2apYgRnQcmPWiGZDlGxiNtltnUFolMe8497Esry+jA==",
"dev": true,
"requires": {
"@babel/types": "^7.22.5",
"@babel/types": "^7.23.5",
"@jridgewell/gen-mapping": "^0.3.2",
"@jridgewell/trace-mapping": "^0.3.17",
"jsesc": "^2.5.1"
@@ -11343,19 +11362,19 @@
}
},
"@babel/helper-environment-visitor": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.5.tgz",
"integrity": "sha512-XGmhECfVA/5sAt+H+xpSg0mfrHq6FzNr9Oxh7PSEBBRUb/mL7Kz3NICXb194rCqAEdxkhPT1a88teizAFyvk8Q==",
"version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.20.tgz",
"integrity": "sha512-zfedSIzFhat/gFhWfHtgWvlec0nqB9YEIVrpuwjruLlXfUSnA8cJB0miHKwqDnQ7d32aKo2xt88/xZptwxbfhA==",
"dev": true
},
"@babel/helper-function-name": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-function-name/-/helper-function-name-7.22.5.tgz",
"integrity": "sha512-wtHSq6jMRE3uF2otvfuD3DIvVhOsSNshQl0Qrd7qC9oQJzHvOL4qQXlQn2916+CXGywIjpGuIkoyZRRxHPiNQQ==",
"version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.23.0.tgz",
"integrity": "sha512-OErEqsrxjZTJciZ4Oo+eoZqeW9UIiOcuYKRJA4ZAgV9myA+pOXhhmpfNCKjEH/auVfEYVFJ6y1Tc4r0eIApqiw==",
"dev": true,
"requires": {
"@babel/template": "^7.22.5",
"@babel/types": "^7.22.5"
"@babel/template": "^7.22.15",
"@babel/types": "^7.23.0"
}
},
"@babel/helper-hoist-variables": {
@@ -11470,15 +11489,15 @@
}
},
"@babel/helper-string-parser": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-string-parser/-/helper-string-parser-7.22.5.tgz",
"integrity": "sha512-mM4COjgZox8U+JcXQwPijIZLElkgEpO5rsERVDJTc2qfCDfERyob6k5WegS14SX18IIjv+XD+GrqNumY5JRCDw==",
"version": "7.23.4",
"resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.23.4.tgz",
"integrity": "sha512-803gmbQdqwdf4olxrX4AJyFBV/RTr3rSmOj0rKwesmzlfhYNDEs+/iOcznzpNWlJlIlTJC2QfPFcHB6DlzdVLQ==",
"dev": true
},
"@babel/helper-validator-identifier": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.5.tgz",
"integrity": "sha512-aJXu+6lErq8ltp+JhkJUfk1MTGyuA4v7f3pA+BJ5HLfNC6nAQ0Cpi9uOquUj8Hehg0aUiHzWQbOVJGao6ztBAQ==",
"version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.20.tgz",
"integrity": "sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==",
"dev": true
},
"@babel/helper-validator-option": {
@@ -11511,20 +11530,20 @@
}
},
"@babel/highlight": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/highlight/-/highlight-7.22.5.tgz",
"integrity": "sha512-BSKlD1hgnedS5XRnGOljZawtag7H1yPfQp0tdNJCHoH6AZ+Pcm9VvkrK59/Yy593Ypg0zMxH2BxD1VPYUQ7UIw==",
"version": "7.23.4",
"resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.23.4.tgz",
"integrity": "sha512-acGdbYSfp2WheJoJm/EBBBLh/ID8KDc64ISZ9DYtBmC8/Q204PZJLHyzeB5qMzJ5trcOkybd78M4x2KWsUq++A==",
"dev": true,
"requires": {
"@babel/helper-validator-identifier": "^7.22.5",
"chalk": "^2.0.0",
"@babel/helper-validator-identifier": "^7.22.20",
"chalk": "^2.4.2",
"js-tokens": "^4.0.0"
}
},
"@babel/parser": {
"version": "7.22.7",
"resolved": "https://registry.npmmirror.com/@babel/parser/-/parser-7.22.7.tgz",
"integrity": "sha512-7NF8pOkHP5o2vpmGgNGcfAeCvOYhGLyA3Z4eBQkT1RJlWu47n63bCs93QfJ2hIAFCil7L5P2IWhs1oToVgrL0Q=="
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.23.5.tgz",
"integrity": "sha512-hOOqoiNXrmGdFbhgCzu6GiURxUgM27Xwd/aPuu8RfHEZPBzL1Z54okAHAQjXfcQNwvrlkAmAp4SlRTZ45vlthQ=="
},
"@babel/plugin-bugfix-safari-id-destructuring-collision-in-function-expression": {
"version": "7.22.5",
@@ -12382,42 +12401,42 @@
}
},
"@babel/template": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/template/-/template-7.22.5.tgz",
"integrity": "sha512-X7yV7eiwAxdj9k94NEylvbVHLiVG1nvzCV2EAowhxLTwODV1jl9UzZ48leOC0sH7OnuHrIkllaBgneUykIcZaw==",
"version": "7.22.15",
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.22.15.tgz",
"integrity": "sha512-QPErUVm4uyJa60rkI73qneDacvdvzxshT3kksGqlGWYdOTIUOwJ7RDUL8sGqslY1uXWSL6xMFKEXDS3ox2uF0w==",
"dev": true,
"requires": {
"@babel/code-frame": "^7.22.5",
"@babel/parser": "^7.22.5",
"@babel/types": "^7.22.5"
"@babel/code-frame": "^7.22.13",
"@babel/parser": "^7.22.15",
"@babel/types": "^7.22.15"
}
},
"@babel/traverse": {
"version": "7.22.8",
"resolved": "https://registry.npmmirror.com/@babel/traverse/-/traverse-7.22.8.tgz",
"integrity": "sha512-y6LPR+wpM2I3qJrsheCTwhIinzkETbplIgPBbwvqPKc+uljeA5gP+3nP8irdYt1mjQaDnlIcG+dw8OjAco4GXw==",
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.23.5.tgz",
"integrity": "sha512-czx7Xy5a6sapWWRx61m1Ke1Ra4vczu1mCTtJam5zRTBOonfdJ+S/B6HYmGYu3fJtr8GGET3si6IhgWVBhJ/m8w==",
"dev": true,
"requires": {
"@babel/code-frame": "^7.22.5",
"@babel/generator": "^7.22.7",
"@babel/helper-environment-visitor": "^7.22.5",
"@babel/helper-function-name": "^7.22.5",
"@babel/code-frame": "^7.23.5",
"@babel/generator": "^7.23.5",
"@babel/helper-environment-visitor": "^7.22.20",
"@babel/helper-function-name": "^7.23.0",
"@babel/helper-hoist-variables": "^7.22.5",
"@babel/helper-split-export-declaration": "^7.22.6",
"@babel/parser": "^7.22.7",
"@babel/types": "^7.22.5",
"@babel/parser": "^7.23.5",
"@babel/types": "^7.23.5",
"debug": "^4.1.0",
"globals": "^11.1.0"
}
},
"@babel/types": {
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/types/-/types-7.22.5.tgz",
"integrity": "sha512-zo3MIHGOkPOfoRXitsgHLjEXmlDaD/5KU1Uzuc9GNiZPhSqVxVRtxuPaSBZDsYZ9qV88AjtMtWW7ww98loJ9KA==",
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/types/-/types-7.23.5.tgz",
"integrity": "sha512-ON5kSOJwVO6xXVRTvOI0eOnWe7VdUcIpsovGo9U/Br4Ie4UVFQTboO2cYnDhAGU6Fp+UxSiT+pMft0SMHfuq6w==",
"dev": true,
"requires": {
"@babel/helper-string-parser": "^7.22.5",
"@babel/helper-validator-identifier": "^7.22.5",
"@babel/helper-string-parser": "^7.23.4",
"@babel/helper-validator-identifier": "^7.22.20",
"to-fast-properties": "^2.0.0"
}
},
@@ -13458,9 +13477,9 @@
"integrity": "sha512-7OjdcV8vQ74eiz1TZLzZP4JwqM5fA94K6yntPS5Z25r9HDuGNzaGdgvwKYq6S+MxwF0TFRwe50fIR/MYnakdkQ=="
},
"@vue/vue-loader-v15": {
"version": "npm:vue-loader@15.10.1",
"resolved": "https://registry.npmmirror.com/vue-loader/-/vue-loader-15.10.1.tgz",
"integrity": "sha512-SaPHK1A01VrNthlix6h1hq4uJu7S/z0kdLUb6klubo738NeQoLbS6V9/d8Pv19tU0XdQKju3D1HSKuI8wJ5wMA==",
"version": "npm:vue-loader@15.11.1",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-15.11.1.tgz",
"integrity": "sha512-0iw4VchYLePqJfJu9s62ACWUXeSqM30SQqlIftbYWM3C+jpPcEHKSPUZBLjSF9au4HTHQ/naF6OGnO3Q/qGR3Q==",
"dev": true,
"requires": {
"@vue/component-compiler-utils": "^3.1.0",
@@ -17572,9 +17591,9 @@
}
},
"postcss": {
"version": "8.4.25",
"resolved": "https://registry.npmmirror.com/postcss/-/postcss-8.4.25.tgz",
"integrity": "sha512-7taJ/8t2av0Z+sQEvNzCkpDynl0tX3uJMCODi6nT3PfASC7dYCWV9aQ+uiCf+KBD4SEFcu+GvJdGdwzQ6OSjCw==",
"version": "8.4.31",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
"integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
"requires": {
"nanoid": "^3.3.6",
"picocolors": "^1.0.0",


@@ -7,18 +7,19 @@
size="14"></v-progress-circular>
<v-icon v-else-if="agent.status === 'uninitialized'" color="orange" size="14">mdi-checkbox-blank-circle</v-icon>
<v-icon v-else-if="agent.status === 'disabled'" color="grey-darken-2" size="14">mdi-checkbox-blank-circle</v-icon>
<v-icon v-else-if="agent.status === 'error'" color="red" size="14">mdi-checkbox-blank-circle</v-icon>
<v-icon v-else color="green" size="14">mdi-checkbox-blank-circle</v-icon>
<span class="ml-1" v-if="agent.label"> {{ agent.label }}</span>
<span class="ml-1" v-else> {{ agent.name }}</span>
</v-list-item-title>
<v-list-item-subtitle>
<v-list-item-subtitle class="text-caption">
{{ agent.client }}
</v-list-item-subtitle>
<v-chip class="mr-1" v-if="agent.status === 'disabled'" size="x-small">Disabled</v-chip>
<v-chip v-if="agent.data.experimental" color="warning" size="x-small">experimental</v-chip>
</v-list-item>
</v-list>
<AgentModal :dialog="dialog" :formTitle="formTitle" @save="saveAgent" @update:dialog="updateDialog"></AgentModal>
<AgentModal :dialog="state.dialog" :formTitle="state.formTitle" @save="saveAgent" @update:dialog="updateDialog"></AgentModal>
</div>
</template>
@@ -65,7 +66,10 @@ export default {
for(let i = 0; i < this.state.agents.length; i++) {
let agent = this.state.agents[i];
if(agent.status === 'warning' || agent.status === 'error') {
if(!agent.data.requires_llm_client)
continue
if(agent.status === 'warning' || agent.status === 'error' || agent.status === 'uninitialized') {
console.log("agents: configuration required (1)", agent.status)
return true;
}
@@ -99,7 +103,6 @@ export default {
} else {
this.state.agents[index] = agent;
}
this.state.dialog = false;
this.$emit('agents-updated', this.state.agents);
},
editAgent(index) {


@@ -21,7 +21,7 @@
{{ client.type }}
<v-chip label size="x-small" variant="outlined" class="ml-1">ctx {{ client.max_token_length }}</v-chip>
</v-list-item-subtitle>
<v-list-item-content density="compact">
<div density="compact">
<v-slider
hide-details
v-model="client.max_token_length"
@@ -32,9 +32,15 @@
@click.stop
density="compact"
></v-slider>
</v-list-item-content>
</div>
<v-list-item-subtitle class="text-center">
<v-tooltip text="No LLM prompt template for this model. Using default. Templates can be added in ./templates/llm-prompt" v-if="client.status === 'idle' && client.data && !client.data.has_prompt_template" max-width="200">
<template v-slot:activator="{ props }">
<v-icon x-size="14" class="mr-1" v-bind="props" color="orange">mdi-alert</v-icon>
</template>
</v-tooltip>
<v-tooltip text="Edit client">
<template v-slot:activator="{ props }">
<v-btn size="x-small" class="mr-1" v-bind="props" variant="tonal" density="comfortable" rounded="sm" @click.stop="editClient(index)" icon="mdi-cogs"></v-btn>
@@ -56,7 +62,7 @@
</v-list-item-subtitle>
</v-list-item>
</v-list>
<ClientModal :dialog="dialog" :formTitle="formTitle" @save="saveClient" @update:dialog="updateDialog"></ClientModal>
<ClientModal :dialog="state.dialog" :formTitle="state.formTitle" @save="saveClient" @error="propagateError" @update:dialog="updateDialog"></ClientModal>
<v-alert type="warning" variant="tonal" v-if="state.clients.length === 0">You have no LLM clients configured. Add one.</v-alert>
<v-btn @click="openModal" prepend-icon="mdi-plus-box">Add client</v-btn>
</div>
@@ -81,6 +87,9 @@ export default {
apiUrl: '',
model_name: '',
max_token_length: 2048,
data: {
has_prompt_template: false,
}
}, // Add a new field to store the model name
formTitle: ''
}
@@ -90,7 +99,6 @@ export default {
'getWebsocket',
'registerMessageHandler',
'isConnected',
'chekcingStatus',
'getAgents',
],
provide() {
@@ -123,10 +131,16 @@ export default {
apiUrl: 'http://localhost:5000',
model_name: '',
max_token_length: 4096,
data: {
has_prompt_template: false,
}
};
this.state.formTitle = 'Add Client';
this.state.dialog = true;
},
propagateError(error) {
this.$emit('error', error);
},
saveClient(client) {
const index = this.state.clients.findIndex(c => c.name === client.name);
if (index === -1) {
@@ -153,10 +167,13 @@ export default {
let agents = this.getAgents();
let client = this.state.clients[index];
this.saveClient(client);
for (let i = 0; i < agents.length; i++) {
agents[i].client = client.name;
this.$emit('client-assigned', agents);
console.log("Assigning client", client.name, "to agent", agents[i].name);
}
this.$emit('client-assigned', agents);
},
updateDialog(newVal) {
this.state.dialog = newVal;
@@ -175,6 +192,7 @@ export default {
client.status = data.status;
client.max_token_length = data.max_token_length;
client.apiUrl = data.apiUrl;
client.data = data.data;
} else {
console.log("Adding new client", data);
this.state.clients.push({
@@ -184,6 +202,7 @@ export default {
status: data.status,
max_token_length: data.max_token_length,
apiUrl: data.apiUrl,
data: data.data,
});
// sort the clients by name
this.state.clients.sort((a, b) => (a.name > b.name) ? 1 : -1);


@@ -10,7 +10,7 @@
</v-col>
<v-col cols="3" class="text-right">
<v-checkbox :label="enabledLabel()" hide-details density="compact" color="green" v-model="agent.enabled"
v-if="agent.data.has_toggle"></v-checkbox>
v-if="agent.data.has_toggle" @update:modelValue="save(false)"></v-checkbox>
</v-col>
</v-row>
@@ -18,7 +18,7 @@
</v-card-title>
<v-card-text class="scrollable-content">
<v-select v-model="agent.client" :items="agent.data.client" label="Client"></v-select>
<v-select v-if="agent.data.requires_llm_client" v-model="agent.client" :items="agent.data.client" label="Client" @update:modelValue="save(false)"></v-select>
<v-alert type="warning" variant="tonal" density="compact" v-if="agent.data.experimental">
This agent is currently experimental and may significantly decrease performance and / or require
@@ -27,27 +27,25 @@
<v-card v-for="(action, key) in agent.actions" :key="key" density="compact">
<v-card-subtitle>
<v-checkbox :label="agent.data.actions[key].label" hide-details density="compact" color="green" v-model="action.enabled"></v-checkbox>
<v-checkbox v-if="!actionAlwaysEnabled(key)" :label="agent.data.actions[key].label" hide-details density="compact" color="green" v-model="action.enabled" @update:modelValue="save(false)"></v-checkbox>
</v-card-subtitle>
<v-card-text>
{{ agent.data.actions[key].description }}
<div v-if="!actionAlwaysEnabled(key)">
{{ agent.data.actions[key].description }}
</div>
<div v-for="(action_config, config_key) in agent.data.actions[key].config" :key="config_key">
<div v-if="action.enabled">
<!-- render config widgets based on action_config.type (int, str, bool, float) -->
<v-text-field v-if="action_config.type === 'text'" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" density="compact"></v-text-field>
<v-slider v-if="action_config.type === 'number' && action_config.step !== null" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" :min="action_config.min" :max="action_config.max" :step="action_config.step" density="compact" thumb-label></v-slider>
<v-checkbox v-if="action_config.type === 'bool'" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" density="compact"></v-checkbox>
<v-text-field v-if="action_config.type === 'text' && action_config.choices === null" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" density="compact" @update:modelValue="save(true)"></v-text-field>
<v-autocomplete v-else-if="action_config.type === 'text' && action_config.choices !== null" v-model="action.config[config_key].value" :items="action_config.choices" :label="action_config.label" :hint="action_config.description" density="compact" item-title="label" item-value="value" @update:modelValue="save(false)"></v-autocomplete>
<v-slider v-if="action_config.type === 'number' && action_config.step !== null" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" :min="action_config.min" :max="action_config.max" :step="action_config.step" density="compact" thumb-label @update:modelValue="save(true)"></v-slider>
<v-checkbox v-if="action_config.type === 'bool'" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" density="compact" @update:modelValue="save(false)"></v-checkbox>
</div>
</div>
</v-card-text>
</v-card>
</v-card-text>
<v-card-actions>
<v-spacer></v-spacer>
<v-btn color="primary" @click="close">Close</v-btn>
<v-btn color="primary" @click="save">Save</v-btn>
</v-card-actions>
</v-card>
</v-dialog>
</template>
@@ -58,9 +56,10 @@ export default {
dialog: Boolean,
formTitle: String
},
inject: ['state'],
inject: ['state', 'getWebsocket'],
data() {
return {
saveTimeout: null,
localDialog: this.state.dialog,
agent: { ...this.state.currentAgent }
};
@@ -90,12 +89,32 @@ export default {
return 'Disabled';
}
},
actionAlwaysEnabled(action) {
if (action.charAt(0) === '_') {
return true;
} else {
return false;
}
},
close() {
this.$emit('update:dialog', false);
},
save() {
this.$emit('save', this.agent);
this.close();
save(delayed = false) {
console.log("save", delayed);
if(!delayed) {
this.$emit('save', this.agent);
return;
}
if(this.saveTimeout !== null)
clearTimeout(this.saveTimeout);
this.saveTimeout = setTimeout(() => {
this.$emit('save', this.agent);
}, 500);
//this.$emit('save', this.agent);
}
}
}


@@ -1,73 +1,209 @@
<template>
<v-dialog v-model="dialog" scrollable max-width="50%">
<v-dialog v-model="dialog" scrollable max-width="960px">
<v-card v-if="app_config !== null">
<v-card-title><v-icon class="mr-1">mdi-cog</v-icon>Settings</v-card-title>
<v-tabs color="primary" v-model="tab">
<v-tab value="game">
<v-icon start>mdi-gamepad-square</v-icon>
Game
</v-tab>
<v-tab value="application">
<v-icon start>mdi-application</v-icon>
Application
</v-tab>
<v-tab value="creator">
<v-icon start>mdi-palette-outline</v-icon>
Creator
</v-tab>
</v-tabs>
<v-window v-model="tab">
<!-- GAME -->
<v-window-item value="game">
<v-card flat>
<v-card-title>
Default player character
<v-tooltip location="top" max-width="500" text="This will be default player character that will be added to a game if the game does not come with a defined player character. Essentially this is relevant for when you load character-cards that aren't in the talemate scene format.">
<template v-slot:activator="{ props }">
<v-icon size="x-small" v-bind="props" v-on="on">mdi-help</v-icon>
</template>
</v-tooltip>
</v-card-title>
<v-card-text>
<v-row>
<v-col cols="6">
<v-text-field v-model="app_config.game.default_player_character.name" label="Name"></v-text-field>
<v-col cols="4">
<v-list>
<v-list-item @click="gamePageSelected=item.value" :prepend-icon="item.icon" v-for="(item, index) in navigation.game" :key="index">
<v-list-item-title>{{ item.title }}</v-list-item-title>
</v-list-item>
</v-list>
</v-col>
<v-col cols="6">
<v-text-field v-model="app_config.game.default_player_character.gender" label="Gender"></v-text-field>
</v-col>
</v-row>
<v-row>
<v-col>
<v-textarea v-model="app_config.game.default_player_character.description" auto-grow label="Description"></v-textarea>
</v-col>
<v-col>
<v-color-picker v-model="app_config.game.default_player_character.color" hide-inputs label="Color" elevation="0"></v-color-picker>
<v-col cols="8">
<div v-if="gamePageSelected === 'character'">
<v-alert color="white" variant="text" icon="mdi-human-edit" density="compact">
<v-alert-title>Default player character</v-alert-title>
<div class="text-grey">
This will be the default player character that is added to a game if the game does not come with a defined player character. Essentially this is relevant when you load character cards that aren't in the talemate scene format.
</div>
</v-alert>
<v-divider class="mb-2"></v-divider>
<v-row>
<v-col cols="6">
<v-text-field v-model="app_config.game.default_player_character.name"
label="Name"></v-text-field>
</v-col>
<v-col cols="6">
<v-text-field v-model="app_config.game.default_player_character.gender"
label="Gender"></v-text-field>
</v-col>
</v-row>
<v-row>
<v-col cols="12">
<v-textarea v-model="app_config.game.default_player_character.description"
auto-grow label="Description"></v-textarea>
</v-col>
</v-row>
</div>
</v-col>
</v-row>
</v-card-text>
</v-card>
</v-window-item>
<!-- APPLICATION -->
<v-window-item value="application">
<v-card flat>
<v-card-text>
<v-row>
<v-col cols="4">
<v-list>
<v-list-subheader>Third Party APIs</v-list-subheader>
<v-list-item @click="applicationPageSelected=item.value" :prepend-icon="item.icon" v-for="(item, index) in navigation.application" :key="index">
<v-list-item-title>{{ item.title }}</v-list-item-title>
</v-list-item>
</v-list>
</v-col>
<v-col cols="8">
<!-- OPENAI API -->
<div v-if="applicationPageSelected === 'openai_api'">
<v-alert color="white" variant="text" icon="mdi-api" density="compact">
<v-alert-title>OpenAI</v-alert-title>
<div class="text-grey">
Configure your OpenAI API key here. You can get one from <a href="https://platform.openai.com/" target="_blank">https://platform.openai.com/</a>
</div>
</v-alert>
<v-divider class="mb-2"></v-divider>
<v-row>
<v-col cols="12">
<v-text-field type="password" v-model="app_config.openai.api_key"
label="OpenAI API Key"></v-text-field>
</v-col>
</v-row>
</div>
<!-- ELEVENLABS API -->
<div v-if="applicationPageSelected === 'elevenlabs_api'">
<v-alert color="white" variant="text" icon="mdi-api" density="compact">
<v-alert-title>ElevenLabs</v-alert-title>
<div class="text-grey">
<p class="mb-1">Generate realistic speech with the most advanced AI voice model ever.</p>
Configure your ElevenLabs API key here. You can get one from <a href="https://elevenlabs.io/?from=partnerewing2048" target="_blank">https://elevenlabs.io</a> <span class="text-caption">(affiliate link)</span>
</div>
</v-alert>
<v-divider class="mb-2"></v-divider>
<v-row>
<v-col cols="12">
<v-text-field type="password" v-model="app_config.elevenlabs.api_key"
label="ElevenLabs API Key"></v-text-field>
</v-col>
</v-row>
</div>
<!-- COQUI API -->
<div v-if="applicationPageSelected === 'coqui_api'">
<v-alert color="white" variant="text" icon="mdi-api" density="compact">
<v-alert-title>Coqui Studio</v-alert-title>
<div class="text-grey">
<p class="mb-1">Realistic, emotive text-to-speech through generative AI.</p>
Configure your Coqui API key here. You can get one from <a href="https://app.coqui.ai/account" target="_blank">https://app.coqui.ai/account</a>
</div>
</v-alert>
<v-divider class="mb-2"></v-divider>
<v-row>
<v-col cols="12">
<v-text-field type="password" v-model="app_config.coqui.api_key"
label="Coqui API Key"></v-text-field>
</v-col>
</v-row>
</div>
<!-- RUNPOD API -->
<div v-if="applicationPageSelected === 'runpod_api'">
<v-alert color="white" variant="text" icon="mdi-api" density="compact">
<v-alert-title>RunPod</v-alert-title>
<div class="text-grey">
<p class="mb-1">Launch a GPU instance in seconds.</p>
Configure your RunPod API key here. You can get one from <a href="https://runpod.io?ref=gma8kdu0" target="_blank">https://runpod.io/</a> <span class="text-caption">(affiliate link)</span>
</div>
</v-alert>
<v-divider class="mb-2"></v-divider>
<v-row>
<v-col cols="12">
<v-text-field type="password" v-model="app_config.runpod.api_key"
label="RunPod API Key"></v-text-field>
</v-col>
</v-row>
</div>
</v-col>
</v-row>
</v-card-text>
</v-card>
</v-window-item>
<!-- CREATOR -->
<v-window-item value="creator">
<v-card flat>
<v-card-title>
Content context
<v-tooltip location="top" max-width="500" text="Available choices when generating characters or scenarios within talemate.">
<template v-slot:activator="{ props }">
<v-icon size="x-small" v-bind="props" v-on="on">mdi-help</v-icon>
</template>
</v-tooltip>
</v-card-title>
<v-card-text style="max-height:600px; overflow-y:scroll;">
<v-list density="compact">
<v-list-item v-for="(value, index) in app_config.creator.content_context" :key="index">
<v-list-item-title><v-icon color="red">mdi-delete</v-icon>{{ value }}</v-list-item-title>
</v-list-item>
</v-list>
<v-text-field v-model="content_context_input" label="Add content context" @keyup.enter="app_config.creator.content_context.push(content_context_input); app_config.creator.content_context_input = ''"></v-text-field>
<v-card-text>
<v-row>
<v-col cols="4">
<v-list>
<v-list-item @click="creatorPageSelected=item.value" :prepend-icon="item.icon" v-for="(item, index) in navigation.creator" :key="index">
<v-list-item-title>{{ item.title }}</v-list-item-title>
</v-list-item>
</v-list>
</v-col>
<v-col cols="8">
<div v-if="creatorPageSelected === 'content_context'">
<!-- Content for Content context will go here -->
<v-alert color="white" variant="text" icon="mdi-cube-scan" density="compact">
<v-alert-title>Content context</v-alert-title>
<div class="text-grey">
Available content-context choices when generating characters or scenarios. This can strongly influence the content that is generated.
</div>
</v-alert>
<v-divider class="mb-2"></v-divider>
<v-row>
<v-col cols="12">
<v-list density="compact">
<v-list-item v-for="(value, index) in app_config.creator.content_context" :key="index">
<v-list-item-title><v-icon color="red" class="mr-2" @click="contentContextRemove(index)">mdi-delete</v-icon>{{ value }}</v-list-item-title>
</v-list-item>
</v-list>
<v-divider></v-divider>
<v-text-field v-model="content_context_input" label="Add content context (Press enter to add)"
@keyup.enter="app_config.creator.content_context.push(content_context_input); content_context_input = ''"></v-text-field>
</v-col>
</v-row>
</div>
</v-col>
</v-row>
</v-card-text>
</v-card>
</v-window-item>
</v-window>
<v-card-actions>
<v-btn color="primary" text @click="saveConfig">Save</v-btn>
<v-spacer></v-spacer>
<v-btn color="primary" text @click="saveConfig" prepend-icon="mdi-check-circle-outline">Save</v-btn>
</v-card-actions>
</v-card>
<v-card v-else>
@@ -78,7 +214,7 @@
<v-progress-circular indeterminate color="primary" size="20"></v-progress-circular>
</v-card-text>
</v-card>
</v-dialog>
</v-dialog>
</template>
<script>
@@ -90,6 +226,23 @@ export default {
dialog: false,
app_config: null,
content_context_input: '',
navigation: {
game: [
{title: 'Default Character', icon: 'mdi-human-edit', value: 'character'},
],
application: [
{title: 'OpenAI', icon: 'mdi-api', value: 'openai_api'},
{title: 'ElevenLabs', icon: 'mdi-api', value: 'elevenlabs_api'},
{title: 'Coqui Studio', icon: 'mdi-api', value: 'coqui_api'},
{title: 'RunPod', icon: 'mdi-api', value: 'runpod_api'},
],
creator: [
{title: 'Content Context', icon: 'mdi-cube-scan', value: 'content_context'},
]
},
gamePageSelected: 'character',
applicationPageSelected: 'openai_api',
creatorPageSelected: 'content_context',
}
},
inject: ['getWebsocket', 'registerMessageHandler', 'setWaitingForInput', 'requestSceneAssets', 'requestAppConfig'],
@@ -104,6 +257,10 @@ export default {
this.dialog = false
},
contentContextRemove(index) {
this.app_config.creator.content_context.splice(index, 1);
},
handleMessage(message) {
if (message.type == "app_config") {
this.app_config = message.data;
@@ -111,7 +268,7 @@ export default {
}
if (message.type == 'config') {
if(message.action == 'save_complete') {
if (message.action == 'save_complete') {
this.exit();
}
}
@@ -138,5 +295,4 @@ export default {
</script>
<style scoped>
</style>
<style scoped></style>


@@ -0,0 +1,96 @@
<template>
<div class="audio-queue">
<span>{{ queue.length }} sound(s) queued</span>
<v-icon :color="isPlaying ? 'green' : ''" v-if="!isMuted" @click="toggleMute">mdi-volume-high</v-icon>
<v-icon :color="isPlaying ? 'red' : ''" v-else @click="toggleMute">mdi-volume-off</v-icon>
<v-icon class="ml-1" @click="stopAndClear">mdi-stop-circle-outline</v-icon>
</div>
</template>
<script>
export default {
name: 'AudioQueue',
data() {
return {
queue: [],
audioContext: null,
isPlaying: false,
isMuted: false,
currentSource: null
};
},
inject: ['getWebsocket', 'registerMessageHandler'],
created() {
this.audioContext = new (window.AudioContext || window.webkitAudioContext)();
this.registerMessageHandler(this.handleMessage);
},
methods: {
handleMessage(data) {
if (data.type === 'audio_queue') {
console.log('Received audio queue message', data)
this.addToQueue(data.data.audio_data);
}
},
addToQueue(base64Sound) {
const soundBuffer = this.base64ToArrayBuffer(base64Sound);
this.queue.push(soundBuffer);
this.playNextSound();
},
base64ToArrayBuffer(base64) {
const binaryString = window.atob(base64);
const len = binaryString.length;
const bytes = new Uint8Array(len);
for (let i = 0; i < len; i++) {
bytes[i] = binaryString.charCodeAt(i);
}
return bytes.buffer;
},
playNextSound() {
if (this.isPlaying || this.queue.length === 0) {
return;
}
this.isPlaying = true;
const soundBuffer = this.queue.shift();
this.audioContext.decodeAudioData(soundBuffer, (buffer) => {
const source = this.audioContext.createBufferSource();
source.buffer = buffer;
this.currentSource = source;
if (!this.isMuted) {
source.connect(this.audioContext.destination);
}
source.onended = () => {
this.isPlaying = false;
this.playNextSound();
};
source.start(0);
}, (error) => {
console.error('Error with decoding audio data', error);
});
},
toggleMute() {
this.isMuted = !this.isMuted;
if (this.isMuted && this.currentSource) {
this.currentSource.disconnect(this.audioContext.destination);
} else if (this.currentSource) {
this.currentSource.connect(this.audioContext.destination);
}
},
stopAndClear() {
if (this.currentSource) {
this.currentSource.stop();
this.currentSource.disconnect();
this.currentSource = null;
}
this.queue = [];
this.isPlaying = false;
}
}
};
</script>
<style scoped>
.audio-queue {
display: flex;
align-items: center;
}
</style>
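For reference (an assumption inferred from handleMessage above, not shown in this diff): the component expects raw audio bytes base64-encoded inside a JSON websocket message, which is what lets them travel as text. A sketch of the producing side in Python:

import base64
import json

audio_bytes = open("speech.wav", "rb").read()  # hypothetical audio file
message = json.dumps({
    "type": "audio_queue",
    "data": {"audio_data": base64.b64encode(audio_bytes).decode("ascii")},
})
# AudioQueue base64-decodes audio_data (window.atob) and queues it for playback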


@@ -44,10 +44,10 @@
<v-window-item value="details">
<v-card-text style="max-height:600px; overflow-y:scroll;">
<v-list-item v-for="(value, key) in base_attributes" :key="key">
<v-list-item-content>
<div>
<v-list-item-title>{{ key }}</v-list-item-title>
<v-list-item-subtitle>{{ value }}</v-list-item-subtitle>
</v-list-item-content>
</div>
</v-list-item>
</v-card-text>
</v-window-item>


@@ -2,36 +2,48 @@
<v-dialog v-model="localDialog" persistent max-width="600px">
<v-card>
<v-card-title>
<span class="headline">{{ formTitle }}</span>
<v-icon>mdi-network-outline</v-icon>
<span class="headline">{{ title() }}</span>
</v-card-title>
<v-card-text>
<v-container>
<v-row>
<v-col cols="6">
<v-select v-model="client.type" :items="['openai', 'textgenwebui', 'lmstudio']" label="Client Type"></v-select>
</v-col>
<v-col cols="6">
<v-text-field v-model="client.name" label="Client Name"></v-text-field>
</v-col>
<v-row>
<v-col cols="6">
<v-select v-model="client.type" :disabled="!typeEditable()" :items="['openai', 'textgenwebui', 'lmstudio']" label="Client Type" @update:model-value="resetToDefaults"></v-select>
</v-col>
<v-col cols="6">
<v-text-field v-model="client.name" label="Client Name"></v-text-field>
</v-col>
</v-row>
<v-row>
<v-col cols="12">
<v-text-field v-model="client.apiUrl" v-if="isLocalApiClient(client)" label="API URL"></v-text-field>
<v-select v-model="client.model" v-if="client.type === 'openai'" :items="['gpt-4-1106-preview', 'gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k']" label="Model"></v-select>
</v-col>
</v-row>
<v-row>
<v-col cols="6">
<v-text-field v-model="client.max_token_length" v-if="isLocalApiClient(client)" type="number" label="Context Length"></v-text-field>
</v-col>
</v-row>
</v-row>
<v-row>
<v-col cols="12">
<v-text-field v-model="client.apiUrl" v-if="isLocalApiClient(client)" label="API URL"></v-text-field>
<v-select v-model="client.model" v-if="client.type === 'openai'" :items="['gpt-4-1106-preview', 'gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k']" label="Model"></v-select>
</v-col>
</v-row>
<v-row>
<v-col cols="4">
<v-text-field v-model="client.max_token_length" v-if="isLocalApiClient(client)" type="number" label="Context Length"></v-text-field>
</v-col>
<v-col cols="8" v-if="!typeEditable() && client.data && client.data.prompt_template_example !== null">
<v-card elevation="3" :color="(client.data.has_prompt_template ? 'primary' : 'warning')" variant="tonal">
<v-card-title>Prompt Template</v-card-title>
<v-card-text>
<div class="text-caption" v-if="!client.data.has_prompt_template">No matching LLM prompt template found. Using default.</div>
<pre>{{ client.data.prompt_template_example }}</pre>
</v-card-text>
</v-card>
</v-col>
</v-row>
</v-container>
</v-card-text>
<v-card-actions>
<v-spacer></v-spacer>
<v-btn color="blue darken-1" text @click="close">Close</v-btn>
<v-btn color="blue darken-1" text @click="save">Save</v-btn>
<v-btn color="primary" text @click="close" prepend-icon="mdi-cancel">Cancel</v-btn>
<v-btn color="primary" text @click="save" prepend-icon="mdi-check-circle-outline">Save</v-btn>
</v-card-actions>
</v-card>
</v-dialog>
@@ -47,7 +59,25 @@ export default {
data() {
return {
localDialog: this.state.dialog,
client: { ...this.state.currentClient } // Define client data property
client: { ...this.state.currentClient },
defaultValuesByCLientType: {
// when client type is changed in the modal, these values will be used
// to populate the form
'textgenwebui': {
apiUrl: 'http://localhost:5000',
max_token_length: 4096,
name_prefix: 'TextGenWebUI',
},
'openai': {
model: 'gpt-4-1106-preview',
name_prefix: 'OpenAI',
},
'lmstudio': {
apiUrl: 'http://localhost:1234',
max_token_length: 4096,
name_prefix: 'LMStudio',
}
}
};
},
watch: {
@@ -68,10 +98,48 @@ export default {
}
},
methods: {
resetToDefaults() {
const defaults = this.defaultValuesByCLientType[this.client.type];
if (defaults) {
this.client.model = defaults.model || '';
this.client.apiUrl = defaults.apiUrl || '';
this.client.max_token_length = defaults.max_token_length || 4096;
// loop and build name from prefix, checking against current clients
let name = defaults.name_prefix;
let i = 2;
while (this.state.clients.find(c => c.name === name)) {
name = `${defaults.name_prefix} ${i}`;
i++;
}
this.client.name = name;
this.client.data = {};
}
},
validateName() {
// if we are editing a client, we should exclude the current client from the check
if(!this.typeEditable()) {
return this.state.clients.findIndex(c => c.name === this.client.name && c.name !== this.state.currentClient.name) === -1;
}
return this.state.clients.findIndex(c => c.name === this.client.name) === -1;
},
typeEditable() {
return this.state.formTitle === 'Add Client';
},
title() {
return this.typeEditable() ? 'Add Client' : 'Edit Client';
},
close() {
this.$emit('update:dialog', false);
},
save() {
if(!this.validateName()) {
this.$emit('error', 'Client name already exists');
return;
}
this.$emit('save', this.client); // Emit save event with client object
this.close();
},


@@ -0,0 +1,92 @@
<template>
<v-dialog v-model="showModal" max-width="800px">
<v-card>
<v-card-title class="headline">Your Character</v-card-title>
<v-card-text>
<v-alert type="info" variant="tonal" v-if="defaultCharacter.name === ''" density="compact">You have not yet
configured a default player character. This character will be used when a scenario is loaded that does not come
with a pre-defined player character.</v-alert>
<v-container>
<v-row>
<v-col cols="12" sm="6">
<v-text-field v-model="defaultCharacter.name" label="Name" :rules="[rules.required]"></v-text-field>
</v-col>
<v-col cols="12" sm="6">
<v-text-field v-model="defaultCharacter.gender" label="Gender" :rules="[rules.required]"></v-text-field>
</v-col>
</v-row>
<v-row>
<v-col cols="12">
<v-textarea v-model="defaultCharacter.description" label="Description" auto-grow></v-textarea>
</v-col>
</v-row>
</v-container>
</v-card-text>
<v-card-actions>
<v-spacer></v-spacer>
<v-btn color="primary" v-if="!saving" text @click="cancel" prepend-icon="mdi-cancel">Cancel</v-btn>
<v-progress-circular v-else indeterminate color="primary" size="20"></v-progress-circular>
<v-btn color="primary" text :disabled="saving" @click="saveDefaultCharacter" prepend-icon="mdi-check-circle-outline">Continue</v-btn>
</v-card-actions>
</v-card>
</v-dialog>
</template>
<script>
export default {
name: 'DefaultCharacter',
inject: ['getWebsocket', 'registerMessageHandler'],
data() {
return {
showModal: false,
saving: false,
defaultCharacter: {
name: '',
gender: '',
description: '',
color: '#3362bb'
},
rules: {
required: value => !!value || 'Required.'
}
};
},
methods: {
saveDefaultCharacter() {
// Send the new default character data to the server
this.saving = true;
this.getWebsocket().send(JSON.stringify({
type: 'config',
action: 'save_default_character',
data: this.defaultCharacter
}));
},
cancel() {
this.$emit("cancel");
this.closeModal();
},
open() {
this.saving = false;
this.showModal = true;
},
closeModal() {
this.showModal = false;
},
handleMessage(message) {
if (message.type == 'config') {
if (message.action == 'save_default_character_complete') {
this.closeModal();
this.$emit("save");
}
}
},
},
created() {
this.registerMessageHandler(this.handleMessage);
},
};
</script>
<style scoped>
/* Add any specific styles for your DefaultCharacter modal here */
</style>


@@ -1,12 +1,15 @@
<template>
<v-list-subheader @click="toggle()" class="text-uppercase"><v-icon>mdi-script-text-outline</v-icon> Load
<v-list-subheader v-if="appConfig !== null" @click="toggle()" class="text-uppercase"><v-icon>mdi-script-text-outline</v-icon> Load
<v-progress-circular v-if="loading" indeterminate color="primary" size="20"></v-progress-circular>
<v-icon v-if="expanded" icon="mdi-chevron-down"></v-icon>
<v-icon v-else icon="mdi-chevron-up"></v-icon>
</v-list-subheader>
<v-list-item-group v-if="!loading && isConnected() && expanded && !configurationRequired()">
<v-list-subheader class="text-uppercase" v-else>
<v-progress-circular indeterminate color="primary" size="20"></v-progress-circular> Waiting for config...
</v-list-subheader>
<div v-if="!loading && isConnected() && expanded && !configurationRequired() && appConfig !== null">
<v-list-item>
<v-list-item-content class="mb-3">
<div class="mb-3">
<!-- Toggle buttons for switching between file upload and path input -->
<v-btn-toggle density="compact" class="mb-3" v-model="inputMethod" mandatory>
<v-btn value="file">
@@ -37,18 +40,24 @@
<v-icon left>mdi-palette-outline</v-icon>
Creative Mode
</v-btn>
</v-list-item-content>
</div>
</v-list-item>
</v-list-item-group>
</div>
<div v-else-if="configurationRequired()">
<v-alert type="warning" variant="tonal">You need to configure a Talemate client before you can load scenes.</v-alert>
</div>
<DefaultCharacter ref="defaultCharacterModal" @save="loadScene" @cancel="loadCanceled"></DefaultCharacter>
</template>
<script>
import DefaultCharacter from './DefaultCharacter.vue';
export default {
name: 'LoadScene',
components: {
DefaultCharacter,
},
data() {
return {
loading: false,
@@ -60,10 +69,19 @@ export default {
sceneSearchLoading: false,
sceneSaved: null,
expanded: true,
appConfig: null, // Store the app configuration
}
},
emits: {
loading: null,
},
inject: ['getWebsocket', 'registerMessageHandler', 'isConnected', 'configurationRequired'],
methods: {
// Method to show the DefaultCharacter modal
showDefaultCharacterModal() {
this.$refs.defaultCharacterModal.open();
},
toggle() {
this.expanded = !this.expanded;
},
@@ -80,9 +98,21 @@ export default {
this.getWebsocket().send(JSON.stringify({ type: 'request_scenes_list', query: this.sceneSearchInput }));
},
loadCreative() {
if(this.sceneSaved === false) {
if(!confirm("The current scene is not saved. Are you sure you want to load a new scene?")) {
return;
}
}
this.loading = true;
this.getWebsocket().send(JSON.stringify({ type: 'load_scene', file_path: "environment:creative" }));
},
loadCanceled() {
console.log("Load canceled");
this.loading = false;
this.sceneFile = [];
},
loadScene() {
if(this.sceneSaved === false) {
@@ -91,13 +121,26 @@ export default {
}
}
this.loading = true;
this.sceneSaved = null;
if (this.inputMethod === 'file' && this.sceneFile.length > 0) { // Check if the input method is "file" and there is at least one file
// if file is image check if default character is set
if(this.sceneFile[0].type.startsWith("image/")) {
if(!this.appConfig.game.default_player_character.name) {
this.showDefaultCharacterModal();
return;
}
}
this.loading = true;
// Convert the uploaded file to base64
const reader = new FileReader();
reader.readAsDataURL(this.sceneFile[0]); // Access the first file in the array
reader.onload = () => {
//const base64File = reader.result.split(',')[1];
this.$emit("loading", true)
this.getWebsocket().send(JSON.stringify({
type: 'load_scene',
scene_data: reader.result,
@@ -106,11 +149,28 @@ export default {
this.sceneFile = [];
};
} else if (this.inputMethod === 'path' && this.sceneInput) { // Check if the input method is "path" and the scene input is not empty
// if path ends with .png/jpg/webp check if default character is set
if(this.sceneInput.endsWith(".png") || this.sceneInput.endsWith(".jpg") || this.sceneInput.endsWith(".webp")) {
if(!this.appConfig.game.default_player_character.name) {
this.showDefaultCharacterModal();
return;
}
}
this.loading = true;
this.$emit("loading", true)
this.getWebsocket().send(JSON.stringify({ type: 'load_scene', file_path: this.sceneInput }));
this.sceneInput = '';
}
},
handleMessage(data) {
// Handle app configuration
if (data.type === 'app_config') {
this.appConfig = data.data;
console.log("App config", this.appConfig);
}
// Scene loaded
if (data.type === "system") {
@@ -133,10 +193,11 @@ export default {
return;
}
}
},
},
created() {
this.registerMessageHandler(this.handleMessage);
//this.getWebsocket().send(JSON.stringify({ type: 'request_config' })); // Request the current app configuration
},
mounted() {
console.log("Websocket", this.getWebsocket()); // Check if websocket is available


@@ -18,7 +18,7 @@
<v-tooltip v-if="isEnvironment('scene')" :disabled="isInputDisabled()" location="top"
text="Redo most recent AI message">
<template v-slot:activator="{ props }">
<v-btn class="hotkey" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!rerun')" color="primary" icon>
<v-icon>mdi-refresh</v-icon>
</v-btn>
@@ -28,7 +28,7 @@
<v-tooltip v-if="isEnvironment('scene')" :disabled="isInputDisabled()" location="top"
text="Redo most recent AI message (Nuke Option - use this to attempt to break out of repetition)">
<template v-slot:activator="{ props }">
<v-btn class="hotkey" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!rerun:0.5')" color="primary" icon>
<v-icon>mdi-nuke</v-icon>
</v-btn>
@@ -39,7 +39,7 @@
<v-tooltip v-if="commandActive" location="top"
text="Abort / end action.">
<template v-slot:activator="{ props }">
<v-btn class="hotkey mr-3" v-bind="props" v-on="on" :disabled="!isWaitingForInput()"
<v-btn class="hotkey mr-3" v-bind="props" :disabled="!isWaitingForInput()"
@click="sendHotButtonMessage('!abort')" color="primary" icon>
<v-icon>mdi-cancel</v-icon>
@@ -56,7 +56,7 @@
<v-card-actions>
<v-tooltip :disabled="isInputDisabled()" location="top" text="Narrate: Progress Story">
<template v-slot:activator="{ props }">
<v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!narrate_progress')" color="primary" icon>
<v-icon>mdi-script-text-play</v-icon>
</v-btn>
@@ -64,7 +64,7 @@
</v-tooltip>
<v-tooltip :disabled="isInputDisabled()" location="top" text="Narrate: Scene">
<template v-slot:activator="{ props }">
<v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!narrate')" color="primary" icon>
<v-icon>mdi-script-text</v-icon>
</v-btn>
@@ -72,7 +72,7 @@
</v-tooltip>
<v-tooltip :disabled="isInputDisabled()" location="top" text="Narrate: Character">
<template v-slot:activator="{ props }">
<v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!narrate_c')" color="primary" icon>
<v-icon>mdi-account-voice</v-icon>
</v-btn>
@@ -80,7 +80,7 @@
</v-tooltip>
<v-tooltip :disabled="isInputDisabled()" location="top" text="Narrate: Query">
<template v-slot:activator="{ props }">
<v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!narrate_q')" color="primary" icon>
<v-icon>mdi-crystal-ball</v-icon>
</v-btn>
@@ -103,7 +103,7 @@
<v-divider vertical></v-divider>
<v-tooltip :disabled="isInputDisabled()" location="top" text="Direct a character">
<template v-slot:activator="{ props }">
<v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!director')" color="primary" icon>
<v-icon>mdi-bullhorn</v-icon>
</v-btn>
@@ -118,7 +118,7 @@
<v-tooltip :disabled="isInputDisabled()" location="top" text="Save">
<template v-slot:activator="{ props }">
<v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!save')" color="primary" icon>
<v-icon>mdi-content-save</v-icon>
</v-btn>
@@ -127,7 +127,7 @@
<v-tooltip :disabled="isInputDisabled()" location="top" text="Save As">
<template v-slot:activator="{ props }">
<v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!save_as')" color="primary" icon>
<v-icon>mdi-content-save-all</v-icon>
</v-btn>
@@ -136,7 +136,7 @@
<v-tooltip v-if="isEnvironment('scene')" :disabled="isInputDisabled()" location="top" text="Switch to creative mode">
<template v-slot:activator="{ props }">
<v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!setenv_creative')" color="primary" icon>
<v-icon>mdi-palette-outline</v-icon>
</v-btn>
@@ -145,7 +145,7 @@
<v-tooltip v-else-if="isEnvironment('creative')" :disabled="isInputDisabled()" location="top" text="Switch to game mode">
<template v-slot:activator="{ props }">
<v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
<v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
@click="sendHotButtonMessage('!setenv_scene')" color="primary" icon>
<v-icon>mdi-gamepad-square</v-icon>
</v-btn>

View File

@@ -11,7 +11,7 @@
Make sure the backend process is running.
</p>
</v-alert>
<LoadScene ref="loadScene" />
<LoadScene ref="loadScene" @loading="sceneStartedLoading" />
<v-divider></v-divider>
<div :style="(sceneActive && scene.environment === 'scene' ? 'display:block' : 'display:none')">
<!-- <GameOptions v-if="sceneActive" ref="gameOptions" /> -->
@@ -37,18 +37,14 @@
<v-list>
<v-list-subheader class="text-uppercase"><v-icon>mdi-network-outline</v-icon>
Clients</v-list-subheader>
<v-list-item-group>
<v-list-item>
<AIClient ref="aiClient" @save="saveClients" @clients-updated="saveClients" @client-assigned="saveAgents"></AIClient>
</v-list-item>
</v-list-item-group>
<v-list-item>
<AIClient ref="aiClient" @save="saveClients" @error="uxErrorHandler" @clients-updated="saveClients" @client-assigned="saveAgents"></AIClient>
</v-list-item>
<v-divider></v-divider>
<v-list-subheader class="text-uppercase"><v-icon>mdi-transit-connection-variant</v-icon> Agents</v-list-subheader>
<v-list-item-group>
<v-list-item>
<AIAgent ref="aiAgent" @save="saveAgents" @agents-updated="saveAgents"></AIAgent>
</v-list-item>
</v-list-item-group>
<v-list-item>
<AIAgent ref="aiAgent" @save="saveAgents" @agents-updated="saveAgents"></AIAgent>
</v-list-item>
<!-- More sections can be added here -->
</v-list>
</v-navigation-drawer>
@@ -75,6 +71,8 @@
<span v-if="connecting" class="ml-1"><v-icon class="mr-1">mdi-progress-helper</v-icon>connecting</span>
<span v-else-if="connected" class="ml-1"><v-icon class="mr-1" color="green" size="14">mdi-checkbox-blank-circle</v-icon>connected</span>
<span v-else class="ml-1"><v-icon class="mr-1">mdi-progress-close</v-icon>disconnected</span>
<v-divider class="ml-1 mr-1" vertical></v-divider>
<AudioQueue ref="audioQueue" />
<v-spacer></v-spacer>
<span v-if="version !== null">v{{ version }}</span>
<span v-if="configurationRequired()">
@@ -95,13 +93,12 @@
<v-chip size="x-small" v-else-if="scene.environment === 'scene'" class="ml-1"><v-icon text="Play" size="14"
class="mr-1">mdi-gamepad-square</v-icon>Game Mode</v-chip>
<v-btn v-if="scene.environment === 'scene'" class="ml-1" @click="openSceneHistory()"><v-icon size="14"
class="mr-1">mdi-playlist-star</v-icon>History</v-btn>
<v-chip size="x-small" v-if="scene.scene_time !== undefined">
<v-icon>mdi-clock</v-icon>
{{ scene.scene_time }}
</v-chip>
<v-tooltip :text="scene.scene_time" v-if="scene.scene_time !== undefined">
<template v-slot:activator="{ props }">
<v-btn v-bind="props" v-if="scene.environment === 'scene'" class="ml-1" @click="openSceneHistory()"><v-icon size="14"
class="mr-1">mdi-clock</v-icon>History</v-btn>
</template>
</v-tooltip>
</v-toolbar-title>
<v-toolbar-title v-else>
@@ -161,6 +158,7 @@ import SceneHistory from './SceneHistory.vue';
import CreativeEditor from './CreativeEditor.vue';
import AppConfig from './AppConfig.vue';
import DebugTools from './DebugTools.vue';
import AudioQueue from './AudioQueue.vue';
export default {
components: {
@@ -177,6 +175,7 @@ export default {
CreativeEditor,
AppConfig,
DebugTools,
AudioQueue,
},
name: 'TalemateApp',
data() {
@@ -425,7 +424,7 @@ export default {
let agent = this.$refs.aiAgent.getActive();
if (agent) {
return agent.name;
return agent.label;
}
return null;
},
@@ -444,6 +443,14 @@ export default {
openAppConfig() {
this.$refs.appConfig.show();
},
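// display errors emitted by child components (e.g. the AIClient "error" event bound above) as an error notification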
uxErrorHandler(error) {
this.errorNotification = true;
this.errorMessage = error;
},
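// emitted by LoadScene when a scene starts loading - show the loader and hide the stale scene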
sceneStartedLoading() {
this.loading = true;
this.sceneActive = false;
}
}
}
</script>

View File

@@ -2,10 +2,8 @@
<v-list-subheader class="text-uppercase">
<v-icon class="mr-1">mdi-earth</v-icon>World
<v-progress-circular class="ml-1 mr-3" size="14" v-if="requesting" indeterminate color="primary"></v-progress-circular>
<v-btn v-else size="x-small" class="mr-1" v-bind="props" variant="tonal" density="comfortable" rounded="sm" @click.stop="refresh()" icon="mdi-refresh"></v-btn>
<v-btn v-else :disabled="isInputDisabled()" size="x-small" class="mr-1" variant="tonal" density="comfortable" rounded="sm" @click.stop="refresh()" icon="mdi-refresh"></v-btn>
</v-list-subheader>
<div ref="charactersContainer">
<v-expansion-panels density="compact" v-for="(character,name) in characters" :key="name">
@@ -85,6 +83,7 @@ export default {
items: {},
location: null,
requesting: false,
sceneTime: null,
}
},
@@ -94,6 +93,7 @@ export default {
'setWaitingForInput',
'openCharacterSheet',
'characterSheet',
'isInputDisabled',
],
methods: {
@@ -127,6 +127,8 @@ export default {
this.items = data.data.items;
this.location = data.data.location;
this.requesting = (data.status==="requested")
} else if (data.type == "scene_status") {
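// keep track of the current scene time reported via scene_status updates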
this.sceneTime = data.data.scene_time;
}
},
},

View File

@@ -0,0 +1,2 @@
SYSTEM: {{ system_message }}
USER: {{ set_response(prompt, "\nASSISTANT: ") }}

View File

@@ -0,0 +1,4 @@
{{ system_message }}

### Instruction:
{{ set_response(prompt, "\n\n### Response:\n") }}

View File

@@ -0,0 +1,4 @@
<|im_start|>system
{{ system_message }}<|im_end|>
<|im_start|>user
{{ set_response(prompt, "<|im_end|>\n<|im_start|>assistant\n") }}

View File

@@ -0,0 +1 @@
Human: {{ system_message }} {{ set_response(prompt, "\n\nAssistant:") }}

View File

@@ -0,0 +1,2 @@
SYSTEM: {{ system_message }}
USER: {{ set_response(prompt, "\nASSISTANT: ") }}

View File

@@ -0,0 +1,2 @@
SYSTEM: {{ system_message }}
USER: {{ set_response(prompt, "\nASSISTANT: ") }}

View File

@@ -0,0 +1,4 @@
{{ system_message }}

### Instruction:
{{ set_response(prompt, "\n\n### Response:\n") }}

View File

@@ -0,0 +1 @@
User: {{ system_message }} {{ set_response(prompt, "\nAssistant: ") }}

View File

@@ -0,0 +1,4 @@
<|im_start|>system
{{ system_message }}<|im_end|>
<|im_start|>user
{{ set_response(prompt, "<|im_end|>\n<|im_start|>assistant\n") }}

View File

@@ -0,0 +1 @@
GPT4 Correct System: {{ system_message }}<|end_of_turn|>GPT4 Correct User: {{ set_response(prompt, "<|end_of_turn|>GPT4 Correct Assistant:") }}
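
Note: the new prompt-template files above all share one contract - a system_message, the user prompt, and a set_response helper that marks where the assistant's reply should begin. As a rough sketch (not Talemate's actual template loader), rendering a ChatML-style template with Jinja2 could look like the following; the set_response behavior shown here, simply appending the response-start marker, is an assumption for illustration.

# Illustrative sketch of how a template like the ChatML one above might render.
# Assumption: set_response(prompt, marker) appends the marker that opens the
# assistant turn, so the LLM continues generating as the assistant.
from jinja2 import Environment

TEMPLATE = """<|im_start|>system
{{ system_message }}<|im_end|>
<|im_start|>user
{{ set_response(prompt, "<|im_end|>\\n<|im_start|>assistant\\n") }}"""

def set_response(prompt: str, marker: str) -> str:
    return prompt + marker

rendered = Environment().from_string(TEMPLATE).render(
    system_message="You are the narrator.",
    prompt="Describe the scene.",
    set_response=set_response,
)
print(rendered)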

0
tests/conftest.py Normal file
View File

View File

@@ -0,0 +1,21 @@
import pytest

from talemate.util import ensure_dialog_format


@pytest.mark.parametrize("input, expected", [
    ('Hello how are you?', 'Hello how are you?'),
    ('"Hello how are you?"', '"Hello how are you?"'),
    ('"Hello how are you?" he asks "I am fine"', '"Hello how are you?" *he asks* "I am fine"'),
    ('Hello how are you? *he asks* I am fine', '"Hello how are you?" *he asks* "I am fine"'),
    ('Hello how are you?" *he asks* I am fine', '"Hello how are you?" *he asks* "I am fine"'),
    ('Hello how are you?" *he asks I am fine', '"Hello how are you?" *he asks I am fine*'),
    ('Hello how are you?" *he asks* "I am fine" *', '"Hello how are you?" *he asks* "I am fine"'),
    ('"Hello how are you *he asks* I am fine"', '"Hello how are you" *he asks* "I am fine"'),
    ('This is a string without any markers', 'This is a string without any markers'),
    ('This is a string with an ending quote"', '"This is a string with an ending quote"'),
    ('This is a string with an ending asterisk*', '*This is a string with an ending asterisk*'),
    ('"Mixed markers*', '*Mixed markers*'),
])
def test_dialogue_cleanup(input, expected):
    assert ensure_dialog_format(input) == expected
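
The parametrized cases above double as a behavior reference for ensure_dialog_format: spoken text ends up wrapped in double quotes, narration in asterisks, and unbalanced markers are repaired where possible. A minimal standalone call (assuming talemate is importable) looks like this:

# Usage sketch; the input/expected pair is taken from the test table above.
from talemate.util import ensure_dialog_format

raw = 'Hello how are you? *he asks* I am fine'
fixed = ensure_dialog_format(raw)
assert fixed == '"Hello how are you?" *he asks* "I am fine"'
print(fixed)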

38
tests/test_isodate.py Normal file
View File

@@ -0,0 +1,38 @@
from talemate.util import (
    iso8601_add,
    iso8601_correct_duration,
    iso8601_diff,
    iso8601_diff_to_human,
    iso8601_duration_to_human,
    parse_duration_to_isodate_duration,
    timedelta_to_duration,
    duration_to_timedelta,
    isodate
)


def test_isodate_utils():
    date1 = "P11MT15M"
    date2 = "PT1S"

    duration1 = parse_duration_to_isodate_duration(date1)

    assert duration1.months == 11
    assert duration1.tdelta.seconds == 900

    duration2 = parse_duration_to_isodate_duration(date2)
    assert duration2.seconds == 1

    timedelta1 = duration_to_timedelta(duration1)
    assert timedelta1.seconds == 900
    assert timedelta1.days == 11*30, timedelta1.days

    timedelta2 = duration_to_timedelta(duration2)
    assert timedelta2.seconds == 1

    parsed = parse_duration_to_isodate_duration("P11MT14M59S")
    assert iso8601_diff(date1, date2) == parsed, parsed

    assert iso8601_duration_to_human(date1) == "11 Months and 15 Minutes ago", iso8601_duration_to_human(date1)
    assert iso8601_duration_to_human(date2) == "1 Second ago", iso8601_duration_to_human(date2)
    assert iso8601_duration_to_human(iso8601_diff(date1, date2)) == "11 Months, 14 Minutes and 59 Seconds ago", iso8601_duration_to_human(iso8601_diff(date1, date2))
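
A condensed usage sketch of the duration helpers, using only values and calls exercised by the assertions above (presumably the same helpers behind the scene-time tooltip added to the UI):

# Usage sketch; values mirror the assertions in the test above.
from talemate.util import iso8601_diff, iso8601_duration_to_human

date1 = "P11MT15M"  # 11 months, 15 minutes
date2 = "PT1S"      # 1 second

delta = iso8601_diff(date1, date2)
print(iso8601_duration_to_human(delta))
# -> "11 Months, 14 Minutes and 59 Seconds ago"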