mirror of
https://github.com/vegu-ai/talemate.git
synced 2025-12-24 23:49:28 +01:00
Compare commits
27 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 077ef965ed | |
| | 9866244cb1 | |
| | fa7377e7b9 | |
| | ddcd442821 | |
| | 4f23a404aa | |
| | 99d9cddccd | |
| | 556fc0a551 | |
| | f79c40eee3 | |
| | ab432cf664 | |
| | e753728f5f | |
| | fd65d30bdf | |
| | 879d82bc04 | |
| | bcea53f0b2 | |
| | dd4603092e | |
| | 7c6e728eaa | |
| | 64bf133b89 | |
| | e65a3f907f | |
| | 49f2eb06ea | |
| | 6b231b1010 | |
| | 693180d127 | |
| | 9c11737554 | |
| | 6c8425cec8 | |
| | c84cd4ac8f | |
| | 157dd63c48 | |
| | 73328f1a06 | |
| | 919e65319c | |
| | cc1b7c447e | |

README.md: 50 changed lines
@@ -2,32 +2,24 @@

Allows you to play roleplay scenarios with large language models.

It does not run any large language models itself but relies on existing APIs. Currently supports **text-generation-webui** and **openai**.

This means you need to either have an openai api key or know how to setup [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (locally or remotely via gpu renting. `--api` flag needs to be set)

> :warning: **It does not run any large language models itself but relies on existing APIs. Currently supports OpenAI, text-generation-webui and LMStudio.**

This means you need to either have:

- an [OpenAI](https://platform.openai.com/overview) api key
- OR setup local (or remote via runpod) LLM inference via one of these options:
  - [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
  - [LMStudio](https://lmstudio.ai/)




## Current features

- responsive modern ui
- agents
  - conversation: handles character dialogue
  - narration: handles narrative exposition
  - summarization: handles summarization to compress context while maintaining history
  - director: can be used to direct the story / characters
  - editor: improves AI responses (very hit and miss at the moment)
  - world state: generates world snapshot and handles passage of time (objects and characters)
  - creator: character / scenario creator
  - tts: text to speech via elevenlabs, coqui studio, coqui local
- multi-client support (agents can be connected to separate APIs)
- long term memory
  - conversation
  - narration
  - summarization
  - director
  - creative
- multi-client (agents can be connected to separate APIs)
- long term memory (experimental)
  - chromadb integration
- passage of time
- narrative world state
@@ -44,7 +36,6 @@ Kinda making it up as i go along, but i want to lean more into gameplay through

In no particular order:

- Extension support
  - modular agents and clients
- Improved world state

@@ -58,19 +49,19 @@ In no particular order:

- objectives
  - quests
  - win / lose conditions
- Automatic1111 client for in place visual generation
- Automatic1111 client
# Quickstart

## Installation

Post [here](https://github.com/vegu-ai/talemate/issues/17) if you run into problems during installation.
Post [here](https://github.com/final-wombat/talemate/issues/17) if you run into problems during installation.

### Windows

1. Download and install Python 3.10 or higher from the [official Python website](https://www.python.org/downloads/windows/).
1. Download and install Node.js from the [official Node.js website](https://nodejs.org/en/download/). This will also install npm.
1. Download the Talemate project to your local machine. Download from [the Releases page](https://github.com/vegu-ai/talemate/releases).
1. Download the Talemate project to your local machine. Download from [the Releases page](https://github.com/final-wombat/talemate/releases).
1. Unpack the download and run `install.bat` by double clicking it. This will set up the project on your local machine.
1. Once the installation is complete, you can start the backend and frontend servers by running `start.bat`.
1. Navigate your browser to http://localhost:8080

@@ -79,7 +70,7 @@ Post [here](https://github.com/vegu-ai/talemate/issues/17) if you run into probl

`python 3.10` or higher is required.

1. `git clone git@github.com:vegu-ai/talemate`
1. `git clone git@github.com:final-wombat/talemate`
1. `cd talemate`
1. `source install.sh`
1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050`.
@@ -126,11 +117,9 @@ On the right hand side click the "Add Client" button. If there is no button, you

### Text-generation-webui

> :warning: As of version 0.13.0 the legacy text-generation-webui API `--extension api` is no longer supported, please use their new `--extension openai` api implementation instead.

In the modal if you're planning to connect to text-generation-webui, you can likely leave everything as is and just click Save.




### OpenAI
@@ -166,10 +155,7 @@ Make sure you save the scene after the character is loaded as it can then be loa

## Further documentation

Please read the documents in the `docs` folder for more advanced configuration and usage.

- [Prompt template overrides](docs/templates.md)
- [Text-to-Speech (TTS)](docs/tts.md)
- Creative mode (docs WIP)
- Prompt template overrides
- [ChromaDB (long term memory)](docs/chromadb.md)
- Runpod Integration
- Creative mode
```
@@ -7,34 +7,20 @@ creator:
    - a thrilling action story aimed at an adult audience.
    - a mysterious adventure aimed at an adult audience.
    - an epic sci-fi adventure aimed at an adult audience.
game: {}

## Long-term memory
game:
  default_player_character:
    color: '#6495ed'
    description: a young man with a penchant for adventure.
    gender: male
    name: Elmer

#chromadb:
#  embeddings: instructor
#  instructor_device: cuda
#  instructor_model: hkunlp/instructor-xl

## Remote LLMs

#openai:
#  api_key: <API_KEY>

#runpod:
#  api_key: <API_KEY>

## TTS (Text-to-Speech)

#elevenlabs:
#  api_key: <API_KEY>

#coqui:
#  api_key: <API_KEY>

#tts:
#  device: cuda
#  model: tts_models/multilingual/multi-dataset/xtts_v2
#  voices:
#    - label: <name>
#      value: <path to .wav for voice sample>
#  api_key: <API_KEY>
```
Binary file not shown.
Before Width: | Height: | Size: 551 KiB

Binary file not shown.
Before Width: | Height: | Size: 14 KiB
@@ -1,82 +0,0 @@
# Template Overrides in Talemate

## Introduction to Templates

In Talemate, templates are used to generate dynamic content for various agents involved in roleplaying scenarios. These templates leverage the Jinja2 templating engine, allowing for the inclusion of variables, conditional logic, and custom functions to create rich and interactive narratives.

## Template Structure

A typical template in Talemate consists of several sections, each enclosed within special section tags (`<|SECTION:NAME|>` and `<|CLOSE_SECTION|>`). These sections can include character details, dialogue examples, scenario overviews, tasks, and additional context. Templates utilize loops and blocks to iterate over data and render content conditionally based on the task requirements.

## Overriding Templates

Users can customize the behavior of Talemate by overriding the default templates. To override a template, create a new template file with the same name in the `./templates/prompts/{agent}/` directory. When a custom template is present, Jinja2 will prioritize it over the default template located in the `./src/talemate/prompts/templates/{agent}/` directory.
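The lookup order described above can be sketched as a small resolver. This is an illustrative, hypothetical helper (the function name and signature are not from the Talemate codebase); it only mirrors the two directories the docs name, checking the user override location before falling back to the package default:

```python
import os

# Hypothetical sketch of the override lookup described in the docs:
# the user's ./templates/prompts/{agent}/ directory is consulted before
# the default ./src/talemate/prompts/templates/{agent}/ directory.
# The real loader is wired through Jinja2; this only illustrates the order.

def resolve_template(agent: str, name: str,
                     override_root: str = "./templates/prompts",
                     default_root: str = "./src/talemate/prompts/templates") -> str:
    """Return the path that would effectively be loaded for this template."""
    override = os.path.join(override_root, agent, name)
    if os.path.exists(override):
        return override  # user-provided override wins
    return os.path.join(default_root, agent, name)
```

Jinja2 itself supports exactly this pattern via a `ChoiceLoader` over two `FileSystemLoader`s, which is the idiomatic way to get "override directory first" behavior.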

## Creator Agent Templates

The creator agent templates allow for the creation of new characters within the character creator. Following the naming convention `character-attributes-*.jinja2`, `character-details-*.jinja2`, and `character-example-dialogue-*.jinja2`, users can add new templates that will be available for selection in the character creator.

### Requirements for Creator Templates

- All three types (`attributes`, `details`, `example-dialogue`) need to be available for a choice to be valid in the character creator.
- Users can check the human templates for an understanding of how to structure these templates.

### Example Templates

- [Character Attributes Human Template](src/talemate/prompts/templates/creator/character-attributes-human.jinja2)
- [Character Details Human Template](src/talemate/prompts/templates/creator/character-details-human.jinja2)
- [Character Example Dialogue Human Template](src/talemate/prompts/templates/creator/character-example-dialogue-human.jinja2)

These example templates can serve as a guide for users to create their own custom templates for the character creator.

### Extending Existing Templates

Jinja2's template inheritance feature allows users to extend existing templates and add extra information. By using the `{% extends "template-name.jinja2" %}` tag, a new template can inherit everything from an existing template and then add or override specific blocks of content.

#### Example

To add a description of a character's hairstyle to the human character details template, you could create a new template like this:

```jinja2
{% extends "character-details-human.jinja2" %}
{% block questions %}
{% if character_details.q("what does "+character.name+"'s hair look like?") -%}
Briefly describe {{ character.name }}'s hair-style using a narrative writing style that reminds of mid 90s point and click adventure games. (2 - 3 sentences).
{% endif %}
{% endblock %}
```

This example shows how to extend the `character-details-human.jinja2` template and add a block for questions about the character's hair. The `{% block questions %}` tag is used to define a section where additional questions can be inserted or existing ones can be overridden.

## Advanced Template Topics

### Jinja2 Functions in Talemate

Talemate exposes several functions to the Jinja2 template environment, providing utilities for data manipulation, querying, and controlling content flow. Here's a list of available functions:

1. `set_prepared_response(response, prepend)`: Sets the prepared response with an optional prepend string. This function allows the template to specify the beginning of the LLM response when processing the rendered template. For example, `set_prepared_response("Certainly!")` will ensure that the LLM's response starts with "Certainly!".
2. `set_prepared_response_random(responses, prefix)`: Chooses a random response from a list and sets it as the prepared response with an optional prefix.
3. `set_eval_response(empty)`: Prepares the response for evaluation, optionally initializing a counter for an empty string.
4. `set_json_response(initial_object, instruction, cutoff)`: Prepares for a JSON response with an initial object and optional instruction and cutoff.
5. `set_question_eval(question, trigger, counter, weight)`: Sets up a question for evaluation with a trigger, counter, and weight.
6. `disable_dedupe()`: Disables deduplication of the response text.
7. `random(min, max)`: Generates a random integer between the specified minimum and maximum.
8. `query_scene(query, at_the_end, as_narrative)`: Queries the scene with a question and returns the formatted response.
9. `query_text(query, text, as_question_answer)`: Queries a text with a question and returns the formatted response.
10. `query_memory(query, as_question_answer, **kwargs)`: Queries the memory with a question and returns the formatted response.
11. `instruct_text(instruction, text)`: Instructs the text with a command and returns the result.
12. `retrieve_memories(lines, goal)`: Retrieves memories based on the provided lines and an optional goal.
13. `uuidgen()`: Generates a UUID string.
14. `to_int(x)`: Converts the given value to an integer.
15. `config`: Accesses the configuration settings.
16. `len(x)`: Returns the length of the given object.
17. `count_tokens(x)`: Counts the number of tokens in the given text.
18. `print(x)`: Prints the given object (mainly for debugging purposes).

These functions enhance the capabilities of templates, allowing for dynamic and interactive content generation.
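To make the mechanism concrete, here is a minimal, hedged sketch of how helpers like these could be exposed as template globals. A plain dict stands in for a Jinja2 environment's `globals` mapping so the example stays dependency-free; the `expose` decorator and `env_globals` names are illustrative, not Talemate's actual wiring:

```python
import random as _random
import uuid

# Illustrative only: a dict standing in for `env.globals` on a Jinja2
# Environment. Talemate's real code registers its helpers through its
# own Prompt/template machinery.
env_globals = {}

def expose(fn):
    """Register a callable under its own name as a template global."""
    env_globals[fn.__name__] = fn
    return fn

@expose
def uuidgen() -> str:
    # mirrors the documented uuidgen(): returns a UUID string
    return str(uuid.uuid4())

@expose
def to_int(x) -> int:
    # mirrors the documented to_int(x)
    return int(x)

@expose
def random(min, max) -> int:
    # mirrors the documented random(min, max): inclusive integer range
    return _random.randint(min, max)
```

With a real Jinja2 `Environment`, the same registration would be `env.globals.update(env_globals)`, after which templates can call `{{ uuidgen() }}` or `{{ random(1, 6) }}` directly.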

### Error Handling

Errors encountered during template rendering are logged and propagated to the user interface. This ensures that users are informed of any issues that may arise, allowing them to troubleshoot and resolve problems effectively.

By following these guidelines, users can create custom templates that tailor the Talemate experience to their specific storytelling needs.
docs/tts.md: 84 changed lines

@@ -1,84 +0,0 @@

# Talemate Text-to-Speech (TTS) Configuration

Talemate supports Text-to-Speech (TTS) functionality, allowing users to convert text into spoken audio. This document outlines the steps required to configure TTS for Talemate using different providers, including ElevenLabs, Coqui, and a local TTS API.

## Configuring ElevenLabs TTS

To use ElevenLabs TTS with Talemate, follow these steps:

1. Visit [ElevenLabs](https://elevenlabs.com) and create an account if you don't already have one.
2. Click on your profile in the upper right corner of the ElevenLabs website to access your API key.
3. In the `config.yaml` file, under the `elevenlabs` section, set the `api_key` field with your ElevenLabs API key.

Example configuration snippet:

```yaml
elevenlabs:
  api_key: <YOUR_ELEVENLABS_API_KEY>
```

## Configuring Coqui TTS

To use Coqui TTS with Talemate, follow these steps:

1. Visit [Coqui](https://app.coqui.ai) and sign up for an account.
2. Go to the [account page](https://app.coqui.ai/account) and scroll to the bottom to find your API key.
3. In the `config.yaml` file, under the `coqui` section, set the `api_key` field with your Coqui API key.

Example configuration snippet:

```yaml
coqui:
  api_key: <YOUR_COQUI_API_KEY>
```

## Configuring Local TTS API

For running a local TTS API, Talemate requires specific dependencies to be installed.

### Windows Installation

Run `install-local-tts.bat` to install the necessary requirements.

### Linux Installation

Execute the following command:

```bash
pip install TTS
```

### Model and Device Configuration

1. Choose a TTS model from the [Coqui TTS model list](https://github.com/coqui-ai/TTS).
2. Decide whether to use `cuda` or `cpu` for the device setting.
3. The first time you run TTS through the local API, it will download the specified model. Please note that this may take some time, and the download progress will be visible in the Talemate backend output.

Example configuration snippet:

```yaml
tts:
  device: cuda # or 'cpu'
  model: tts_models/multilingual/multi-dataset/xtts_v2
```
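The docs leave the `cuda`/`cpu` choice to the user. One rough, stdlib-only heuristic for picking a default (not part of Talemate; the function name is made up here, and `nvidia-smi` being on `PATH` is only a weak signal that CUDA is actually usable) is to probe for the NVIDIA driver CLI:

```python
import shutil

# Hedged helper, not from the Talemate codebase: suggest a default for
# the `device` field above. Finding `nvidia-smi` on PATH merely hints
# that an NVIDIA driver is installed; it does not guarantee a working
# CUDA-enabled PyTorch install, so treat this as a starting point only.

def default_tts_device() -> str:
    return "cuda" if shutil.which("nvidia-smi") else "cpu"
```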

### Voice Samples Configuration

Configure voice samples by setting the `value` field to the path of a .wav file voice sample. Official samples can be downloaded from [Coqui XTTS-v2 samples](https://huggingface.co/coqui/XTTS-v2/tree/main/samples).

Example configuration snippet:

```yaml
tts:
  voices:
    - label: English Male
      value: path/to/english_male.wav
    - label: English Female
      value: path/to/english_female.wav
```
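A misconfigured sample path only surfaces once TTS is actually invoked. As an illustrative check (not part of Talemate; the function is hypothetical), the `voices` entries above can be validated up front with the standard library, confirming each `value` points at a readable `.wav` file:

```python
import os
import wave

# Hypothetical pre-flight check, not from the Talemate codebase:
# verify that each entry in the `voices` config list points at a
# readable .wav sample so bad paths fail early instead of at runtime.

def check_voices(voices: list[dict]) -> list[str]:
    """Return a list of human-readable problems found in the voices config."""
    problems = []
    for voice in voices:
        label = voice.get("label", "?")
        path = voice.get("value", "")
        if not os.path.isfile(path):
            problems.append(f"{label}: file not found: {path}")
            continue
        try:
            # wave only accepts PCM .wav containers; a parse failure
            # means the sample is not a usable .wav file
            with wave.open(path, "rb") as wav:
                wav.getnframes()
        except wave.Error as exc:
            problems.append(f"{label}: not a valid .wav: {exc}")
    return problems
```

An empty return list means every configured sample exists and parses as a `.wav` container.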

## Saving the Configuration

After configuring the `config.yaml` file, save your changes. Talemate will use the updated settings the next time it starts.

For more detailed information on configuring Talemate, refer to the `config.py` file in the Talemate source code and the `config.example.yaml` file for a bare-bones configuration example.
@@ -1,4 +0,0 @@
```
REM activate the virtual environment
call talemate_env\Scripts\activate

call pip install "TTS>=0.21.1"
```
poetry.lock (generated): 1900 changed lines

File diff suppressed because it is too large
```
@@ -4,7 +4,7 @@ build-backend = "poetry.masonry.api"

[tool.poetry]
name = "talemate"
version = "0.15.0"
version = "0.13.0"
description = "AI-backed roleplay and narrative tools"
authors = ["FinalWombat"]
license = "GNU Affero General Public License v3.0"

@@ -37,7 +36,6 @@ nest_asyncio = "^1.5.7"
isodate = ">=0.6.1"
thefuzz = ">=0.20.0"
tiktoken = ">=0.5.1"
nltk = ">=3.8.1"

# ChromaDB
chromadb = ">=0.4.17,<1"
```
```
@@ -2,4 +2,4 @@ from .agents import Agent
from .client import TextGeneratorWebuiClient
from .tale_mate import *

VERSION = "0.15.0"
VERSION = "0.13.0"
```
```
@@ -8,5 +8,4 @@ from .narrator import NarratorAgent
from .registry import AGENT_CLASSES, get_agent_class, register
from .summarize import SummarizeAgent
from .editor import EditorAgent
from .world_state import WorldStateAgent
from .tts import TTSAgent
from .world_state import WorldStateAgent
```
```
@@ -23,31 +23,16 @@ __all__ = [

log = structlog.get_logger("talemate.agents.base")

class CallableConfigValue:
    def __init__(self, fn):
        self.fn = fn

    def __str__(self):
        return "CallableConfigValue"

    def __repr__(self):
        return "CallableConfigValue"

class AgentActionConfig(pydantic.BaseModel):
    type: str
    label: str
    description: str = ""
    value: Union[int, float, str, bool, None] = None
    value: Union[int, float, str, bool]
    default_value: Union[int, float, str, bool] = None
    max: Union[int, float, None] = None
    min: Union[int, float, None] = None
    step: Union[int, float, None] = None
    scope: str = "global"
    choices: Union[list[dict[str, str]], None] = None

    class Config:
        arbitrary_types_allowed = True


class AgentAction(pydantic.BaseModel):
    enabled: bool = True
@@ -55,6 +40,7 @@ class AgentAction(pydantic.BaseModel):
    description: str = ""
    config: Union[dict[str, AgentActionConfig], None] = None


def set_processing(fn):
    """
    decorator that emits the agent status as processing while the function
@@ -84,7 +70,6 @@ class Agent(ABC):
    agent_type = "agent"
    verbose_name = None
    set_processing = set_processing
    requires_llm_client = True

    @property
    def agent_details(self):
@@ -104,7 +89,7 @@ class Agent(ABC):
        if not getattr(self.client, "enabled", True):
            return False

        if self.client and self.client.current_status in ["error", "warning"]:
        if self.client.current_status in ["error", "warning"]:
            return False

        return self.client is not None
@@ -150,7 +135,6 @@ class Agent(ABC):
            "enabled": agent.enabled if agent else True,
            "has_toggle": agent.has_toggle if agent else False,
            "experimental": agent.experimental if agent else False,
            "requires_llm_client": cls.requires_llm_client,
        }
        actions = getattr(agent, "actions", None)
```
```
@@ -406,7 +406,7 @@ class ConversationAgent(Agent):

        context = await memory.multi_query(history, max_tokens=500, iterate=5)

        self.current_memory_context = "\n\n".join(context)
        self.current_memory_context = "\n".join(context)

        return self.current_memory_context
```
```
@@ -10,7 +10,7 @@ import talemate.emit.async_signals
from talemate.prompts import Prompt
from talemate.scene_message import DirectorMessage, TimePassageMessage

from .base import Agent, set_processing, AgentAction, AgentActionConfig
from .base import Agent, set_processing, AgentAction
from .registry import register

import structlog
@@ -21,7 +21,6 @@ import re
if TYPE_CHECKING:
    from talemate.tale_mate import Actor, Character, Scene
    from talemate.agents.conversation import ConversationAgentEmission
    from talemate.agents.narrator import NarratorAgentEmission

log = structlog.get_logger("talemate.agents.editor")

@@ -41,9 +40,7 @@ class EditorAgent(Agent):
        self.is_enabled = True
        self.actions = {
            "edit_dialogue": AgentAction(enabled=False, label="Edit dialogue", description="Will attempt to improve the quality of dialogue based on the character and scene. Runs automatically after each AI dialogue."),
            "fix_exposition": AgentAction(enabled=True, label="Fix exposition", description="Will attempt to fix exposition and emotes, making sure they are displayed in italics. Runs automatically after each AI dialogue.", config={
                "narrator": AgentActionConfig(type="bool", label="Fix narrator messages", description="Will attempt to fix exposition issues in narrator messages", value=True),
            }),
            "fix_exposition": AgentAction(enabled=True, label="Fix exposition", description="Will attempt to fix exposition and emotes, making sure they are displayed in italics. Runs automatically after each AI dialogue."),
            "add_detail": AgentAction(enabled=False, label="Add detail", description="Will attempt to add extra detail and exposition to the dialogue. Runs automatically after each AI dialogue.")
        }

@@ -62,7 +59,6 @@ class EditorAgent(Agent):
    def connect(self, scene):
        super().connect(scene)
        talemate.emit.async_signals.get("agent.conversation.generated").connect(self.on_conversation_generated)
        talemate.emit.async_signals.get("agent.narrator.generated").connect(self.on_narrator_generated)

    async def on_conversation_generated(self, emission:ConversationAgentEmission):
        """
@@ -97,24 +93,6 @@ class EditorAgent(Agent):

        emission.generation = edited

    async def on_narrator_generated(self, emission:NarratorAgentEmission):
        """
        Called when a narrator message is generated
        """

        if not self.enabled:
            return

        log.info("editing narrator", emission=emission)

        edited = []

        for text in emission.generation:
            edit = await self.fix_exposition_on_narrator(text)
            edited.append(edit)

        emission.generation = edited


    @set_processing
    async def edit_conversation(self, content:str, character:Character):
@@ -149,19 +127,12 @@ class EditorAgent(Agent):
        if not self.actions["fix_exposition"].enabled:
            return content

        if not character.is_player:
            if '"' not in content and '*' not in content:
                content = util.strip_partial_sentences(content)
                character_prefix = f"{character.name}: "
                message = content.split(character_prefix)[1]
                content = f"{character_prefix}*{message.strip('*')}*"
                return content
            elif '"' in content:
                # if both are present we strip the * and add them back later
                # through ensure_dialog_format - right now most LLMs aren't
                # smart enough to do quotes and italics at the same time consistently
                # especially throughout long conversations
                content = content.replace('*', '')
        #response = await Prompt.request("editor.fix-exposition", self.client, "edit_fix_exposition", vars={
        #    "content": content,
        #    "character": character,
        #    "scene": self.scene,
        #    "max_length": self.client.max_token_length
        #})

        content = util.clean_dialogue(content, main_name=character.name)
        content = util.strip_partial_sentences(content)
@@ -169,24 +140,6 @@ class EditorAgent(Agent):

        return content

    @set_processing
    async def fix_exposition_on_narrator(self, content:str):

        if not self.actions["fix_exposition"].enabled:
            return content

        if not self.actions["fix_exposition"].config["narrator"].value:
            return content

        content = util.strip_partial_sentences(content)

        if '"' not in content:
            content = f"*{content.strip('*')}*"
        else:
            content = util.ensure_dialog_format(content)

        return content

    @set_processing
    async def add_detail(self, content:str, character:Character):
        """
```
```
@@ -206,7 +206,6 @@ from .registry import register
@register(condition=lambda: chromadb is not None)
class ChromaDBMemoryAgent(MemoryAgent):

    requires_llm_client = False

    @property
    def ready(self):
@@ -223,7 +222,7 @@ class ChromaDBMemoryAgent(MemoryAgent):
    @property
    def agent_details(self):
        return f"ChromaDB: {self.embeddings}"

    @property
    def embeddings(self):
        """
@@ -410,7 +409,7 @@ class ChromaDBMemoryAgent(MemoryAgent):
            id = uid or f"__narrator__-{self.memory_tracker['__narrator__']}"
            ids = [id]

        #log.debug("chromadb agent add", text=text, meta=meta, id=id)
        log.debug("chromadb agent add", text=text, meta=meta, id=id)

        self.db.upsert(documents=[text], metadatas=metadatas, ids=ids)

@@ -480,10 +479,9 @@ class ChromaDBMemoryAgent(MemoryAgent):
            if distance < 1:

                try:
                    log.debug("chromadb agent get", ts=ts, scene_ts=self.scene.ts)
                    date_prefix = util.iso8601_diff_to_human(ts, self.scene.ts)
                except Exception as e:
                    log.error("chromadb agent", error="failed to get date prefix", details=e, ts=ts, scene_ts=self.scene.ts)
                except Exception:
                    log.error("chromadb agent", error="failed to get date prefix", ts=ts, scene_ts=self.scene.ts)
                    date_prefix = None

                if date_prefix:
```
```
@@ -1,14 +1,13 @@
from __future__ import annotations

from typing import TYPE_CHECKING, Callable, List, Optional, Union
import dataclasses
import structlog
import random
import talemate.util as util
from talemate.emit import emit
import talemate.emit.async_signals
from talemate.prompts import Prompt
from talemate.agents.base import set_processing as _set_processing, Agent, AgentAction, AgentActionConfig, AgentEmission
from talemate.agents.base import set_processing, Agent, AgentAction, AgentActionConfig
from talemate.agents.world_state import TimePassageEmission
from talemate.scene_message import NarratorMessage
from talemate.events import GameLoopActorIterEvent
@@ -21,33 +20,6 @@ if TYPE_CHECKING:

log = structlog.get_logger("talemate.agents.narrator")

@dataclasses.dataclass
class NarratorAgentEmission(AgentEmission):
    generation: list[str] = dataclasses.field(default_factory=list)

talemate.emit.async_signals.register(
    "agent.narrator.generated"
)

def set_processing(fn):

    """
    Custom decorator that emits the agent status as processing while the function
    is running and then emits the result of the function as a NarratorAgentEmission
    """

    @_set_processing
    async def wrapper(self, *args, **kwargs):
        response = await fn(self, *args, **kwargs)
        emission = NarratorAgentEmission(
            agent=self,
            generation=[response],
        )
        await talemate.emit.async_signals.get("agent.narrator.generated").send(emission)
        return emission.generation[0]
    wrapper.__name__ = fn.__name__
    return wrapper

@register()
class NarratorAgent(Agent):

@@ -92,12 +64,6 @@ class NarratorAgent(Agent):
                    max=1.0,
                    step=0.1,
                ),
                "generate_dialogue": AgentActionConfig(
                    type="bool",
                    label="Allow Dialogue in Narration",
                    description="Allow the narrator to generate dialogue in narration",
                    value=False,
                ),
            }
        ),
    }
@@ -407,12 +373,5 @@ class NarratorAgent(Agent):

        response = self.clean_result(response.strip().strip("*"))
        response = f"*{response}*"

        allow_dialogue = self.actions["narrate_dialogue"].config["generate_dialogue"].value

        if not allow_dialogue:
            response = response.split('"')[0].strip()
            response = response.replace("*", "")
            response = f"*{response}*"

        return response
```
@@ -5,13 +5,11 @@ import traceback
|
||||
from typing import TYPE_CHECKING, Callable, List, Optional, Union
|
||||
|
||||
import talemate.data_objects as data_objects
|
||||
import talemate.emit.async_signals
|
||||
import talemate.util as util
|
||||
from talemate.prompts import Prompt
|
||||
from talemate.scene_message import DirectorMessage, TimePassageMessage
|
||||
from talemate.events import GameLoopEvent
|
||||
|
||||
from .base import Agent, set_processing, AgentAction, AgentActionConfig
|
||||
from .base import Agent, set_processing
|
||||
from .registry import register
|
||||
|
||||
import structlog
|
||||
@@ -36,40 +34,14 @@ class SummarizeAgent(Agent):

    def __init__(self, client, **kwargs):
        self.client = client

        self.actions = {
            "archive": AgentAction(
                enabled=True,
                label="Summarize to long-term memory archive",
                description="Automatically summarize scene dialogue when the number of tokens in the history exceeds a threshold. This helps keep the context history from growing too large.",
                config={
                    "threshold": AgentActionConfig(
                        type="number",
                        label="Token Threshold",
                        description="Will summarize when the number of tokens in the history exceeds this threshold",
                        min=512,
                        max=8192,
                        step=256,
                        value=1536,
                    )
                }
            )
        }

    def on_history_add(self, event):
        asyncio.ensure_future(self.build_archive(event.scene))

    def connect(self, scene):
        super().connect(scene)
        talemate.emit.async_signals.get("game_loop").connect(self.on_game_loop)

    async def on_game_loop(self, emission:GameLoopEvent):
        """
        Called when a conversation is generated
        """
        scene.signals["history_add"].connect(self.on_history_add)

        await self.build_archive(self.scene)

    def clean_result(self, result):
        if "#" in result:
            result = result.split("#")[0]

@@ -81,31 +53,21 @@ class SummarizeAgent(Agent):
        return result

    @set_processing
    async def build_archive(self, scene):
    async def build_archive(self, scene, token_threshold:int=1500):
        end = None

        if not self.actions["archive"].enabled:
            return

        if not scene.archived_history:
            start = 0
            recent_entry = None
        else:
            recent_entry = scene.archived_history[-1]
            if "end" not in recent_entry:
                # permanent historical archive entry, not tied to any specific history entry
                # meaning we are still at the beginning of the scene
                start = 0
            else:
                start = recent_entry.get("end", 0)+1
                start = recent_entry.get("end", 0) + 1

        tokens = 0
        dialogue_entries = []
        ts = "PT0S"
        time_passage_termination = False

        token_threshold = self.actions["archive"].config["threshold"].value

        log.debug("build_archive", start=start, recent_entry=recent_entry)

        if recent_entry:

@@ -113,9 +75,6 @@ class SummarizeAgent(Agent):

        for i in range(start, len(scene.history)):
            dialogue = scene.history[i]

            #log.debug("build_archive", idx=i, content=str(dialogue)[:64]+"...")

            if isinstance(dialogue, DirectorMessage):
                if i == start:
                    start += 1

@@ -172,7 +131,7 @@ class SummarizeAgent(Agent):
                break
            adjusted_dialogue.append(line)
        dialogue_entries = adjusted_dialogue
        end = start + len(dialogue_entries)-1
        end = start + len(dialogue_entries)

        if dialogue_entries:
            summarized = await self.summarize(
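The archiving logic above boils down to accumulating token counts over history entries until the configured threshold is crossed, then summarizing that slice. A simplified, hypothetical sketch of that accumulation step (the function and the word-count tokenizer are illustrative stand-ins, not Talemate's actual API):

```python
def count_tokens(text: str) -> int:
    # stand-in tokenizer: roughly one token per whitespace-separated word
    return len(text.split())

def entries_to_archive(history: list[str], threshold: int) -> list[str]:
    # collect entries until their combined token count reaches the threshold
    tokens = 0
    collected = []
    for entry in history:
        tokens += count_tokens(entry)
        collected.append(entry)
        if tokens >= threshold:
            break
    # only archive once the threshold is actually reached
    return collected if tokens >= threshold else []

history = ["one two three", "four five", "six seven eight nine"]
print(entries_to_archive(history, threshold=5))  # -> ['one two three', 'four five']
```

The real implementation additionally stops early on time-passage messages and skips director messages, but the threshold check follows this shape.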
@@ -1,610 +0,0 @@
from __future__ import annotations

from typing import Union
import asyncio
import httpx
import io
import os
import pydantic
import nltk
import tempfile
import base64
import uuid
import functools
from nltk.tokenize import sent_tokenize

import talemate.config as config
import talemate.emit.async_signals
from talemate.emit import emit
from talemate.events import GameLoopNewMessageEvent
from talemate.scene_message import CharacterMessage, NarratorMessage

from .base import Agent, set_processing, AgentAction, AgentActionConfig
from .registry import register

import structlog

import time

try:
    from TTS.api import TTS
except ImportError:
    TTS = None

log = structlog.get_logger("talemate.agents.tts")

if not TTS:
    # TTS installation is massive and requires a lot of dependencies
    # so we don't want to require it unless the user wants to use it
    log.info("TTS (local) requires the TTS package, please install with `pip install TTS` if you want to use the local api")


def parse_chunks(text):
    text = text.replace("...", "__ellipsis__")

    chunks = sent_tokenize(text)
    cleaned_chunks = []

    for chunk in chunks:
        chunk = chunk.replace("*", "")
        if not chunk:
            continue
        cleaned_chunks.append(chunk)

    for i, chunk in enumerate(cleaned_chunks):
        chunk = chunk.replace("__ellipsis__", "...")
        cleaned_chunks[i] = chunk

    return cleaned_chunks


def clean_quotes(chunk:str):
    # if there is an uneven number of quotes, remove the last one if it's
    # at the end of the chunk. If it's in the middle, add a quote to the end
    if chunk.count('"') % 2 == 1:
        if chunk.endswith('"'):
            chunk = chunk[:-1]
        else:
            chunk += '"'

    return chunk


def rejoin_chunks(chunks:list[str], chunk_size:int=250):
    """
    Will combine chunks split by punctuation into a single chunk until
    max chunk size is reached
    """

    joined_chunks = []
    current_chunk = ""

    for chunk in chunks:
        if len(current_chunk) + len(chunk) > chunk_size:
            joined_chunks.append(clean_quotes(current_chunk))
            current_chunk = ""

        current_chunk += chunk

    if current_chunk:
        joined_chunks.append(clean_quotes(current_chunk))
    return joined_chunks
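The quote-balancing and re-joining helpers above are pure functions, so they are easy to check in isolation. A minimal sketch (the two helpers are copied here so the snippet runs standalone):

```python
def clean_quotes(chunk: str) -> str:
    # balance an odd number of double quotes: trim a trailing quote,
    # otherwise close the open quote at the end
    if chunk.count('"') % 2 == 1:
        if chunk.endswith('"'):
            chunk = chunk[:-1]
        else:
            chunk += '"'
    return chunk

def rejoin_chunks(chunks: list[str], chunk_size: int = 250) -> list[str]:
    # combine sentence chunks until chunk_size would be exceeded
    joined_chunks = []
    current_chunk = ""
    for chunk in chunks:
        if len(current_chunk) + len(chunk) > chunk_size:
            joined_chunks.append(clean_quotes(current_chunk))
            current_chunk = ""
        current_chunk += chunk
    if current_chunk:
        joined_chunks.append(clean_quotes(current_chunk))
    return joined_chunks

print(clean_quotes('She said "hi'))                              # -> She said "hi"
print(rejoin_chunks(["One.", "Two.", "Three."], chunk_size=9))   # -> ['One.Two.', 'Three.']
```

Note that chunks are concatenated without a separator, mirroring the original implementation.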
class Voice(pydantic.BaseModel):
    value:str
    label:str

class VoiceLibrary(pydantic.BaseModel):
    api: str
    voices: list[Voice] = pydantic.Field(default_factory=list)
    last_synced: float = None


@register()
class TTSAgent(Agent):

    """
    Text to speech agent
    """

    agent_type = "tts"
    verbose_name = "Voice"
    requires_llm_client = False

    @classmethod
    def config_options(cls, agent=None):
        config_options = super().config_options(agent=agent)

        if agent:
            config_options["actions"]["_config"]["config"]["voice_id"]["choices"] = [
                voice.model_dump() for voice in agent.list_voices_sync()
            ]

        return config_options

    def __init__(self, **kwargs):
        self.is_enabled = False
        nltk.download("punkt", quiet=True)

        self.voices = {
            "elevenlabs": VoiceLibrary(api="elevenlabs"),
            "coqui": VoiceLibrary(api="coqui"),
            "tts": VoiceLibrary(api="tts"),
        }
        self.config = config.load_config()
        self.playback_done_event = asyncio.Event()
        self.actions = {
            "_config": AgentAction(
                enabled=True,
                label="Configure",
                description="TTS agent configuration",
                config={
                    "api": AgentActionConfig(
                        type="text",
                        choices=[
                            # TODO add local TTS support
                            {"value": "tts", "label": "TTS (Local)"},
                            {"value": "elevenlabs", "label": "Eleven Labs"},
                            {"value": "coqui", "label": "Coqui Studio"},
                        ],
                        value="tts",
                        label="API",
                        description="Which TTS API to use",
                        onchange="emit",
                    ),
                    "voice_id": AgentActionConfig(
                        type="text",
                        value="default",
                        label="Narrator Voice",
                        description="Voice ID/Name to use for TTS",
                        choices=[]
                    ),
                    "generate_for_player": AgentActionConfig(
                        type="bool",
                        value=False,
                        label="Generate for player",
                        description="Generate audio for player messages",
                    ),
                    "generate_for_npc": AgentActionConfig(
                        type="bool",
                        value=True,
                        label="Generate for NPCs",
                        description="Generate audio for NPC messages",
                    ),
                    "generate_for_narration": AgentActionConfig(
                        type="bool",
                        value=True,
                        label="Generate for narration",
                        description="Generate audio for narration messages",
                    ),
                    "generate_chunks": AgentActionConfig(
                        type="bool",
                        value=False,
                        label="Split generation",
                        description="Generate audio chunks for each sentence - will be much more responsive but may lose context to inform inflection",
                    )
                }
            ),
        }

        self.actions["_config"].model_dump()

    @property
    def enabled(self):
        return self.is_enabled

    @property
    def has_toggle(self):
        return True

    @property
    def experimental(self):
        return False

    @property
    def not_ready_reason(self) -> str:
        """
        Returns a string explaining why the agent is not ready
        """

        if self.ready:
            return ""

        if self.api == "tts":
            if not TTS:
                return "TTS not installed"

        elif self.requires_token and not self.token:
            return "No API token"

        elif not self.default_voice_id:
            return "No voice selected"

    @property
    def agent_details(self):
        suffix = ""

        if not self.ready:
            suffix = f" - {self.not_ready_reason}"
        else:
            suffix = f" - {self.voice_id_to_label(self.default_voice_id)}"

        api = self.api
        choices = self.actions["_config"].config["api"].choices
        api_label = api
        for choice in choices:
            if choice["value"] == api:
                api_label = choice["label"]
                break

        return f"{api_label}{suffix}"

    @property
    def api(self):
        return self.actions["_config"].config["api"].value

    @property
    def token(self):
        api = self.api
        return self.config.get(api,{}).get("api_key")

    @property
    def default_voice_id(self):
        return self.actions["_config"].config["voice_id"].value

    @property
    def requires_token(self):
        return self.api != "tts"

    @property
    def ready(self):
        if self.api == "tts":
            if not TTS:
                return False
            return True

        return (not self.requires_token or self.token) and self.default_voice_id

    @property
    def status(self):
        if not self.enabled:
            return "disabled"
        if self.ready:
            return "active" if not getattr(self, "processing", False) else "busy"
        if self.requires_token and not self.token:
            return "error"
        if self.api == "tts":
            if not TTS:
                return "error"
        return "uninitialized"

    @property
    def max_generation_length(self):
        if self.api == "elevenlabs":
            return 1024
        elif self.api == "coqui":
            return 250

        return 250

    def apply_config(self, *args, **kwargs):
        try:
            api = kwargs["actions"]["_config"]["config"]["api"]["value"]
        except KeyError:
            api = self.api

        api_changed = api != self.api

        log.debug("apply_config", api=api, api_changed=api != self.api, current_api=self.api)

        super().apply_config(*args, **kwargs)

        if api_changed:
            try:
                self.actions["_config"].config["voice_id"].value = self.voices[api].voices[0].value
            except IndexError:
                self.actions["_config"].config["voice_id"].value = ""

    def connect(self, scene):
        super().connect(scene)
        talemate.emit.async_signals.get("game_loop_new_message").connect(self.on_game_loop_new_message)

    async def on_game_loop_new_message(self, emission:GameLoopNewMessageEvent):
        """
        Called when a conversation is generated
        """

        if not self.enabled or not self.ready:
            return

        if not isinstance(emission.message, (CharacterMessage, NarratorMessage)):
            return

        if isinstance(emission.message, NarratorMessage) and not self.actions["_config"].config["generate_for_narration"].value:
            return

        if isinstance(emission.message, CharacterMessage):
            if emission.message.source == "player" and not self.actions["_config"].config["generate_for_player"].value:
                return
            elif emission.message.source == "ai" and not self.actions["_config"].config["generate_for_npc"].value:
                return

        if isinstance(emission.message, CharacterMessage):
            character_prefix = emission.message.split(":", 1)[0]
        else:
            character_prefix = ""

        log.info("reactive tts", message=emission.message, character_prefix=character_prefix)

        await self.generate(str(emission.message).replace(character_prefix+": ", ""))

    def voice(self, voice_id:str) -> Union[Voice, None]:
        for voice in self.voices[self.api].voices:
            if voice.value == voice_id:
                return voice
        return None

    def voice_id_to_label(self, voice_id:str):
        for voice in self.voices[self.api].voices:
            if voice.value == voice_id:
                return voice.label
        return None

    def list_voices_sync(self):
        loop = asyncio.get_event_loop()
        return loop.run_until_complete(self.list_voices())

    async def list_voices(self):
        if self.requires_token and not self.token:
            return []

        library = self.voices[self.api]

        log.info("Listing voices", api=self.api, last_synced=library.last_synced)

        # TODO: allow re-syncing voices
        if library.last_synced:
            return library.voices

        list_fn = getattr(self, f"_list_voices_{self.api}")
        log.info("Listing voices", api=self.api)
        library.voices = await list_fn()
        library.last_synced = time.time()

        # if the current voice cannot be found, reset it
        if not self.voice(self.default_voice_id):
            self.actions["_config"].config["voice_id"].value = ""

        # set loading to false
        return library.voices

    @set_processing
    async def generate(self, text: str):
        if not self.enabled or not self.ready or not text:
            return

        self.playback_done_event.set()

        generate_fn = getattr(self, f"_generate_{self.api}")

        if self.actions["_config"].config["generate_chunks"].value:
            chunks = parse_chunks(text)
            chunks = rejoin_chunks(chunks)
        else:
            chunks = parse_chunks(text)
            chunks = rejoin_chunks(chunks, chunk_size=self.max_generation_length)

        # Start generating audio chunks in the background
        generation_task = asyncio.create_task(self.generate_chunks(generate_fn, chunks))

        # Wait for both tasks to complete
        await asyncio.gather(generation_task)

    async def generate_chunks(self, generate_fn, chunks):
        for chunk in chunks:
            chunk = chunk.replace("*","").strip()
            log.info("Generating audio", api=self.api, chunk=chunk)
            audio_data = await generate_fn(chunk)
            self.play_audio(audio_data)

    def play_audio(self, audio_data):
        # play audio through the python audio player
        #play(audio_data)

        emit("audio_queue", data={"audio_data": base64.b64encode(audio_data).decode("utf-8")})

        self.playback_done_event.set()  # Signal that playback is finished

    # LOCAL

    async def _generate_tts(self, text: str) -> Union[bytes, None]:
        if not TTS:
            return

        tts_config = self.config.get("tts",{})
        model = tts_config.get("model")
        device = tts_config.get("device", "cpu")

        log.debug("tts local", model=model, device=device)

        if not hasattr(self, "tts_instance"):
            self.tts_instance = TTS(model).to(device)

        tts = self.tts_instance

        loop = asyncio.get_event_loop()

        voice = self.voice(self.default_voice_id)

        with tempfile.TemporaryDirectory() as temp_dir:
            file_path = os.path.join(temp_dir, f"tts-{uuid.uuid4()}.wav")

            await loop.run_in_executor(None, functools.partial(tts.tts_to_file, text=text, speaker_wav=voice.value, language="en", file_path=file_path))
            #tts.tts_to_file(text=text, speaker_wav=voice.value, language="en", file_path=file_path)

            with open(file_path, "rb") as f:
                return f.read()

    async def _list_voices_tts(self) -> dict[str, str]:
        return [Voice(**voice) for voice in self.config.get("tts",{}).get("voices",[])]

    # ELEVENLABS

    async def _generate_elevenlabs(self, text: str, chunk_size: int = 1024) -> Union[bytes, None]:
        api_key = self.token
        if not api_key:
            return

        async with httpx.AsyncClient() as client:
            url = f"https://api.elevenlabs.io/v1/text-to-speech/{self.default_voice_id}"
            headers = {
                "Accept": "audio/mpeg",
                "Content-Type": "application/json",
                "xi-api-key": api_key,
            }
            data = {
                "text": text,
                "model_id": self.config.get("elevenlabs",{}).get("model"),
                "voice_settings": {
                    "stability": 0.5,
                    "similarity_boost": 0.5
                }
            }

            response = await client.post(url, json=data, headers=headers, timeout=300)

            if response.status_code == 200:
                bytes_io = io.BytesIO()
                for chunk in response.iter_bytes(chunk_size=chunk_size):
                    if chunk:
                        bytes_io.write(chunk)

                # Put the audio data in the queue for playback
                return bytes_io.getvalue()
            else:
                log.error(f"Error generating audio: {response.text}")

    async def _list_voices_elevenlabs(self) -> dict[str, str]:
        url_voices = "https://api.elevenlabs.io/v1/voices"

        voices = []

        async with httpx.AsyncClient() as client:
            headers = {
                "Accept": "application/json",
                "xi-api-key": self.token,
            }
            response = await client.get(url_voices, headers=headers, params={"per_page":1000})
            speakers = response.json()["voices"]
            voices.extend([Voice(value=speaker["voice_id"], label=speaker["name"]) for speaker in speakers])

        # sort by name
        voices.sort(key=lambda x: x.label)

        return voices

    # COQUI STUDIO

    async def _generate_coqui(self, text: str) -> Union[bytes, None]:
        api_key = self.token
        if not api_key:
            return

        async with httpx.AsyncClient() as client:
            url = "https://app.coqui.ai/api/v2/samples/xtts/render/"
            headers = {
                "Accept": "application/json",
                "Content-Type": "application/json",
                "Authorization": f"Bearer {api_key}"
            }
            data = {
                "voice_id": self.default_voice_id,
                "text": text,
                "language": "en"  # Assuming English language for simplicity; this could be parameterized
            }

            # Make the POST request to Coqui API
            response = await client.post(url, json=data, headers=headers, timeout=300)
            if response.status_code in [200, 201]:
                # Parse the JSON response to get the audio URL
                response_data = response.json()
                audio_url = response_data.get('audio_url')
                if audio_url:
                    # Make a GET request to download the audio file
                    audio_response = await client.get(audio_url)
                    if audio_response.status_code == 200:
                        # delete the sample from Coqui Studio
                        # await self._cleanup_coqui(response_data.get('id'))
                        return audio_response.content
                    else:
                        log.error(f"Error downloading audio: {audio_response.text}")
                else:
                    log.error("No audio URL in response")
            else:
                log.error(f"Error generating audio: {response.text}")

    async def _cleanup_coqui(self, sample_id: str):
        api_key = self.token
        if not api_key or not sample_id:
            return

        async with httpx.AsyncClient() as client:
            url = f"https://app.coqui.ai/api/v2/samples/xtts/{sample_id}"
            headers = {
                "Authorization": f"Bearer {api_key}"
            }

            # Make the DELETE request to Coqui API
            response = await client.delete(url, headers=headers)

            if response.status_code == 204:
                log.info(f"Successfully deleted sample with ID: {sample_id}")
            else:
                log.error(f"Error deleting sample with ID: {sample_id}: {response.text}")

    async def _list_voices_coqui(self) -> dict[str, str]:
        url_speakers = "https://app.coqui.ai/api/v2/speakers"
        url_custom_voices = "https://app.coqui.ai/api/v2/voices"

        voices = []

        async with httpx.AsyncClient() as client:
            headers = {
                "Authorization": f"Bearer {self.token}"
            }
            response = await client.get(url_speakers, headers=headers, params={"per_page":1000})
            speakers = response.json()["result"]
            voices.extend([Voice(value=speaker["id"], label=speaker["name"]) for speaker in speakers])

            response = await client.get(url_custom_voices, headers=headers, params={"per_page":1000})
            custom_voices = response.json()["result"]
            voices.extend([Voice(value=voice["id"], label=voice["name"]) for voice in custom_voices])

        # sort by name
        voices.sort(key=lambda x: x.label)

        return voices
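The `generate`/`generate_chunks` pair above follows a simple pattern: split the text into chunks, then generate and hand off audio for each chunk sequentially inside a background task. A minimal, hypothetical sketch of that pattern with a stub generator in place of a real TTS API call:

```python
import asyncio

async def generate_chunks(generate_fn, chunks, play_fn):
    # generate audio for each chunk in order, handing each result off as it arrives
    for chunk in chunks:
        audio = await generate_fn(chunk)
        play_fn(audio)

async def main():
    played = []

    async def fake_generate(chunk: str) -> bytes:
        # stand-in for a real TTS API call
        await asyncio.sleep(0)
        return chunk.encode("utf-8")

    task = asyncio.create_task(generate_chunks(fake_generate, ["Hello.", "World."], played.append))
    await asyncio.gather(task)
    return played

print(asyncio.run(main()))  # -> [b'Hello.', b'World.']
```

Running generation as a task keeps the event loop free for other agents while still emitting each chunk as soon as it is ready.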
@@ -47,7 +47,7 @@ class ClientBase:

    def __init__(
        self,
        api_url: str = None,
        api_url: str,
        name = None,
        **kwargs,
    ):

@@ -142,8 +142,6 @@ class ClientBase:
        return system_prompts.EDITOR
        if "world_state" in kind:
            return system_prompts.WORLD_STATE
        if "analyze_freeform" in kind:
            return system_prompts.ANALYST_FREEFORM
        if "analyst" in kind:
            return system_prompts.ANALYST
        if "analyze" in kind:

@@ -187,7 +187,7 @@ class OpenAIClient(ClientBase):
            expected_response = right.strip()
            if expected_response.startswith("{") and supports_json_object:
                parameters["response_format"] = {"type": "json_object"}
        except (IndexError, ValueError):
        except IndexError:
            pass

        human_message = {'role': 'user', 'content': prompt.strip()}
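The OpenAI client hunk above toggles `response_format` to JSON mode when the prepared response opens with `{`. A simplified, hypothetical sketch of that detection (the function name is illustrative, not the actual client API):

```python
def build_parameters(expected_response: str, supports_json_object: bool) -> dict:
    # enable OpenAI's JSON mode only when the caller clearly expects a JSON object
    parameters = {}
    if expected_response.strip().startswith("{") and supports_json_object:
        parameters["response_format"] = {"type": "json_object"}
    return parameters

print(build_parameters('{"name": ', True))   # -> {'response_format': {'type': 'json_object'}}
print(build_parameters("plain text", True))  # -> {}
```

Gating on `supports_json_object` matters because only some OpenAI models accept the `response_format` parameter.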
@@ -27,10 +27,6 @@ class TextGeneratorWebuiClient(ClientBase):
        raise Exception("Could not find model info (wrong api version?)")
        response_data = response.json()
        model_name = response_data.get("model_name")

        if model_name == "None":
            model_name = None

        return model_name

@@ -23,7 +23,6 @@ from .cmd_save_as import CmdSaveAs
from .cmd_save_characters import CmdSaveCharacters
from .cmd_setenv import CmdSetEnvironmentToScene, CmdSetEnvironmentToCreative
from .cmd_time_util import *
from .cmd_tts import *
from .cmd_world_state import CmdWorldState
from .cmd_run_helios_test import CmdHeliosTest
from .manager import Manager

@@ -32,5 +32,4 @@ class CmdRebuildArchive(TalemateCommand):
        if not more:
            break

        self.scene.sync_time()
        await self.scene.commit_to_memory()

@@ -1,33 +0,0 @@
import asyncio
import logging

from talemate.commands.base import TalemateCommand
from talemate.commands.manager import register
from talemate.prompts.base import set_default_sectioning_handler
from talemate.instance import get_agent

__all__ = [
    "CmdTestTTS",
]

@register
class CmdTestTTS(TalemateCommand):
    """
    Command class for the 'test_tts' command
    """

    name = "test_tts"
    description = "Test the TTS agent"
    aliases = []

    async def run(self):
        tts_agent = get_agent("tts")

        try:
            last_message = str(self.scene.history[-1])
        except IndexError:
            last_message = "Welcome to talemate!"

        await tts_agent.generate(last_message)

@@ -20,7 +20,7 @@ class Client(BaseModel):


class AgentActionConfig(BaseModel):
    value: Union[int, float, str, bool, None] = None
    value: Union[int, float, str, bool]

class AgentAction(BaseModel):
    enabled: bool = True

@@ -42,17 +42,17 @@ class Agent(BaseModel):
        return super().model_dump(exclude_none=True)

class GamePlayerCharacter(BaseModel):
    name: str = ""
    color: str = "#3362bb"
    gender: str = ""
    description: Optional[str] = ""
    name: str
    color: str
    gender: str
    description: Optional[str]

    class Config:
        extra = "ignore"


class Game(BaseModel):
    default_player_character: GamePlayerCharacter = GamePlayerCharacter()
    default_player_character: GamePlayerCharacter

    class Config:
        extra = "ignore"

@@ -65,22 +65,6 @@ class OpenAIConfig(BaseModel):

class RunPodConfig(BaseModel):
    api_key: Union[str,None]=None

class ElevenLabsConfig(BaseModel):
    api_key: Union[str,None]=None
    model: str = "eleven_turbo_v2"

class CoquiConfig(BaseModel):
    api_key: Union[str,None]=None

class TTSVoiceSamples(BaseModel):
    label:str
    value:str

class TTSConfig(BaseModel):
    device:str = "cuda"
    model:str = "tts_models/multilingual/multi-dataset/xtts_v2"
    voices: list[TTSVoiceSamples] = pydantic.Field(default_factory=list)

class ChromaDB(BaseModel):
    instructor_device: str="cpu"

@@ -101,12 +85,6 @@ class Config(BaseModel):

    chromadb: ChromaDB = ChromaDB()

    elevenlabs: ElevenLabsConfig = ElevenLabsConfig()

    coqui: CoquiConfig = CoquiConfig()

    tts: TTSConfig = TTSConfig()

    class Config:
        extra = "ignore"

@@ -24,8 +24,6 @@ CommandStatus = signal("command_status")
WorldState = signal("world_state")
ArchivedHistory = signal("archived_history")

AudioQueue = signal("audio_queue")

MessageEdited = signal("message_edited")

handlers = {

@@ -48,5 +46,4 @@ handlers = {
    "archived_history": ArchivedHistory,
    "message_edited": MessageEdited,
    "prompt_sent": PromptSent,
    "audio_queue": AudioQueue,
}

@@ -4,7 +4,7 @@ from dataclasses import dataclass
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from talemate.tale_mate import Scene, Actor, SceneMessage
    from talemate.tale_mate import Scene, Actor

__all__ = [
    "Event",

@@ -46,8 +46,4 @@ class GameLoopStartEvent(GameLoopEvent):

@dataclass
class GameLoopActorIterEvent(GameLoopEvent):
    actor: Actor

@dataclass
class GameLoopNewMessageEvent(GameLoopEvent):
    message: SceneMessage
    actor: Actor

@@ -190,11 +190,8 @@ async def load_scene_from_data(
        await scene.add_actor(actor)

    if scene.environment != "creative":
        try:
            await scene.world_state.request_update(initial_only=True)
        except Exception as e:
            log.error("world_state.request_update", error=e)

        await scene.world_state.request_update(initial_only=True)

    # the scene has been saved before (since we just loaded it), so we set the saved flag to True
    # as long as the scene has a memory_id.
    scene.saved = "memory_id" in scene_data

@@ -473,6 +473,8 @@ class Prompt:

        # remove all duplicate whitespace
        cleaned = re.sub(r"\s+", " ", cleaned)
        print("set_json_response", cleaned)

        return self.set_prepared_response(cleaned)

@@ -516,7 +518,7 @@ class Prompt:

        log.warning("parse_json_response error on first attempt - sending to AI to fix", response=response, error=e)
        fixed_response = await self.client.send_prompt(
            f"fix the syntax errors in this JSON string, but keep the structure as is. Remove any comments.\n\nError:{e}\n\n```json\n{response}\n```<|BOT|>"+"{",
            f"fix the syntax errors in this JSON string, but keep the structure as is.\n\nError:{e}\n\n```json\n{response}\n```<|BOT|>"+"{",
            kind="analyze_long",
        )
        log.warning("parse_json_response error on first attempt - sending to AI to fix", response=response, error=e)
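The `parse_json_response` hunk above retries by asking the model to repair malformed JSON. The control flow reduces to a try/except re-parse; a minimal sketch with a stand-in fixer (the `fix_json` helper is hypothetical, standing in for the LLM round-trip):

```python
import json

def fix_json(broken: str) -> str:
    # stand-in for the "send the error back to the LLM" repair step;
    # here we just strip a known trailing comma for demonstration
    return broken.replace(",}", "}")

def parse_json_response(response: str) -> dict:
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        # one repair attempt, then re-parse (errors propagate on second failure)
        return json.loads(fix_json(response))

print(parse_json_response('{"a": 1,}'))  # -> {'a': 1}
```

Limiting the repair to a single attempt keeps a persistently broken response from looping forever.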
@@ -33,7 +33,8 @@ You may chose to have {{ talking_character.name}} respond to the conversation, o
|
||||
|
||||
Use an informal and colloquial register with a conversational tone. Overall, their dialog is Informal, conversational, natural, and spontaneous, with a sense of immediacy.
|
||||
|
||||
Spoken words MUST be enclosed in double quotes, e.g. {{ talking_character.name}}: "spoken words.".
|
||||
Spoken word should be enclosed in double quotes, e.g. "Hello, how are you?"
|
||||
Narration and actions should be enclosed in asterisks, e.g. *She smiles.*
|
||||
{{ extra_instructions }}
|
||||
<|CLOSE_SECTION|>
|
||||
{% if memory -%}
|
||||
|
||||
@@ -8,18 +8,12 @@ Scenario Premise: {{ scene.description }}
|
||||
{% endfor %}
|
||||
{% endblock -%}
|
||||
<|CLOSE_SECTION|>
|
||||
{% for scene_context in scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context()), min_dialogue=25) -%}
|
||||
{% for scene_context in scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context())) -%}
|
||||
{{ scene_context }}
|
||||
{% endfor %}
|
||||
<|SECTION:TASK|>
|
||||
Based on the previous line '{{ last_line }}', create the next line of narration. This line should focus solely on describing sensory details (like sounds, sights, smells, tactile sensations) or external actions that move the story forward. Avoid including any character's internal thoughts, feelings, or dialogue. Your narration should directly respond to '{{ last_line }}', either by elaborating on the immediate scene or by subtly advancing the plot. Generate exactly one sentence of new narration. If the character is trying to determine some state, truth or situation, try to answer as part of the narration.
|
||||
|
||||
Be creative and generate something new and interesting, but stay true to the setting and context of the story so far.
|
||||
|
||||
Use an informal and colloquial register with a conversational tone. Overall, the narrative is Informal, conversational, natural, and spontaneous, with a sense of immediacy.
|
||||
|
||||
Narration style should be that of a 90s point and click adventure game. You are omniscient and can describe the scene in detail.

Only generate new narration.
Be creative and generate something new and interesting.
<|CLOSE_SECTION|>
{{ set_prepared_response('*') }}
@@ -14,16 +14,17 @@ Last time we checked on {{ character.name }}:

<|SECTION:INFORMATION|>
{{ query_memory("How old is {character.name}?") }}
{{ query_scene("Where is {character.name} and what is {character.name} doing?") }}
{{ query_scene("what is {character.name} wearing? Be explicit.") }}
{{ query_scene("Where is {character.name}?") }}
{{ query_scene("what is {character.name} doing?") }}
{{ query_scene("what is {character.name} wearing?") }}
<|CLOSE_SECTION|>

<|SECTION:TASK|>
Last line of dialogue: {{ scene.history[-1] }}
Questions: Where is {{ character.name}} currently and what are they doing? What is {{ character.name }}'s appearance at the end of the dialogue? What is {{ character.pronoun_2 }} wearing? What position is {{ character.pronoun_2 }} in?
Instruction: Answer the questions to describe {{ character.name }}'s appearance at the end of the dialogue and summarize into narrative description. Use the whole dialogue for context. You must fill in gaps using imagination as long as it fits the existing context. You will provide a confident and decisive answer to the question.
Instruction: Answer the questions to describe {{ character.name }}'s appearance at the end of the dialogue and summarize into narrative description. Use the whole dialogue for context.
Content Context: This is a specific scene from {{ scene.context }}
Narration style: point and click adventure game from the 90s
Expected Answer: A brief summarized visual description of {{ character.name }}'s appearance at the end of the dialogue. NEVER break the fourth wall. (2 to 3 sentences)
Expected Answer: A summarized visual description of {{ character.name }}'s appearance at the dialogue.
<|CLOSE_SECTION|>
Narrator answers: {{ bot_token }}At the end of the dialogue,
@@ -14,17 +14,9 @@ Content Context: {{ scene.context }}
{{ scene_context }}
{% endfor %}
<|SECTION:TASK|>
Continue the current dialogue by narrating the progression of the scene.

Continue the current dialogue by narrating the progression of the scene
Narration style: point and click adventure game from the 90s
If the scene is over, narrate the beginning of the next scene.

Be creative and generate something new and interesting, but stay true to the setting and context of the story so far.

Use an informal and colloquial register with a conversational tone. Overall, the narrative is informal, conversational, natural, and spontaneous, with a sense of immediacy.

Narration style should be that of a 90s point and click adventure game. You are omniscient and can describe the scene in detail.

Only generate new narration. Avoid including any character's internal thoughts or dialogue.
<|CLOSE_SECTION|>
{{ bot_token }}
{% for row in scene.history[-10:] -%}
@@ -6,21 +6,15 @@
{% endfor %}
<|SECTION:TASK|>
{% if query.endswith("?") -%}
Question: {{ query }}
Extra context: {{ query_memory(query, as_question_answer=False) }}
Instruction: Analyze Context, History and Dialogue and then answer the question: "{{ query }}".

When evaluating both story and context, story is more important. You can fill in gaps using imagination as long as it is based on the existing context.

Respect the scene progression and answer in the context of the end of the dialogue.

Use your imagination to fill in gaps in order to answer the question in a confident and decisive manner. Avoid uncertainty and vagueness.
Instruction: Analyze Context, History and Dialogue. When evaluating both story and memory, story is more important. You can fill in gaps using imagination as long as it is based on the existing context. Respect the scene progression and answer in the context of the end of the dialogue.
{% else -%}
Instruction: {{ query }}
Extra context: {{ query_memory(query, as_question_answer=False) }}
Answer based on Context, History and Dialogue.
When evaluating both story and context, story is more important. You can fill in gaps using imagination as long as it is based on the existing context.
Answer based on Context, History and Dialogue. When evaluating both story and memory, story is more important. You can fill in gaps using imagination as long as it is based on the existing context.
{% endif -%}
Content Context: This is a specific scene from {{ scene.context }}
Your answer should be in the style of short, concise narration that fits the context of the scene. (1 to 2 sentences)
Your answer should be in the style of short narration that fits the context of the scene.
<|CLOSE_SECTION|>
Narrator answers: {% if at_the_end %}{{ bot_token }}At the end of the dialogue, {% endif %}
@@ -1,6 +1,6 @@
{% set questions = instruct_text("Ask the narrator three (3) questions to gather more context from the past for the continuation of this conversation. If a character is asking about a state, location or information about an item or another character, make sure to include question(s) that help gather context for this.", text) %}

<|SECTION:CONTEXT|>
{% for memory in query_memory(questions, as_question_answer=False, max_tokens=max_tokens-500, iterate=10) -%}
{% for memory in query_memory(text, as_question_answer=False, max_tokens=max_tokens-500, iterate=20) -%}
{{ memory }}

{% endfor -%}
@@ -8,7 +8,7 @@
<|SECTION:TASK|>
Answer the following questions:

{{ questions }}
{{ instruct_text("Ask the narrator three (3) questions to gather more context from the past for the continuation of this conversation. If a character is asking about a state, location or information about an item or another character, make sure to include question(s) that help gather context for this.", text) }}

You answers should be precise, truthful and short. Pay close attention to timestamps when retrieving information from the context.


@@ -1 +1 @@
Instructions for a talented story analyst. The analyst will analyze parts of a story or dialogue and give truthful answers based on the dialogue or events given to him. The analyst will never make up facts or lie in his answers and will use crude, lewd and strong language if the context requires it.
Instructions for a talented story analyst. The analyst will analyze parts of a story or dialogue and give truthful answers based on the dialogue or events given to him. The analyst will never make up facts or lie in his answers and will use crude, lewd and strong language if the content requires it.
@@ -56,7 +56,7 @@ async def websocket_endpoint(websocket, path):
await instance.sync_client_bootstraps()
except Exception as e:
log.error("send_client_bootstraps", error=e, traceback=traceback.format_exc())
await asyncio.sleep(15)
await asyncio.sleep(60)

send_client_bootstraps_task = asyncio.create_task(send_client_bootstraps())

@@ -110,6 +110,7 @@ async def websocket_endpoint(websocket, path):
elif action_type == "request_scenes_list":
query = data.get("query", "")
handler.request_scenes_list(query)

elif action_type == "configure_clients":
handler.configure_clients(data.get("clients"))
elif action_type == "configure_agents":
@@ -1,6 +1,5 @@
import pydantic
import structlog
from talemate import VERSION

from talemate.config import Config as AppConfigData, load_config, save_config

@@ -9,12 +8,6 @@ log = structlog.get_logger("talemate.server.config")
class ConfigPayload(pydantic.BaseModel):
config: AppConfigData

class DefaultCharacterPayload(pydantic.BaseModel):
name: str
gender: str
description: str
color: str = "#3362bb"

class ConfigPlugin:

router = "config"
@@ -43,38 +36,8 @@ class ConfigPlugin:
save_config(current_config)

self.websocket_handler.config = current_config
self.websocket_handler.queue_put({
"type": "app_config",
"data": load_config(),
"version": VERSION
})

self.websocket_handler.queue_put({
"type": "config",
"action": "save_complete",
})

async def handle_save_default_character(self, data):

log.info("Saving default character", data=data["data"])

payload = DefaultCharacterPayload(**data["data"])

current_config = load_config()

current_config["game"]["default_player_character"] = payload.model_dump()

log.info("Saving default character", character=current_config["game"]["default_player_character"])

save_config(current_config)

self.websocket_handler.config = current_config
self.websocket_handler.queue_put({
"type": "app_config",
"data": load_config(),
"version": VERSION
})
self.websocket_handler.queue_put({
"type": "config",
"action": "save_default_character_complete",
})

})
@@ -1,26 +0,0 @@
import structlog

import talemate.instance as instance

log = structlog.get_logger("talemate.server.tts")

class TTSPlugin:
router = "tts"

def __init__(self, websocket_handler):
self.websocket_handler = websocket_handler
self.tts = None

async def handle(self, data:dict):

action = data.get("action")


if action == "test":
return await self.handle_test(data)

async def handle_test(self, data:dict):

tts_agent = instance.get_agent("tts")

await tts_agent.generate("Welcome to talemate!")
@@ -91,7 +91,7 @@ class WebsocketHandler(Receiver):
for agent_typ, agent_config in self.agents.items():
try:
client = self.llm_clients.get(agent_config.get("client"))["client"]
except TypeError as e:
except TypeError:
client = None

if not client:
@@ -167,16 +167,11 @@ class WebsocketHandler(Receiver):
log.info("Configuring clients", clients=clients)

for client in clients:

client.pop("status", None)

if client["type"] in ["textgenwebui", "lmstudio"]:
try:
max_token_length = int(client.get("max_token_length", 2048))
except ValueError:
continue

client.pop("model", None)

self.llm_clients[client["name"]] = {
"type": client["type"],
@@ -185,10 +180,6 @@ class WebsocketHandler(Receiver):
"max_token_length": max_token_length,
}
elif client["type"] == "openai":

client.pop("model_name", None)
client.pop("apiUrl", None)

self.llm_clients[client["name"]] = {
"type": "openai",
"name": client["name"],
@@ -222,25 +213,16 @@ class WebsocketHandler(Receiver):
def configure_agents(self, agents):
self.agents = {typ: {} for typ in instance.agent_types()}

log.debug("Configuring agents")
log.debug("Configuring agents", agents=agents)

for agent in agents:
name = agent["name"]

# special case for memory agent
if name == "memory" or name == "tts":
if name == "memory":
self.agents[name] = {
"name": name,
}
agent_instance = instance.get_agent(name, **self.agents[name])
if agent_instance.has_toggle:
self.agents[name]["enabled"] = agent["enabled"]

if getattr(agent_instance, "actions", None):
self.agents[name]["actions"] = agent.get("actions", {})

agent_instance.apply_config(**self.agents[name])
log.debug("Configured agent", name=name)
continue

if name not in self.agents:
@@ -437,14 +419,6 @@ class WebsocketHandler(Receiver):
}
)

def handle_audio_queue(self, emission: Emission):
self.queue_put(
{
"type": "audio_queue",
"data": emission.data,
}
)

def handle_request_input(self, emission: Emission):
self.waiting_for_input = True

@@ -46,7 +46,7 @@ log = structlog.get_logger("talemate")
async_signals.register("game_loop_start")
async_signals.register("game_loop")
async_signals.register("game_loop_actor_iter")
async_signals.register("game_loop_new_message")


class Character:
"""
@@ -578,7 +578,6 @@ class Scene(Emitter):
"game_loop": async_signals.get("game_loop"),
"game_loop_start": async_signals.get("game_loop_start"),
"game_loop_actor_iter": async_signals.get("game_loop_actor_iter"),
"game_loop_new_message": async_signals.get("game_loop_new_message"),
}

self.setup_emitter(scene=self)
@@ -705,12 +704,6 @@ class Scene(Emitter):
messages=messages,
)
)

loop = asyncio.get_event_loop()
for message in messages:
loop.run_until_complete(self.signals["game_loop_new_message"].send(
events.GameLoopNewMessageEvent(scene=self, event_type="game_loop_new_message", message=message)
))

def push_archive(self, entry: data_objects.ArchiveEntry):

@@ -1184,7 +1177,7 @@ class Scene(Emitter):
},
)

self.log.debug("scene_status", scene=self.name, scene_time=self.ts, human_ts=util.iso8601_duration_to_human(self.ts, suffix=""), saved=self.saved)
self.log.debug("scene_status", scene=self.name, scene_time=self.ts, saved=self.saved)

def set_environment(self, environment: str):
"""
@@ -1197,7 +1190,6 @@ class Scene(Emitter):
"""
Accepts an iso6801 duration string and advances the scene's world state by that amount
"""
log.debug("advance_time", ts=ts, scene_ts=self.ts, duration=isodate.parse_duration(ts), scene_duration=isodate.parse_duration(self.ts))

self.ts = isodate.duration_isoformat(
isodate.parse_duration(self.ts) + isodate.parse_duration(ts)
@@ -1220,12 +1212,9 @@ class Scene(Emitter):
if self.archived_history[i].get("ts"):
self.ts = self.archived_history[i]["ts"]
break

end = self.archived_history[-1].get("end", 0)
else:
end = 0

for message in self.history[end:]:

for message in self.history:
if isinstance(message, TimePassageMessage):
self.advance_time(message.ts)

@@ -490,39 +490,30 @@ def clean_attribute(attribute: str) -> str:




def duration_to_timedelta(duration):
"""Convert an isodate.Duration object or a datetime.timedelta object to a datetime.timedelta object."""
# Check if the duration is already a timedelta object
if isinstance(duration, datetime.timedelta):
return duration

# If it's an isodate.Duration object with separate year, month, day, hour, minute, second attributes
"""Convert an isodate.Duration object to a datetime.timedelta object."""
days = int(duration.years) * 365 + int(duration.months) * 30 + int(duration.days)
seconds = duration.tdelta.seconds
return datetime.timedelta(days=days, seconds=seconds)
return datetime.timedelta(days=days)

def timedelta_to_duration(delta):
"""Convert a datetime.timedelta object to an isodate.Duration object."""
# Extract days and convert to years, months, and days
days = delta.days
years = days // 365
days %= 365
months = days // 30
days %= 30
# Convert remaining seconds to hours, minutes, and seconds
seconds = delta.seconds
hours = seconds // 3600
seconds %= 3600
minutes = seconds // 60
seconds %= 60
return isodate.Duration(years=years, months=months, days=days, hours=hours, minutes=minutes, seconds=seconds)
return isodate.duration.Duration(years=years, months=months, days=days)

def parse_duration_to_isodate_duration(duration_str):
"""Parse ISO 8601 duration string and ensure the result is an isodate.Duration."""
parsed_duration = isodate.parse_duration(duration_str)
if isinstance(parsed_duration, datetime.timedelta):
return timedelta_to_duration(parsed_duration)
days = parsed_duration.days
years = days // 365
days %= 365
months = days // 30
days %= 30
return isodate.duration.Duration(years=years, months=months, days=days)
return parsed_duration

def iso8601_diff(duration_str1, duration_str2):
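The hunk above converts between calendar durations and `datetime.timedelta` using the approximation of 365-day years and 30-day months. A stdlib-only sketch of that round-trip (the `Duration` dataclass here is a hypothetical stand-in for `isodate.Duration`, so the example runs without the isodate package installed):

```python
import datetime
from dataclasses import dataclass

# Hypothetical stand-in for isodate.Duration; only the fields used by the
# approximation in the hunk above are modeled.
@dataclass
class Duration:
    years: int = 0
    months: int = 0
    days: int = 0

def duration_to_timedelta(duration):
    """Approximate a calendar duration as a timedelta (365-day years, 30-day months)."""
    days = duration.years * 365 + duration.months * 30 + duration.days
    return datetime.timedelta(days=days)

def timedelta_to_duration(delta):
    """Invert the approximation: split total days back into years/months/days."""
    days = delta.days
    years, days = divmod(days, 365)
    months, days = divmod(days, 30)
    return Duration(years=years, months=months, days=days)

td = duration_to_timedelta(Duration(years=1, months=2, days=3))
assert td.days == 365 + 60 + 3
assert timedelta_to_duration(td) == Duration(years=1, months=2, days=3)
```

Note the approximation is lossy for real calendars (a 31-day month round-trips as "1 month, 1 day"), which is acceptable here since the values only feed human-readable summaries.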
@@ -542,50 +533,40 @@ def iso8601_diff(duration_str1, duration_str2):

return difference

def iso8601_duration_to_human(iso_duration, suffix: str = " ago"):

def iso8601_duration_to_human(iso_duration, suffix:str=" ago"):
# Parse the ISO8601 duration string into an isodate duration object
if not isinstance(iso_duration, isodate.Duration):
duration = isodate.parse_duration(iso_duration)
else:

if isinstance(iso_duration, isodate.Duration):
duration = iso_duration

# Extract years, months, days, and the time part as seconds
years, months, days, hours, minutes, seconds = 0, 0, 0, 0, 0, 0
else:
duration = isodate.parse_duration(iso_duration)

if isinstance(duration, isodate.Duration):
years = duration.years
months = duration.months
days = duration.days
hours = duration.tdelta.seconds // 3600
minutes = (duration.tdelta.seconds % 3600) // 60
seconds = duration.tdelta.seconds % 60
elif isinstance(duration, datetime.timedelta):
seconds = duration.tdelta.total_seconds()
else:
years, months = 0, 0
days = duration.days
hours = duration.seconds // 3600
minutes = (duration.seconds % 3600) // 60
seconds = duration.seconds % 60
seconds = duration.total_seconds() - days * 86400  # Extract time-only part

# Adjust for cases where duration is a timedelta object
# Convert days to weeks and days if applicable
weeks, days = divmod(days, 7)

# Build the human-readable components
hours, seconds = divmod(seconds, 3600)
minutes, seconds = divmod(seconds, 60)

components = []
if years:
components.append(f"{years} Year{'s' if years > 1 else ''}")
if months:
components.append(f"{months} Month{'s' if months > 1 else ''}")
if weeks:
components.append(f"{weeks} Week{'s' if weeks > 1 else ''}")
if days:
components.append(f"{days} Day{'s' if days > 1 else ''}")
if hours:
components.append(f"{hours} Hour{'s' if hours > 1 else ''}")
components.append(f"{int(hours)} Hour{'s' if hours > 1 else ''}")
if minutes:
components.append(f"{minutes} Minute{'s' if minutes > 1 else ''}")
components.append(f"{int(minutes)} Minute{'s' if minutes > 1 else ''}")
if seconds:
components.append(f"{seconds} Second{'s' if seconds > 1 else ''}")
components.append(f"{int(seconds)} Second{'s' if seconds > 1 else ''}")

# Construct the human-readable string
if len(components) > 1:
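The `divmod` chain in the hunk above splits a raw day/second count into week, day, hour, minute, and second components before building the human-readable string. A minimal sketch of that breakdown (a simplified stand-alone helper, not the exact talemate function, which also handles years and months):

```python
def humanize_components(days, seconds):
    """Break a day/second count into week/day/hour/minute/second parts,
    mirroring the divmod chain used by iso8601_duration_to_human."""
    weeks, days = divmod(days, 7)
    hours, seconds = divmod(seconds, 3600)
    minutes, seconds = divmod(seconds, 60)
    parts = []
    for value, name in [(weeks, "Week"), (days, "Day"), (hours, "Hour"),
                        (minutes, "Minute"), (seconds, "Second")]:
        if value:  # skip zero-valued units, as the original does
            parts.append(f"{int(value)} {name}{'s' if value > 1 else ''}")
    return parts

# 10 days, 3725 seconds -> 1 week, 3 days, 1 hour, 2 minutes, 5 seconds
assert humanize_components(10, 3725) == ["1 Week", "3 Days", "1 Hour", "2 Minutes", "5 Seconds"]
```

The `int(...)` casts match the fix in the hunk: when the input came through `timedelta.total_seconds()` the values are floats, and without the cast the output would read "2.0 Minutes".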
@@ -595,7 +576,7 @@ def iso8601_duration_to_human(iso_duration, suffix: str = " ago"):
human_str = components[0]
else:
human_str = "Moments"


return f"{human_str}{suffix}"

def iso8601_diff_to_human(start, end):
@@ -603,7 +584,6 @@ def iso8601_diff_to_human(start, end):
return ""

diff = iso8601_diff(start, end)

return iso8601_duration_to_human(diff)


@@ -857,15 +837,6 @@ def ensure_dialog_line_format(line:str):
elif segment_open is not None and segment_open != c:
# open segment is not the same as the current character
# opening - close the current segment and open a new one

# if we are at the last character we append the segment
if i == len(line)-1 and segment.strip():
segment += c
segments += [segment.strip()]
segment_open = None
segment = None
continue

segments += [segment.strip()]
segment_open = c
segment = c
@@ -881,15 +852,14 @@ def ensure_dialog_line_format(line:str):
segment += c

if segment is not None:
if segment.strip().strip("*").strip('"'):
segments += [segment.strip()]
segments += [segment.strip()]

for i in range(len(segments)):
segment = segments[i]
if segment in ['"', '*']:
if i > 0:
prev_segment = segments[i-1]
if prev_segment and prev_segment[-1] not in ['"', '*']:
if prev_segment[-1] not in ['"', '*']:
segments[i-1] = f"{prev_segment}{segment}"
segments[i] = ""
continue
@@ -930,27 +900,4 @@ def ensure_dialog_line_format(line:str):
elif next_segment and next_segment[0] == '*':
segments[i] = f"\"{segment}\""

for i in range(len(segments)):
segments[i] = clean_uneven_markers(segments[i], '"')
segments[i] = clean_uneven_markers(segments[i], '*')

return " ".join(segment for segment in segments if segment).strip()


def clean_uneven_markers(chunk:str, marker:str):

# if there is an uneven number of quotes, remove the last one if its
# at the end of the chunk. If its in the middle, add a quote to the endc
count = chunk.count(marker)

if count % 2 == 1:
if chunk.endswith(marker):
chunk = chunk[:-1]
elif chunk.startswith(marker):
chunk = chunk[1:]
elif count == 1:
chunk = chunk.replace(marker, "")
else:
chunk += marker

return chunk
return " ".join(segment for segment in segments if segment)
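The `clean_uneven_markers` helper added in the hunk above balances stray quote or asterisk markers in a dialogue chunk. A self-contained sketch of its behavior, reindented from the diff (the logic matches the hunk; indentation is inferred, since the scrape flattened it):

```python
def clean_uneven_markers(chunk: str, marker: str):
    # If there is an uneven number of markers: strip a stray marker from the
    # end or start, drop a lone mid-string marker entirely, and otherwise
    # close the open pair by appending a marker at the end.
    count = chunk.count(marker)
    if count % 2 == 1:
        if chunk.endswith(marker):
            chunk = chunk[:-1]
        elif chunk.startswith(marker):
            chunk = chunk[1:]
        elif count == 1:
            chunk = chunk.replace(marker, "")
        else:
            chunk += marker
    return chunk

assert clean_uneven_markers('"hello', '"') == "hello"          # stray leading quote
assert clean_uneven_markers('say "hi" now"', '"') == 'say "hi" now'  # stray trailing quote
assert clean_uneven_markers('a "b" c "d', '"') == 'a "b" c "d"'      # unclosed pair gets closed
```

Balanced input (an even marker count) passes through untouched, which is why `ensure_dialog_line_format` can safely run it over every segment.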
@@ -1,7 +1,6 @@
from pydantic import BaseModel
from talemate.emit import emit
import structlog
import traceback
from typing import Union

import talemate.instance as instance
@@ -60,8 +59,7 @@ class WorldState(BaseModel):
world_state = await self.agent.request_world_state()
except Exception as e:
self.emit()
log.error("world_state.request_update", error=e, traceback=traceback.format_exc())
return
raise e

previous_characters = self.characters
previous_items = self.items

243 talemate_frontend/package-lock.json (generated)
@@ -64,13 +64,12 @@
}
},
"node_modules/@babel/code-frame": {
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.23.5.tgz",
"integrity": "sha512-CgH3s1a96LipHCmSUmYFPwY7MNx8C3avkq7i4Wl3cfa662ldtUe4VM1TPXX70pfmrlWTb6jLqTYrZyT2ZTJBgA==",
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/code-frame/-/code-frame-7.22.5.tgz",
"integrity": "sha512-Xmwn266vad+6DAqEB2A6V/CcZVp62BbwVmcOJc2RPuwih1kw02TjQvWVWlcKGbBPd+8/0V5DEkOcizRGYsspYQ==",
"dev": true,
"dependencies": {
"@babel/highlight": "^7.23.4",
"chalk": "^2.4.2"
"@babel/highlight": "^7.22.5"
},
"engines": {
"node": ">=6.9.0"
@@ -130,12 +129,12 @@
}
},
"node_modules/@babel/generator": {
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.23.5.tgz",
"integrity": "sha512-BPssCHrBD+0YrxviOa3QzpqwhNIXKEtOa2jQrm4FlmkC2apYgRnQcmPWiGZDlGxiNtltnUFolMe8497Esry+jA==",
"version": "7.22.7",
"resolved": "https://registry.npmmirror.com/@babel/generator/-/generator-7.22.7.tgz",
"integrity": "sha512-p+jPjMG+SI8yvIaxGgeW24u7q9+5+TGpZh8/CuB7RhBKd7RCy8FayNEFNNKrNK/eUcY/4ExQqLmyrvBXKsIcwQ==",
"dev": true,
"dependencies": {
"@babel/types": "^7.23.5",
"@babel/types": "^7.22.5",
"@jridgewell/gen-mapping": "^0.3.2",
"@jridgewell/trace-mapping": "^0.3.17",
"jsesc": "^2.5.1"
@@ -244,22 +243,22 @@
}
},
"node_modules/@babel/helper-environment-visitor": {
"version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.20.tgz",
"integrity": "sha512-zfedSIzFhat/gFhWfHtgWvlec0nqB9YEIVrpuwjruLlXfUSnA8cJB0miHKwqDnQ7d32aKo2xt88/xZptwxbfhA==",
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.5.tgz",
"integrity": "sha512-XGmhECfVA/5sAt+H+xpSg0mfrHq6FzNr9Oxh7PSEBBRUb/mL7Kz3NICXb194rCqAEdxkhPT1a88teizAFyvk8Q==",
"dev": true,
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/helper-function-name": {
"version": "7.23.0",
"resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.23.0.tgz",
"integrity": "sha512-OErEqsrxjZTJciZ4Oo+eoZqeW9UIiOcuYKRJA4ZAgV9myA+pOXhhmpfNCKjEH/auVfEYVFJ6y1Tc4r0eIApqiw==",
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-function-name/-/helper-function-name-7.22.5.tgz",
"integrity": "sha512-wtHSq6jMRE3uF2otvfuD3DIvVhOsSNshQl0Qrd7qC9oQJzHvOL4qQXlQn2916+CXGywIjpGuIkoyZRRxHPiNQQ==",
"dev": true,
"dependencies": {
"@babel/template": "^7.22.15",
"@babel/types": "^7.23.0"
"@babel/template": "^7.22.5",
"@babel/types": "^7.22.5"
},
"engines": {
"node": ">=6.9.0"
@@ -413,18 +412,18 @@
}
},
"node_modules/@babel/helper-string-parser": {
"version": "7.23.4",
"resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.23.4.tgz",
"integrity": "sha512-803gmbQdqwdf4olxrX4AJyFBV/RTr3rSmOj0rKwesmzlfhYNDEs+/iOcznzpNWlJlIlTJC2QfPFcHB6DlzdVLQ==",
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-string-parser/-/helper-string-parser-7.22.5.tgz",
"integrity": "sha512-mM4COjgZox8U+JcXQwPijIZLElkgEpO5rsERVDJTc2qfCDfERyob6k5WegS14SX18IIjv+XD+GrqNumY5JRCDw==",
"dev": true,
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/helper-validator-identifier": {
"version": "7.22.20",
"resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.20.tgz",
"integrity": "sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==",
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.5.tgz",
"integrity": "sha512-aJXu+6lErq8ltp+JhkJUfk1MTGyuA4v7f3pA+BJ5HLfNC6nAQ0Cpi9uOquUj8Hehg0aUiHzWQbOVJGao6ztBAQ==",
"dev": true,
"engines": {
"node": ">=6.9.0"
@@ -469,13 +468,13 @@
}
},
"node_modules/@babel/highlight": {
"version": "7.23.4",
"resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.23.4.tgz",
"integrity": "sha512-acGdbYSfp2WheJoJm/EBBBLh/ID8KDc64ISZ9DYtBmC8/Q204PZJLHyzeB5qMzJ5trcOkybd78M4x2KWsUq++A==",
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/highlight/-/highlight-7.22.5.tgz",
"integrity": "sha512-BSKlD1hgnedS5XRnGOljZawtag7H1yPfQp0tdNJCHoH6AZ+Pcm9VvkrK59/Yy593Ypg0zMxH2BxD1VPYUQ7UIw==",
"dev": true,
"dependencies": {
"@babel/helper-validator-identifier": "^7.22.20",
"chalk": "^2.4.2",
"@babel/helper-validator-identifier": "^7.22.5",
"chalk": "^2.0.0",
"js-tokens": "^4.0.0"
},
"engines": {
@@ -483,9 +482,9 @@
}
},
"node_modules/@babel/parser": {
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.23.5.tgz",
"integrity": "sha512-hOOqoiNXrmGdFbhgCzu6GiURxUgM27Xwd/aPuu8RfHEZPBzL1Z54okAHAQjXfcQNwvrlkAmAp4SlRTZ45vlthQ==",
"version": "7.22.7",
"resolved": "https://registry.npmmirror.com/@babel/parser/-/parser-7.22.7.tgz",
"integrity": "sha512-7NF8pOkHP5o2vpmGgNGcfAeCvOYhGLyA3Z4eBQkT1RJlWu47n63bCs93QfJ2hIAFCil7L5P2IWhs1oToVgrL0Q==",
"bin": {
"parser": "bin/babel-parser.js"
},
@@ -1774,33 +1773,33 @@
}
},
"node_modules/@babel/template": {
"version": "7.22.15",
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.22.15.tgz",
"integrity": "sha512-QPErUVm4uyJa60rkI73qneDacvdvzxshT3kksGqlGWYdOTIUOwJ7RDUL8sGqslY1uXWSL6xMFKEXDS3ox2uF0w==",
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/template/-/template-7.22.5.tgz",
"integrity": "sha512-X7yV7eiwAxdj9k94NEylvbVHLiVG1nvzCV2EAowhxLTwODV1jl9UzZ48leOC0sH7OnuHrIkllaBgneUykIcZaw==",
"dev": true,
"dependencies": {
"@babel/code-frame": "^7.22.13",
"@babel/parser": "^7.22.15",
"@babel/types": "^7.22.15"
"@babel/code-frame": "^7.22.5",
"@babel/parser": "^7.22.5",
"@babel/types": "^7.22.5"
},
"engines": {
"node": ">=6.9.0"
}
},
"node_modules/@babel/traverse": {
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.23.5.tgz",
"integrity": "sha512-czx7Xy5a6sapWWRx61m1Ke1Ra4vczu1mCTtJam5zRTBOonfdJ+S/B6HYmGYu3fJtr8GGET3si6IhgWVBhJ/m8w==",
"version": "7.22.8",
"resolved": "https://registry.npmmirror.com/@babel/traverse/-/traverse-7.22.8.tgz",
"integrity": "sha512-y6LPR+wpM2I3qJrsheCTwhIinzkETbplIgPBbwvqPKc+uljeA5gP+3nP8irdYt1mjQaDnlIcG+dw8OjAco4GXw==",
"dev": true,
"dependencies": {
"@babel/code-frame": "^7.23.5",
"@babel/generator": "^7.23.5",
"@babel/helper-environment-visitor": "^7.22.20",
"@babel/helper-function-name": "^7.23.0",
"@babel/code-frame": "^7.22.5",
"@babel/generator": "^7.22.7",
"@babel/helper-environment-visitor": "^7.22.5",
"@babel/helper-function-name": "^7.22.5",
"@babel/helper-hoist-variables": "^7.22.5",
"@babel/helper-split-export-declaration": "^7.22.6",
"@babel/parser": "^7.23.5",
"@babel/types": "^7.23.5",
"@babel/parser": "^7.22.7",
"@babel/types": "^7.22.5",
"debug": "^4.1.0",
"globals": "^11.1.0"
},
@@ -1809,13 +1808,13 @@
}
},
"node_modules/@babel/types": {
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/types/-/types-7.23.5.tgz",
"integrity": "sha512-ON5kSOJwVO6xXVRTvOI0eOnWe7VdUcIpsovGo9U/Br4Ie4UVFQTboO2cYnDhAGU6Fp+UxSiT+pMft0SMHfuq6w==",
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/types/-/types-7.22.5.tgz",
"integrity": "sha512-zo3MIHGOkPOfoRXitsgHLjEXmlDaD/5KU1Uzuc9GNiZPhSqVxVRtxuPaSBZDsYZ9qV88AjtMtWW7ww98loJ9KA==",
"dev": true,
"dependencies": {
"@babel/helper-string-parser": "^7.23.4",
"@babel/helper-validator-identifier": "^7.22.20",
"@babel/helper-string-parser": "^7.22.5",
"@babel/helper-validator-identifier": "^7.22.5",
"to-fast-properties": "^2.0.0"
},
"engines": {
@@ -3042,9 +3041,9 @@
},
"node_modules/@vue/vue-loader-v15": {
"name": "vue-loader",
"version": "15.11.1",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-15.11.1.tgz",
"integrity": "sha512-0iw4VchYLePqJfJu9s62ACWUXeSqM30SQqlIftbYWM3C+jpPcEHKSPUZBLjSF9au4HTHQ/naF6OGnO3Q/qGR3Q==",
"version": "15.10.1",
"resolved": "https://registry.npmmirror.com/vue-loader/-/vue-loader-15.10.1.tgz",
"integrity": "sha512-SaPHK1A01VrNthlix6h1hq4uJu7S/z0kdLUb6klubo738NeQoLbS6V9/d8Pv19tU0XdQKju3D1HSKuI8wJ5wMA==",
"dev": true,
"dependencies": {
"@vue/component-compiler-utils": "^3.1.0",
@@ -3061,9 +3060,6 @@
"cache-loader": {
"optional": true
},
"prettier": {
"optional": true
},
"vue-template-compiler": {
"optional": true
}
@@ -8162,23 +8158,9 @@
}
},
"node_modules/postcss": {
"version": "8.4.31",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
"integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
"funding": [
{
"type": "opencollective",
"url": "https://opencollective.com/postcss/"
},
{
"type": "tidelift",
"url": "https://tidelift.com/funding/github/npm/postcss"
},
{
"type": "github",
"url": "https://github.com/sponsors/ai"
}
],
"version": "8.4.25",
"resolved": "https://registry.npmmirror.com/postcss/-/postcss-8.4.25.tgz",
"integrity": "sha512-7taJ/8t2av0Z+sQEvNzCkpDynl0tX3uJMCODi6nT3PfASC7dYCWV9aQ+uiCf+KBD4SEFcu+GvJdGdwzQ6OSjCw==",
"dependencies": {
"nanoid": "^3.3.6",
"picocolors": "^1.0.0",
@@ -11228,13 +11210,12 @@
}
},
"@babel/code-frame": {
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.23.5.tgz",
"integrity": "sha512-CgH3s1a96LipHCmSUmYFPwY7MNx8C3avkq7i4Wl3cfa662ldtUe4VM1TPXX70pfmrlWTb6jLqTYrZyT2ZTJBgA==",
"version": "7.22.5",
"resolved": "https://registry.npmmirror.com/@babel/code-frame/-/code-frame-7.22.5.tgz",
"integrity": "sha512-Xmwn266vad+6DAqEB2A6V/CcZVp62BbwVmcOJc2RPuwih1kw02TjQvWVWlcKGbBPd+8/0V5DEkOcizRGYsspYQ==",
"dev": true,
"requires": {
"@babel/highlight": "^7.23.4",
"chalk": "^2.4.2"
"@babel/highlight": "^7.22.5"
}
},
"@babel/compat-data": {
@@ -11278,12 +11259,12 @@
}
},
"@babel/generator": {
"version": "7.23.5",
"resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.23.5.tgz",
"integrity": "sha512-BPssCHrBD+0YrxviOa3QzpqwhNIXKEtOa2jQrm4FlmkC2apYgRnQcmPWiGZDlGxiNtltnUFolMe8497Esry+jA==",
"version": "7.22.7",
"resolved": "https://registry.npmmirror.com/@babel/generator/-/generator-7.22.7.tgz",
"integrity": "sha512-p+jPjMG+SI8yvIaxGgeW24u7q9+5+TGpZh8/CuB7RhBKd7RCy8FayNEFNNKrNK/eUcY/4ExQqLmyrvBXKsIcwQ==",
"dev": true,
"requires": {
"@babel/types": "^7.23.5",
"@babel/types": "^7.22.5",
"@jridgewell/gen-mapping": "^0.3.2",
"@jridgewell/trace-mapping": "^0.3.17",
"jsesc": "^2.5.1"
@@ -11362,19 +11343,19 @@
}
},
||||
"@babel/helper-environment-visitor": {
|
||||
"version": "7.22.20",
|
||||
"resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.20.tgz",
|
||||
"integrity": "sha512-zfedSIzFhat/gFhWfHtgWvlec0nqB9YEIVrpuwjruLlXfUSnA8cJB0miHKwqDnQ7d32aKo2xt88/xZptwxbfhA==",
|
||||
"version": "7.22.5",
|
||||
"resolved": "https://registry.npmmirror.com/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.5.tgz",
|
||||
"integrity": "sha512-XGmhECfVA/5sAt+H+xpSg0mfrHq6FzNr9Oxh7PSEBBRUb/mL7Kz3NICXb194rCqAEdxkhPT1a88teizAFyvk8Q==",
|
||||
"dev": true
|
||||
},
|
||||
"@babel/helper-function-name": {
|
||||
"version": "7.23.0",
|
||||
"resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.23.0.tgz",
|
||||
"integrity": "sha512-OErEqsrxjZTJciZ4Oo+eoZqeW9UIiOcuYKRJA4ZAgV9myA+pOXhhmpfNCKjEH/auVfEYVFJ6y1Tc4r0eIApqiw==",
|
||||
"version": "7.22.5",
|
||||
"resolved": "https://registry.npmmirror.com/@babel/helper-function-name/-/helper-function-name-7.22.5.tgz",
|
||||
"integrity": "sha512-wtHSq6jMRE3uF2otvfuD3DIvVhOsSNshQl0Qrd7qC9oQJzHvOL4qQXlQn2916+CXGywIjpGuIkoyZRRxHPiNQQ==",
|
||||
"dev": true,
|
||||
"requires": {
|
||||
"@babel/template": "^7.22.15",
|
||||
"@babel/types": "^7.23.0"
|
||||
"@babel/template": "^7.22.5",
|
||||
"@babel/types": "^7.22.5"
|
||||
}
|
||||
},
|
||||
"@babel/helper-hoist-variables": {
|
||||
@@ -11489,15 +11470,15 @@
|
||||
}
|
||||
},
|
||||
"@babel/helper-string-parser": {
|
||||
"version": "7.23.4",
|
||||
"resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.23.4.tgz",
|
||||
"integrity": "sha512-803gmbQdqwdf4olxrX4AJyFBV/RTr3rSmOj0rKwesmzlfhYNDEs+/iOcznzpNWlJlIlTJC2QfPFcHB6DlzdVLQ==",
|
||||
"version": "7.22.5",
|
||||
"resolved": "https://registry.npmmirror.com/@babel/helper-string-parser/-/helper-string-parser-7.22.5.tgz",
|
||||
"integrity": "sha512-mM4COjgZox8U+JcXQwPijIZLElkgEpO5rsERVDJTc2qfCDfERyob6k5WegS14SX18IIjv+XD+GrqNumY5JRCDw==",
|
||||
"dev": true
|
||||
},
|
||||
"@babel/helper-validator-identifier": {
|
||||
"version": "7.22.20",
|
||||
"resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.20.tgz",
|
||||
"integrity": "sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==",
|
||||
"version": "7.22.5",
|
||||
"resolved": "https://registry.npmmirror.com/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.5.tgz",
|
||||
"integrity": "sha512-aJXu+6lErq8ltp+JhkJUfk1MTGyuA4v7f3pA+BJ5HLfNC6nAQ0Cpi9uOquUj8Hehg0aUiHzWQbOVJGao6ztBAQ==",
|
||||
"dev": true
|
||||
},
|
||||
"@babel/helper-validator-option": {
|
||||
@@ -11530,20 +11511,20 @@
|
||||
}
|
||||
},
|
||||
"@babel/highlight": {
|
||||
"version": "7.23.4",
|
||||
"resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.23.4.tgz",
|
||||
"integrity": "sha512-acGdbYSfp2WheJoJm/EBBBLh/ID8KDc64ISZ9DYtBmC8/Q204PZJLHyzeB5qMzJ5trcOkybd78M4x2KWsUq++A==",
|
||||
"version": "7.22.5",
|
||||
"resolved": "https://registry.npmmirror.com/@babel/highlight/-/highlight-7.22.5.tgz",
|
||||
"integrity": "sha512-BSKlD1hgnedS5XRnGOljZawtag7H1yPfQp0tdNJCHoH6AZ+Pcm9VvkrK59/Yy593Ypg0zMxH2BxD1VPYUQ7UIw==",
|
||||
"dev": true,
|
||||
"requires": {
|
||||
"@babel/helper-validator-identifier": "^7.22.20",
|
||||
"chalk": "^2.4.2",
|
||||
"@babel/helper-validator-identifier": "^7.22.5",
|
||||
"chalk": "^2.0.0",
|
||||
"js-tokens": "^4.0.0"
|
||||
}
|
||||
},
|
||||
"@babel/parser": {
|
||||
"version": "7.23.5",
|
||||
"resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.23.5.tgz",
|
||||
"integrity": "sha512-hOOqoiNXrmGdFbhgCzu6GiURxUgM27Xwd/aPuu8RfHEZPBzL1Z54okAHAQjXfcQNwvrlkAmAp4SlRTZ45vlthQ=="
|
||||
"version": "7.22.7",
|
||||
"resolved": "https://registry.npmmirror.com/@babel/parser/-/parser-7.22.7.tgz",
|
||||
"integrity": "sha512-7NF8pOkHP5o2vpmGgNGcfAeCvOYhGLyA3Z4eBQkT1RJlWu47n63bCs93QfJ2hIAFCil7L5P2IWhs1oToVgrL0Q=="
|
||||
},
|
||||
"@babel/plugin-bugfix-safari-id-destructuring-collision-in-function-expression": {
|
||||
"version": "7.22.5",
|
||||
@@ -12401,42 +12382,42 @@
|
||||
}
|
||||
},
|
||||
"@babel/template": {
|
||||
"version": "7.22.15",
|
||||
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.22.15.tgz",
|
||||
"integrity": "sha512-QPErUVm4uyJa60rkI73qneDacvdvzxshT3kksGqlGWYdOTIUOwJ7RDUL8sGqslY1uXWSL6xMFKEXDS3ox2uF0w==",
|
||||
"version": "7.22.5",
|
||||
"resolved": "https://registry.npmmirror.com/@babel/template/-/template-7.22.5.tgz",
|
||||
"integrity": "sha512-X7yV7eiwAxdj9k94NEylvbVHLiVG1nvzCV2EAowhxLTwODV1jl9UzZ48leOC0sH7OnuHrIkllaBgneUykIcZaw==",
|
||||
"dev": true,
|
||||
"requires": {
|
||||
"@babel/code-frame": "^7.22.13",
|
||||
"@babel/parser": "^7.22.15",
|
||||
"@babel/types": "^7.22.15"
|
||||
"@babel/code-frame": "^7.22.5",
|
||||
"@babel/parser": "^7.22.5",
|
||||
"@babel/types": "^7.22.5"
|
||||
}
|
||||
},
|
||||
"@babel/traverse": {
|
||||
"version": "7.23.5",
|
||||
"resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.23.5.tgz",
|
||||
"integrity": "sha512-czx7Xy5a6sapWWRx61m1Ke1Ra4vczu1mCTtJam5zRTBOonfdJ+S/B6HYmGYu3fJtr8GGET3si6IhgWVBhJ/m8w==",
|
||||
"version": "7.22.8",
|
||||
"resolved": "https://registry.npmmirror.com/@babel/traverse/-/traverse-7.22.8.tgz",
|
||||
"integrity": "sha512-y6LPR+wpM2I3qJrsheCTwhIinzkETbplIgPBbwvqPKc+uljeA5gP+3nP8irdYt1mjQaDnlIcG+dw8OjAco4GXw==",
|
||||
"dev": true,
|
||||
"requires": {
|
||||
"@babel/code-frame": "^7.23.5",
|
||||
"@babel/generator": "^7.23.5",
|
||||
"@babel/helper-environment-visitor": "^7.22.20",
|
||||
"@babel/helper-function-name": "^7.23.0",
|
||||
"@babel/code-frame": "^7.22.5",
|
||||
"@babel/generator": "^7.22.7",
|
||||
"@babel/helper-environment-visitor": "^7.22.5",
|
||||
"@babel/helper-function-name": "^7.22.5",
|
||||
"@babel/helper-hoist-variables": "^7.22.5",
|
||||
"@babel/helper-split-export-declaration": "^7.22.6",
|
||||
"@babel/parser": "^7.23.5",
|
||||
"@babel/types": "^7.23.5",
|
||||
"@babel/parser": "^7.22.7",
|
||||
"@babel/types": "^7.22.5",
|
||||
"debug": "^4.1.0",
|
||||
"globals": "^11.1.0"
|
||||
}
|
||||
},
|
||||
"@babel/types": {
|
||||
"version": "7.23.5",
|
||||
"resolved": "https://registry.npmjs.org/@babel/types/-/types-7.23.5.tgz",
|
||||
"integrity": "sha512-ON5kSOJwVO6xXVRTvOI0eOnWe7VdUcIpsovGo9U/Br4Ie4UVFQTboO2cYnDhAGU6Fp+UxSiT+pMft0SMHfuq6w==",
|
||||
"version": "7.22.5",
|
||||
"resolved": "https://registry.npmmirror.com/@babel/types/-/types-7.22.5.tgz",
|
||||
"integrity": "sha512-zo3MIHGOkPOfoRXitsgHLjEXmlDaD/5KU1Uzuc9GNiZPhSqVxVRtxuPaSBZDsYZ9qV88AjtMtWW7ww98loJ9KA==",
|
||||
"dev": true,
|
||||
"requires": {
|
||||
"@babel/helper-string-parser": "^7.23.4",
|
||||
"@babel/helper-validator-identifier": "^7.22.20",
|
||||
"@babel/helper-string-parser": "^7.22.5",
|
||||
"@babel/helper-validator-identifier": "^7.22.5",
|
||||
"to-fast-properties": "^2.0.0"
|
||||
}
|
||||
},
|
||||
@@ -13477,9 +13458,9 @@
|
||||
"integrity": "sha512-7OjdcV8vQ74eiz1TZLzZP4JwqM5fA94K6yntPS5Z25r9HDuGNzaGdgvwKYq6S+MxwF0TFRwe50fIR/MYnakdkQ=="
|
||||
},
|
||||
"@vue/vue-loader-v15": {
|
||||
"version": "npm:vue-loader@15.11.1",
|
||||
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-15.11.1.tgz",
|
||||
"integrity": "sha512-0iw4VchYLePqJfJu9s62ACWUXeSqM30SQqlIftbYWM3C+jpPcEHKSPUZBLjSF9au4HTHQ/naF6OGnO3Q/qGR3Q==",
|
||||
"version": "npm:vue-loader@15.10.1",
|
||||
"resolved": "https://registry.npmmirror.com/vue-loader/-/vue-loader-15.10.1.tgz",
|
||||
"integrity": "sha512-SaPHK1A01VrNthlix6h1hq4uJu7S/z0kdLUb6klubo738NeQoLbS6V9/d8Pv19tU0XdQKju3D1HSKuI8wJ5wMA==",
|
||||
"dev": true,
|
||||
"requires": {
|
||||
"@vue/component-compiler-utils": "^3.1.0",
|
||||
@@ -17591,9 +17572,9 @@
|
||||
}
|
||||
},
|
||||
"postcss": {
|
||||
"version": "8.4.31",
|
||||
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
|
||||
"integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
|
||||
"version": "8.4.25",
|
||||
"resolved": "https://registry.npmmirror.com/postcss/-/postcss-8.4.25.tgz",
|
||||
"integrity": "sha512-7taJ/8t2av0Z+sQEvNzCkpDynl0tX3uJMCODi6nT3PfASC7dYCWV9aQ+uiCf+KBD4SEFcu+GvJdGdwzQ6OSjCw==",
|
||||
"requires": {
|
||||
"nanoid": "^3.3.6",
|
||||
"picocolors": "^1.0.0",
|
||||
|
||||
@@ -7,12 +7,11 @@
size="14"></v-progress-circular>
<v-icon v-else-if="agent.status === 'uninitialized'" color="orange" size="14">mdi-checkbox-blank-circle</v-icon>
<v-icon v-else-if="agent.status === 'disabled'" color="grey-darken-2" size="14">mdi-checkbox-blank-circle</v-icon>
<v-icon v-else-if="agent.status === 'error'" color="red" size="14">mdi-checkbox-blank-circle</v-icon>
<v-icon v-else color="green" size="14">mdi-checkbox-blank-circle</v-icon>
<span class="ml-1" v-if="agent.label"> {{ agent.label }}</span>
<span class="ml-1" v-else> {{ agent.name }}</span>
</v-list-item-title>
<v-list-item-subtitle class="text-caption">
<v-list-item-subtitle>
{{ agent.client }}
</v-list-item-subtitle>
<v-chip class="mr-1" v-if="agent.status === 'disabled'" size="x-small">Disabled</v-chip>

@@ -66,10 +65,7 @@ export default {
for(let i = 0; i < this.state.agents.length; i++) {
let agent = this.state.agents[i];

if(!agent.data.requires_llm_client)
continue

if(agent.status === 'warning' || agent.status === 'error' || agent.status === 'uninitialized') {
if(agent.status === 'warning' || agent.status === 'error') {
console.log("agents: configuration required (1)", agent.status)
return true;
}

@@ -103,6 +99,7 @@
} else {
this.state.agents[index] = agent;
}
this.state.dialog = false;
this.$emit('agents-updated', this.state.agents);
},
editAgent(index) {
@@ -56,7 +56,7 @@
</v-list-item-subtitle>
</v-list-item>
</v-list>
<ClientModal :dialog="dialog" :formTitle="formTitle" @save="saveClient" @error="propagateError" @update:dialog="updateDialog"></ClientModal>
<ClientModal :dialog="dialog" :formTitle="formTitle" @save="saveClient" @update:dialog="updateDialog"></ClientModal>
<v-alert type="warning" variant="tonal" v-if="state.clients.length === 0">You have no LLM clients configured. Add one.</v-alert>
<v-btn @click="openModal" prepend-icon="mdi-plus-box">Add client</v-btn>
</div>

@@ -127,9 +127,6 @@ export default {
this.state.formTitle = 'Add Client';
this.state.dialog = true;
},
propagateError(error) {
this.$emit('error', error);
},
saveClient(client) {
const index = this.state.clients.findIndex(c => c.name === client.name);
if (index === -1) {

@@ -156,13 +153,10 @@ export default {
let agents = this.getAgents();
let client = this.state.clients[index];

this.saveClient(client);

for (let i = 0; i < agents.length; i++) {
agents[i].client = client.name;
console.log("Assigning client", client.name, "to agent", agents[i].name);
this.$emit('client-assigned', agents);
}
this.$emit('client-assigned', agents);
},
updateDialog(newVal) {
this.state.dialog = newVal;
@@ -10,7 +10,7 @@
</v-col>
<v-col cols="3" class="text-right">
<v-checkbox :label="enabledLabel()" hide-details density="compact" color="green" v-model="agent.enabled"
v-if="agent.data.has_toggle" @update:modelValue="save(false)"></v-checkbox>
v-if="agent.data.has_toggle"></v-checkbox>
</v-col>
</v-row>

@@ -18,7 +18,7 @@

</v-card-title>
<v-card-text class="scrollable-content">
<v-select v-if="agent.data.requires_llm_client" v-model="agent.client" :items="agent.data.client" label="Client" @update:modelValue="save(false)"></v-select>
<v-select v-model="agent.client" :items="agent.data.client" label="Client"></v-select>

<v-alert type="warning" variant="tonal" density="compact" v-if="agent.data.experimental">
This agent is currently experimental and may significantly decrease performance and / or require

@@ -27,25 +27,27 @@

<v-card v-for="(action, key) in agent.actions" :key="key" density="compact">
<v-card-subtitle>
<v-checkbox v-if="!actionAlwaysEnabled(key)" :label="agent.data.actions[key].label" hide-details density="compact" color="green" v-model="action.enabled" @update:modelValue="save(false)"></v-checkbox>
<v-checkbox :label="agent.data.actions[key].label" hide-details density="compact" color="green" v-model="action.enabled"></v-checkbox>
</v-card-subtitle>
<v-card-text>
<div v-if="!actionAlwaysEnabled(key)">
{{ agent.data.actions[key].description }}
</div>
{{ agent.data.actions[key].description }}
<div v-for="(action_config, config_key) in agent.data.actions[key].config" :key="config_key">
<div v-if="action.enabled">
<!-- render config widgets based on action_config.type (int, str, bool, float) -->
<v-text-field v-if="action_config.type === 'text' && action_config.choices === null" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" density="compact" @update:modelValue="save(true)"></v-text-field>
<v-autocomplete v-else-if="action_config.type === 'text' && action_config.choices !== null" v-model="action.config[config_key].value" :items="action_config.choices" :label="action_config.label" :hint="action_config.description" density="compact" item-title="label" item-value="value" @update:modelValue="save(false)"></v-autocomplete>
<v-slider v-if="action_config.type === 'number' && action_config.step !== null" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" :min="action_config.min" :max="action_config.max" :step="action_config.step" density="compact" thumb-label @update:modelValue="save(true)"></v-slider>
<v-checkbox v-if="action_config.type === 'bool'" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" density="compact" @update:modelValue="save(false)"></v-checkbox>
<v-text-field v-if="action_config.type === 'text'" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" density="compact"></v-text-field>
<v-slider v-if="action_config.type === 'number' && action_config.step !== null" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" :min="action_config.min" :max="action_config.max" :step="action_config.step" density="compact" thumb-label></v-slider>
<v-checkbox v-if="action_config.type === 'bool'" v-model="action.config[config_key].value" :label="action_config.label" :hint="action_config.description" density="compact"></v-checkbox>
</div>
</div>
</v-card-text>
</v-card>

</v-card-text>
<v-card-actions>
<v-spacer></v-spacer>
<v-btn color="primary" @click="close">Close</v-btn>
<v-btn color="primary" @click="save">Save</v-btn>
</v-card-actions>
</v-card>
</v-dialog>
</template>

@@ -56,10 +58,9 @@ export default {
dialog: Boolean,
formTitle: String
},
inject: ['state', 'getWebsocket'],
inject: ['state'],
data() {
return {
saveTimeout: null,
localDialog: this.state.dialog,
agent: { ...this.state.currentAgent }
};

@@ -89,32 +90,12 @@ export default {
return 'Disabled';
}
},
actionAlwaysEnabled(action) {
if (action.charAt(0) === '_') {
return true;
} else {
return false;
}
},

close() {
this.$emit('update:dialog', false);
},
save(delayed = false) {
console.log("save", delayed);
if(!delayed) {
this.$emit('save', this.agent);
return;
}

if(this.saveTimeout !== null)
clearTimeout(this.saveTimeout);

this.saveTimeout = setTimeout(() => {
this.$emit('save', this.agent);
}, 500);

//this.$emit('save', this.agent);
save() {
this.$emit('save', this.agent);
this.close();
}
}
}
@@ -1,96 +0,0 @@
<template>
<div class="audio-queue">
<span>{{ queue.length }} sound(s) queued</span>
<v-icon :color="isPlaying ? 'green' : ''" v-if="!isMuted" @click="toggleMute">mdi-volume-high</v-icon>
<v-icon :color="isPlaying ? 'red' : ''" v-else @click="toggleMute">mdi-volume-off</v-icon>
<v-icon class="ml-1" @click="stopAndClear">mdi-stop-circle-outline</v-icon>
</div>
</template>

<script>
export default {
name: 'AudioQueue',
data() {
return {
queue: [],
audioContext: null,
isPlaying: false,
isMuted: false,
currentSource: null
};
},
inject: ['getWebsocket', 'registerMessageHandler'],
created() {
this.audioContext = new (window.AudioContext || window.webkitAudioContext)();
this.registerMessageHandler(this.handleMessage);
},
methods: {
handleMessage(data) {
if (data.type === 'audio_queue') {
console.log('Received audio queue message', data)
this.addToQueue(data.data.audio_data);
}
},
addToQueue(base64Sound) {
const soundBuffer = this.base64ToArrayBuffer(base64Sound);
this.queue.push(soundBuffer);
this.playNextSound();
},
base64ToArrayBuffer(base64) {
const binaryString = window.atob(base64);
const len = binaryString.length;
const bytes = new Uint8Array(len);
for (let i = 0; i < len; i++) {
bytes[i] = binaryString.charCodeAt(i);
}
return bytes.buffer;
},
playNextSound() {
if (this.isPlaying || this.queue.length === 0) {
return;
}
this.isPlaying = true;
const soundBuffer = this.queue.shift();
this.audioContext.decodeAudioData(soundBuffer, (buffer) => {
const source = this.audioContext.createBufferSource();
source.buffer = buffer;
this.currentSource = source;
if (!this.isMuted) {
source.connect(this.audioContext.destination);
}
source.onended = () => {
this.isPlaying = false;
this.playNextSound();
};
source.start(0);
}, (error) => {
console.error('Error with decoding audio data', error);
});
},
toggleMute() {
this.isMuted = !this.isMuted;
if (this.isMuted && this.currentSource) {
this.currentSource.disconnect(this.audioContext.destination);
} else if (this.currentSource) {
this.currentSource.connect(this.audioContext.destination);
}
},
stopAndClear() {
if (this.currentSource) {
this.currentSource.stop();
this.currentSource.disconnect();
this.currentSource = null;
}
this.queue = [];
this.isPlaying = false;
}
}
};
</script>

<style scoped>
.audio-queue {
display: flex;
align-items: center;
}
</style>
@@ -2,37 +2,36 @@
<v-dialog v-model="localDialog" persistent max-width="600px">
<v-card>
<v-card-title>
<v-icon>mdi-network-outline</v-icon>
<span class="headline">{{ title() }}</span>
<span class="headline">{{ formTitle }}</span>
</v-card-title>
<v-card-text>
<v-container>
<v-row>
<v-col cols="6">
<v-select v-model="client.type" :disabled="!typeEditable()" :items="['openai', 'textgenwebui', 'lmstudio']" label="Client Type" @update:model-value="resetToDefaults"></v-select>
</v-col>
<v-col cols="6">
<v-text-field v-model="client.name" label="Client Name"></v-text-field>
</v-col>

</v-row>
<v-row>
<v-col cols="12">
<v-text-field v-model="client.apiUrl" v-if="isLocalApiClient(client)" label="API URL"></v-text-field>
<v-select v-model="client.model" v-if="client.type === 'openai'" :items="['gpt-4-1106-preview', 'gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k']" label="Model"></v-select>
</v-col>
</v-row>
<v-row>
<v-row>
<v-col cols="6">
<v-text-field v-model="client.max_token_length" v-if="isLocalApiClient(client)" type="number" label="Context Length"></v-text-field>
<v-select v-model="client.type" :items="['openai', 'textgenwebui', 'lmstudio']" label="Client Type"></v-select>
</v-col>
</v-row>
<v-col cols="6">
<v-text-field v-model="client.name" label="Client Name"></v-text-field>
</v-col>

</v-row>
<v-row>
<v-col cols="12">
<v-text-field v-model="client.apiUrl" v-if="isLocalApiClient(client)" label="API URL"></v-text-field>
<v-select v-model="client.model" v-if="client.type === 'openai'" :items="['gpt-4-1106-preview', 'gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k']" label="Model"></v-select>
</v-col>
</v-row>
<v-row>
<v-col cols="6">
<v-text-field v-model="client.max_token_length" v-if="isLocalApiClient(client)" type="number" label="Context Length"></v-text-field>
</v-col>
</v-row>
</v-container>
</v-card-text>
<v-card-actions>
<v-spacer></v-spacer>
<v-btn color="primary" text @click="close" prepend-icon="mdi-cancel">Cancel</v-btn>
<v-btn color="primary" text @click="save" prepend-icon="mdi-check-circle-outline">Save</v-btn>
<v-btn color="blue darken-1" text @click="close">Close</v-btn>
<v-btn color="blue darken-1" text @click="save">Save</v-btn>
</v-card-actions>
</v-card>
</v-dialog>

@@ -48,25 +47,7 @@ export default {
data() {
return {
localDialog: this.state.dialog,
client: { ...this.state.currentClient },
defaultValuesByCLientType: {
// when client type is changed in the modal, these values will be used
// to populate the form
'textgenwebui': {
apiUrl: 'http://localhost:5000',
max_token_length: 4096,
name_prefix: 'TextGenWebUI',
},
'openai': {
model: 'gpt-4-1106-preview',
name_prefix: 'OpenAI',
},
'lmstudio': {
apiUrl: 'http://localhost:1234',
max_token_length: 4096,
name_prefix: 'LMStudio',
}
}
client: { ...this.state.currentClient } // Define client data property
};
},
watch: {

@@ -87,47 +68,10 @@ export default {
}
},
methods: {
resetToDefaults() {
const defaults = this.defaultValuesByCLientType[this.client.type];
if (defaults) {
this.client.model = defaults.model || '';
this.client.apiUrl = defaults.apiUrl || '';
this.client.max_token_length = defaults.max_token_length || 4096;
// loop and build name from prefix, checking against current clients
let name = defaults.name_prefix;
let i = 2;
while (this.state.clients.find(c => c.name === name)) {
name = `${defaults.name_prefix} ${i}`;
i++;
}
this.client.name = name;
}
},
validateName() {

// if we are editing a client, we should exclude the current client from the check
if(!this.typeEditable()) {
return this.state.clients.findIndex(c => c.name === this.client.name && c.name !== this.state.currentClient.name) === -1;
}

return this.state.clients.findIndex(c => c.name === this.client.name) === -1;
},
typeEditable() {
return this.state.formTitle === 'Add Client';
},
title() {
return this.typeEditable() ? 'Add Client' : 'Edit Client';
},
close() {
this.$emit('update:dialog', false);
},
save() {

if(!this.validateName()) {
this.$emit('error', 'Client name already exists');
return;
}

this.$emit('save', this.client); // Emit save event with client object
this.close();
},
@@ -1,92 +0,0 @@
<template>
<v-dialog v-model="showModal" max-width="800px">
<v-card>
<v-card-title class="headline">Your Character</v-card-title>
<v-card-text>
<v-alert type="info" variant="tonal" v-if="defaultCharacter.name === ''" density="compact">You have not yet
configured a default player character. This character will be used when a scenario is loaded that does not come
with a pre-defined player character.</v-alert>
<v-container>
<v-row>
<v-col cols="12" sm="6">
<v-text-field v-model="defaultCharacter.name" label="Name" :rules="[rules.required]"></v-text-field>
</v-col>
<v-col cols="12" sm="6">
<v-text-field v-model="defaultCharacter.gender" label="Gender" :rules="[rules.required]"></v-text-field>
</v-col>
</v-row>
<v-row>
<v-col cols="12">
<v-textarea v-model="defaultCharacter.description" label="Description" auto-grow></v-textarea>
</v-col>
</v-row>
</v-container>
</v-card-text>
<v-card-actions>
<v-spacer></v-spacer>
<v-btn color="primary" v-if="!saving" text @click="cancel" prepend-icon="mdi-cancel">Cancel</v-btn>
<v-progress-circular v-else indeterminate color="primary" size="20"></v-progress-circular>
<v-btn color="primary" text :disabled="saving" @click="saveDefaultCharacter" prepend-icon="mdi-check-circle-outline">Continue</v-btn>
</v-card-actions>
</v-card>
</v-dialog>
</template>

<script>
export default {
name: 'DefaultCharacter',
inject: ['getWebsocket', 'registerMessageHandler'],
data() {
return {
showModal: false,
saving: false,
defaultCharacter: {
name: '',
gender: '',
description: '',
color: '#3362bb'
},
rules: {
required: value => !!value || 'Required.'
}
};
},
methods: {
saveDefaultCharacter() {
// Send the new default character data to the server
this.saving = true;
this.getWebsocket().send(JSON.stringify({
type: 'config',
action: 'save_default_character',
data: this.defaultCharacter
}));
},
cancel() {
this.$emit("cancel");
this.closeModal();
},
open() {
this.saving = false;
this.showModal = true;
},
closeModal() {
this.showModal = false;
},
handleMessage(message) {
if (message.type == 'config') {
if (message.action == 'save_default_character_complete') {
this.closeModal();
this.$emit("save");
}
}
},
},
created() {
this.registerMessageHandler(this.handleMessage);
},
};
</script>

<style scoped>
/* Add any specific styles for your DefaultCharacter modal here */
</style>
@@ -1,13 +1,10 @@
|
||||
<template>
|
||||
<v-list-subheader v-if="appConfig !== null" @click="toggle()" class="text-uppercase"><v-icon>mdi-script-text-outline</v-icon> Load
|
||||
<v-list-subheader @click="toggle()" class="text-uppercase"><v-icon>mdi-script-text-outline</v-icon> Load
|
||||
<v-progress-circular v-if="loading" indeterminate color="primary" size="20"></v-progress-circular>
|
||||
<v-icon v-if="expanded" icon="mdi-chevron-down"></v-icon>
|
||||
<v-icon v-else icon="mdi-chevron-up"></v-icon>
|
||||
</v-list-subheader>
|
||||
<v-list-subheader class="text-uppercase" v-else>
|
||||
<v-progress-circular indeterminate color="primary" size="20"></v-progress-circular> Waiting for config...
|
||||
</v-list-subheader>
|
||||
<v-list-item-group v-if="!loading && isConnected() && expanded && !configurationRequired() && appConfig !== null">
|
||||
<v-list-item-group v-if="!loading && isConnected() && expanded && !configurationRequired()">
|
||||
<v-list-item>
|
||||
<v-list-item-content class="mb-3">
|
||||
<!-- Toggle buttons for switching between file upload and path input -->
|
||||
@@ -46,18 +43,12 @@
|
||||
<div v-else-if="configurationRequired()">
|
||||
<v-alert type="warning" variant="tonal">You need to configure a Talemate client before you can load scenes.</v-alert>
|
||||
</div>
|
||||
<DefaultCharacter ref="defaultCharacterModal" @save="loadScene" @cancel="loadCanceled"></DefaultCharacter>
|
||||
</template>
|
||||
|
||||
|
||||
<script>
|
||||
import DefaultCharacter from './DefaultCharacter.vue';
|
||||
|
||||
export default {
|
||||
name: 'LoadScene',
|
||||
components: {
|
||||
DefaultCharacter,
|
||||
},
|
||||
data() {
|
||||
return {
|
||||
loading: false,
|
||||
@@ -69,16 +60,10 @@ export default {
            sceneSearchLoading: false,
            sceneSaved: null,
            expanded: true,
            appConfig: null, // Store the app configuration
        }
    },
    inject: ['getWebsocket', 'registerMessageHandler', 'isConnected', 'configurationRequired'],
    methods: {
        // Method to show the DefaultCharacter modal
        showDefaultCharacterModal() {
            this.$refs.defaultCharacterModal.open();
        },

        toggle() {
            this.expanded = !this.expanded;
        },
@@ -95,21 +80,9 @@ export default {
            this.getWebsocket().send(JSON.stringify({ type: 'request_scenes_list', query: this.sceneSearchInput }));
        },
        loadCreative() {
            if(this.sceneSaved === false) {
                if(!confirm("The current scene is not saved. Are you sure you want to load a new scene?")) {
                    return;
                }
            }

            this.loading = true;
            this.getWebsocket().send(JSON.stringify({ type: 'load_scene', file_path: "environment:creative" }));
        },
        loadCanceled() {
            console.log("Load canceled");
            this.loading = false;
            this.sceneFile = [];
        },

        loadScene() {

            if(this.sceneSaved === false) {
@@ -118,26 +91,13 @@ export default {
                }
            }

            this.sceneSaved = null;

            this.loading = true;
            if (this.inputMethod === 'file' && this.sceneFile.length > 0) { // Check if the input method is "file" and there is at least one file

                // if file is image check if default character is set
                if(this.sceneFile[0].type.startsWith("image/")) {
                    if(!this.appConfig.game.default_player_character.name) {
                        this.showDefaultCharacterModal();
                        return;
                    }
                }

                this.loading = true;

                // Convert the uploaded file to base64
                const reader = new FileReader();
                reader.readAsDataURL(this.sceneFile[0]); // Access the first file in the array
                reader.onload = () => {
                    //const base64File = reader.result.split(',')[1];
                    this.$emit("loading", true)
                    this.getWebsocket().send(JSON.stringify({
                        type: 'load_scene',
                        scene_data: reader.result,
@@ -146,28 +106,11 @@ export default {
                    this.sceneFile = [];
                };
            } else if (this.inputMethod === 'path' && this.sceneInput) { // Check if the input method is "path" and the scene input is not empty

                // if path ends with .png/jpg/webp check if default character is set

                if(this.sceneInput.endsWith(".png") || this.sceneInput.endsWith(".jpg") || this.sceneInput.endsWith(".webp")) {
                    if(!this.appConfig.game.default_player_character.name) {
                        this.showDefaultCharacterModal();
                        return;
                    }
                }

                this.loading = true;
                this.$emit("loading", true)
                this.getWebsocket().send(JSON.stringify({ type: 'load_scene', file_path: this.sceneInput }));
                this.sceneInput = '';
            }
        },
        handleMessage(data) {
            // Handle app configuration
            if (data.type === 'app_config') {
                this.appConfig = data.data;
                console.log("App config", this.appConfig);
            }

            // Scene loaded
            if (data.type === "system") {
@@ -190,11 +133,10 @@ export default {
                    return;
                }

            },
        }
    },
    created() {
        this.registerMessageHandler(this.handleMessage);
        //this.getWebsocket().send(JSON.stringify({ type: 'request_config' })); // Request the current app configuration
    },
    mounted() {
        console.log("Websocket", this.getWebsocket()); // Check if websocket is available
@@ -11,7 +11,7 @@
        Make sure the backend process is running.
    </p>
</v-alert>
<LoadScene ref="loadScene" @loading="sceneStartedLoading" />
<LoadScene ref="loadScene" />
<v-divider></v-divider>
<div :style="(sceneActive && scene.environment === 'scene' ? 'display:block' : 'display:none')">
    <!-- <GameOptions v-if="sceneActive" ref="gameOptions" /> -->
@@ -39,7 +39,7 @@
    Clients</v-list-subheader>
<v-list-item-group>
    <v-list-item>
        <AIClient ref="aiClient" @save="saveClients" @error="uxErrorHandler" @clients-updated="saveClients" @client-assigned="saveAgents"></AIClient>
        <AIClient ref="aiClient" @save="saveClients" @clients-updated="saveClients" @client-assigned="saveAgents"></AIClient>
    </v-list-item>
</v-list-item-group>
<v-divider></v-divider>
@@ -75,8 +75,6 @@
<span v-if="connecting" class="ml-1"><v-icon class="mr-1">mdi-progress-helper</v-icon>connecting</span>
<span v-else-if="connected" class="ml-1"><v-icon class="mr-1" color="green" size="14">mdi-checkbox-blank-circle</v-icon>connected</span>
<span v-else class="ml-1"><v-icon class="mr-1">mdi-progress-close</v-icon>disconnected</span>
<v-divider class="ml-1 mr-1" vertical></v-divider>
<AudioQueue ref="audioQueue" />
<v-spacer></v-spacer>
<span v-if="version !== null">v{{ version }}</span>
<span v-if="configurationRequired()">
@@ -163,7 +161,6 @@ import SceneHistory from './SceneHistory.vue';
import CreativeEditor from './CreativeEditor.vue';
import AppConfig from './AppConfig.vue';
import DebugTools from './DebugTools.vue';
import AudioQueue from './AudioQueue.vue';

export default {
    components: {
@@ -180,7 +177,6 @@ export default {
        CreativeEditor,
        AppConfig,
        DebugTools,
        AudioQueue,
    },
    name: 'TalemateApp',
    data() {
@@ -429,7 +425,7 @@ export default {
            let agent = this.$refs.aiAgent.getActive();

            if (agent) {
                return agent.label;
                return agent.name;
            }
            return null;
        },
@@ -448,14 +444,6 @@ export default {
        openAppConfig() {
            this.$refs.appConfig.show();
        },
        uxErrorHandler(error) {
            this.errorNotification = true;
            this.errorMessage = error;
        },
        sceneStartedLoading() {
            this.loading = true;
            this.sceneActive = false;
        }
    }
}
</script>
@@ -1,2 +0,0 @@
SYSTEM: {{ system_message }}
USER: {{ set_response(prompt, "\nASSISTANT: ") }}
@@ -1 +0,0 @@
Human: {{ system_message }} {{ set_response(prompt, "\n\nAssistant:") }}
@@ -1,4 +0,0 @@
{{ system_message }}

### Instruction:
{{ set_response(prompt, "\n\n### Response:\n") }}
@@ -1 +0,0 @@
User: {{ system_message }} {{ set_response(prompt, "\nAssistant: ") }}
@@ -1,4 +0,0 @@
<|im_start|>system
{{ system_message }}<|im_end|>
<|im_start|>user
{{ set_response(prompt, "<|im_end|>\n<|im_start|>assistant\n") }}
@@ -1 +0,0 @@
GPT4 Correct System: {{ system_message }}<|end_of_turn|>GPT4 Correct User: {{ set_response(prompt, "<|end_of_turn|>GPT4 Correct Assistant:") }}
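The deleted files above are per-model prompt templates (Vicuna-, Claude-, Alpaca-, ChatML- and OpenChat-style). As a rough illustration of how such a template turns a system message and a user prompt into one string, here is a minimal sketch of ChatML-style rendering. The `render_chatml` helper and the concatenation-based `set_response` are assumptions inferred from the placeholder names, not Talemate's actual template engine:

```python
def render_chatml(system_message: str, prompt: str) -> str:
    # set_response(text, suffix) presumably appends the marker after which
    # the model's completion should begin; mimicked here as concatenation.
    def set_response(text: str, suffix: str) -> str:
        return text + suffix

    response_prefix = "<|im_end|>\n<|im_start|>assistant\n"
    return (
        "<|im_start|>system\n"
        + system_message + "<|im_end|>\n"
        + "<|im_start|>user\n"
        + set_response(prompt, response_prefix)
    )

print(render_chatml("You are the narrator.", "Describe the tavern."))
```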
@@ -1,21 +0,0 @@
import pytest
from talemate.util import ensure_dialog_format

@pytest.mark.parametrize("input, expected", [
    ('Hello how are you?', 'Hello how are you?'),
    ('"Hello how are you?"', '"Hello how are you?"'),
    ('"Hello how are you?" he asks "I am fine"', '"Hello how are you?" *he asks* "I am fine"'),
    ('Hello how are you? *he asks* I am fine', '"Hello how are you?" *he asks* "I am fine"'),

    ('Hello how are you?" *he asks* I am fine', '"Hello how are you?" *he asks* "I am fine"'),
    ('Hello how are you?" *he asks I am fine', '"Hello how are you?" *he asks I am fine*'),
    ('Hello how are you?" *he asks* "I am fine" *', '"Hello how are you?" *he asks* "I am fine"'),

    ('"Hello how are you *he asks* I am fine"', '"Hello how are you" *he asks* "I am fine"'),
    ('This is a string without any markers', 'This is a string without any markers'),
    ('This is a string with an ending quote"', '"This is a string with an ending quote"'),
    ('This is a string with an ending asterisk*', '*This is a string with an ending asterisk*'),
    ('"Mixed markers*', '*Mixed markers*'),
])
def test_dialogue_cleanup(input, expected):
    assert ensure_dialog_format(input) == expected
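The parametrized cases above pin down the contract of `ensure_dialog_format`: spoken lines end up wrapped in double quotes and narration in asterisks. A deliberately simplified sketch of that idea follows; the real implementation also repairs the unbalanced-marker edge cases in the table, and `wrap_unmarked` is a hypothetical name, not Talemate's:

```python
import re

def wrap_unmarked(text: str) -> str:
    # Toy version of ensure_dialog_format: segments already wrapped in
    # *asterisks* are treated as narration and kept; everything else is
    # treated as spoken dialogue and wrapped in double quotes.
    parts = re.split(r"(\*[^*]+\*)", text)
    out = []
    for part in parts:
        part = part.strip()
        if not part:
            continue
        if part.startswith("*") and part.endswith("*"):
            out.append(part)  # narration segment, keep as-is
        else:
            out.append('"' + part.strip('"') + '"')  # quote the dialogue
    return " ".join(out)

print(wrap_unmarked("Hello how are you? *he asks* I am fine"))
# → "Hello how are you?" *he asks* "I am fine"
```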
@@ -1,38 +0,0 @@
from talemate.util import (
    iso8601_add,
    iso8601_correct_duration,
    iso8601_diff,
    iso8601_diff_to_human,
    iso8601_duration_to_human,
    parse_duration_to_isodate_duration,
    timedelta_to_duration,
    duration_to_timedelta,
    isodate
)


def test_isodate_utils():

    date1 = "P11MT15M"
    date2 = "PT1S"

    duration1 = parse_duration_to_isodate_duration(date1)
    assert duration1.months == 11
    assert duration1.tdelta.seconds == 900

    duration2 = parse_duration_to_isodate_duration(date2)
    assert duration2.seconds == 1

    timedelta1 = duration_to_timedelta(duration1)
    assert timedelta1.seconds == 900
    assert timedelta1.days == 11*30, timedelta1.days

    timedelta2 = duration_to_timedelta(duration2)
    assert timedelta2.seconds == 1

    parsed = parse_duration_to_isodate_duration("P11MT14M59S")
    assert iso8601_diff(date1, date2) == parsed, parsed

    assert iso8601_duration_to_human(date1) == "11 Months and 15 Minutes ago", iso8601_duration_to_human(date1)
    assert iso8601_duration_to_human(date2) == "1 Second ago", iso8601_duration_to_human(date2)
    assert iso8601_duration_to_human(iso8601_diff(date1, date2)) == "11 Months, 14 Minutes and 59 Seconds ago", iso8601_duration_to_human(iso8601_diff(date1, date2))
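For reference, the human-readable rendering asserted above ("11 Months and 15 Minutes ago") can be sketched with a plain regex over the ISO-8601 duration designators. The real `iso8601_duration_to_human` builds on the `isodate` package, so this stdlib-only version is only an approximation:

```python
import re

# Simplified sketch of iso8601_duration_to_human: parse the designators of an
# ISO-8601 duration like "P11MT15M" and render "11 Months and 15 Minutes ago".
def duration_to_human(iso: str) -> str:
    m = re.fullmatch(
        r"P(?:(\d+)Y)?(?:(\d+)M)?(?:(\d+)D)?"
        r"(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?",
        iso,
    )
    if not m:
        raise ValueError("not an ISO-8601 duration: " + iso)
    names = ["Year", "Month", "Day", "Hour", "Minute", "Second"]
    parts = [
        f"{int(value)} {name}" + ("s" if int(value) != 1 else "")
        for value, name in zip(m.groups(), names)
        if value
    ]
    # Join all but the last component with commas, the last with "and".
    if len(parts) > 1:
        return ", ".join(parts[:-1]) + " and " + parts[-1] + " ago"
    return parts[0] + " ago"

print(duration_to_human("P11MT15M"))  # → 11 Months and 15 Minutes ago
```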