mirror of https://github.com/vegu-ai/talemate.git
synced 2025-12-16 19:57:47 +01:00
Compare commits
8 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 611f77a730 | |
| | 0738899ac9 | |
| | 76b7b5c0e0 | |
| | cae5e8d217 | |
| | 97bfd3a672 | |
| | 8fb1341b93 | |
| | cba4412f3d | |
| | 2ad87f6e8a | |
36 README.md
@@ -2,14 +2,17 @@

Allows you to play roleplay scenarios with large language models.

It does not run any large language models itself but relies on existing APIs. Currently supports **text-generation-webui** and **openai**.

This means you need to either have an openai api key or know how to setup [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (locally or remotely via gpu renting. `--extension openai` flag needs to be set)

| | |
|------------------------------------------|------------------------------------------|

As of version 0.13.0 the legacy text-generator-webui API `--extension api` is no longer supported, please use their new `--extension openai` api implementation instead.

> :warning: **It does not run any large language models itself but relies on existing APIs. Currently supports OpenAI, text-generation-webui and LMStudio.**




This means you need to either have:

- an [OpenAI](https://platform.openai.com/overview) api key
- OR setup local (or remote via runpod) LLM inference via one of these options:
  - [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
  - [LMStudio](https://lmstudio.ai/)

## Current features
@@ -22,7 +25,8 @@ As of version 0.13.0 the legacy text-generator-webui API `--extension api` is no

- editor: improves AI responses (very hit and miss at the moment)
- world state: generates world snapshot and handles passage of time (objects and characters)
- creator: character / scenario creator
- multi-client (agents can be connected to separate APIs)
- tts: text to speech via elevenlabs, coqui studio, coqui local
- multi-client support (agents can be connected to separate APIs)
- long term memory
- chromadb integration
- passage of time
@@ -40,7 +44,7 @@ Kinda making it up as i go along, but i want to lean more into gameplay through

In no particular order:

- TTS support
- Extension support
- modular agents and clients
- Improved world state
@@ -60,13 +64,13 @@ In no particular order:

## Installation

Post [here](https://github.com/final-wombat/talemate/issues/17) if you run into problems during installation.
Post [here](https://github.com/vegu-ai/talemate/issues/17) if you run into problems during installation.

### Windows

1. Download and install Python 3.10 or higher from the [official Python website](https://www.python.org/downloads/windows/).
1. Download and install Node.js from the [official Node.js website](https://nodejs.org/en/download/). This will also install npm.
1. Download the Talemate project to your local machine. Download from [the Releases page](https://github.com/final-wombat/talemate/releases).
1. Download the Talemate project to your local machine. Download from [the Releases page](https://github.com/vegu-ai/talemate/releases).
1. Unpack the download and run `install.bat` by double clicking it. This will set up the project on your local machine.
1. Once the installation is complete, you can start the backend and frontend servers by running `start.bat`.
1. Navigate your browser to http://localhost:8080
@@ -75,7 +79,7 @@ Post [here](https://github.com/final-wombat/talemate/issues/17) if you run into

`python 3.10` or higher is required.

1. `git clone git@github.com:final-wombat/talemate`
1. `git clone git@github.com:vegu-ai/talemate`
1. `cd talemate`
1. `source install.sh`
1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050`.
@@ -118,15 +122,15 @@ https://www.reddit.com/r/LocalLLaMA/comments/17fhp9k/huge_llm_comparisontest_39_

On the right hand side click the "Add Client" button. If there is no button, you may need to toggle the client options by clicking this button:

As of version 0.13.0 the legacy text-generator-webui API `--extension api` is no longer supported, please use their new `--extension openai` api implementation instead.



### Text-generation-webui

> :warning: As of version 0.13.0 the legacy text-generator-webui API `--extension api` is no longer supported, please use their new `--extension openai` api implementation instead.

In the modal if you're planning to connect to text-generation-webui, you can likely leave everything as is and just click Save.




### OpenAI

@@ -164,8 +168,8 @@ Make sure you save the scene after the character is loaded as it can then be loa

Please read the documents in the `docs` folder for more advanced configuration and usage.

- Creative mode (docs WIP)
- Prompt template overrides
- [Prompt template overrides](docs/templates.md)
- [Text-to-Speech (TTS)](docs/tts.md)
- [ChromaDB (long term memory)](docs/chromadb.md)
- Runpod Integration
- [Runpod Integration](docs/runpod.md)
- Creative mode

@@ -7,12 +7,7 @@ creator:
- a thrilling action story aimed at an adult audience.
- a mysterious adventure aimed at an adult audience.
- an epic sci-fi adventure aimed at an adult audience.
game:
  default_player_character:
    color: '#6495ed'
    description: a young man with a penchant for adventure.
    gender: male
    name: Elmer
game: {}

## Long-term memory
BIN docs/img/client-setup-0.13.png (Normal file, binary file not shown; size: 14 KiB)
BIN docs/img/runpod-docs-1.png (Normal file, binary file not shown; size: 6.6 KiB)
52 docs/runpod.md (Normal file)
@@ -0,0 +1,52 @@
## RunPod integration

RunPod allows you to quickly set up and run text-generation-webui instances on powerful GPUs, remotely. If you want to run the significantly larger models (like 70B parameters) at reasonable speeds, this is probably the best way to do it.

### Create / grab your RunPod API key and add it to the Talemate config

You can manage your RunPod API keys at [https://www.runpod.io/console/user/settings](https://www.runpod.io/console/user/settings)

Add the key to your Talemate config file (config.yaml):

```yaml
runpod:
  api_key: <your api key>
```

Then restart Talemate.
### Create a RunPod instance

#### Community Cloud

The community cloud pods are cheaper and there are generally more GPUs available. However, they do not support persistent storage, so you will have to download your model and data every time you deploy a pod.

#### Secure Cloud

The secure cloud pods are more expensive and there are generally fewer GPUs available, but they do support persistent storage.

Persistent volumes are super convenient, but optional for our purposes. They are **not** free, and you will have to pay for the storage you use.
### Deploy pod

For our purposes it does not matter which cloud you choose. The only thing that matters is that the pod deploys a text-generation-webui instance, and you ensure that by choosing the right template.

Pick the GPU you want to use (for 70B models you want at least 48GB of VRAM), click `Deploy`, then select a template and deploy.

When choosing the template for your pod, choose the `RunPod TheBloke LLMs` template. This template is pre-configured with all the dependencies needed to run text-generation-webui. There are other text-generation-webui templates, but they are usually out of date; I have found this one to be consistently good.

> :warning: The name of your pod is important and ensures that Talemate will be able to find it. Talemate will only be able to find pods that have `thebloke llms` or `textgen` in their name (case insensitive).
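The check described in the warning amounts to a case-insensitive substring match on the pod name; a minimal sketch (an assumed illustration, not Talemate's actual code):

```python
# Assumed illustration of the pod-name matching rule described above;
# Talemate's actual implementation may differ.
def is_talemate_visible(pod_name: str) -> bool:
    name = pod_name.lower()
    return "thebloke llms" in name or "textgen" in name

print(is_talemate_visible("TheBloke LLMs - 70B"))   # True
print(is_talemate_visible("my-textgen-runpod"))     # True
print(is_talemate_visible("stable-diffusion-pod"))  # False
```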
Once your pod is deployed, has finished setting up, and is running, the client will automatically appear in the Talemate client list, making it available to use just like a locally hosted text-generation-webui instance.



### Connecting to the text-generation-webui UI

To manage your text-generation-webui instance, click the `Connect` button in your RunPod pod dashboard at [https://www.runpod.io/console/pods](https://www.runpod.io/console/pods) and in the popup click `Connect to HTTP Service [Port 7860]` to open the text-generation-webui UI. Then just download and load your model as you normally would.

## :warning: Always check your pod status on the RunPod dashboard

Talemate is not a suitable or reliable way to determine whether your pod is currently running. **Always** check the RunPod dashboard to see whether your pod is running.

While your pod is running it will be eating up your credits, so make sure to stop it when you're not using it.
82 docs/templates.md (Normal file)
@@ -0,0 +1,82 @@
# Template Overrides in Talemate

## Introduction to Templates

In Talemate, templates are used to generate dynamic content for various agents involved in roleplaying scenarios. These templates leverage the Jinja2 templating engine, allowing for the inclusion of variables, conditional logic, and custom functions to create rich and interactive narratives.

## Template Structure

A typical template in Talemate consists of several sections, each enclosed within special section tags (`<|SECTION:NAME|>` and `<|CLOSE_SECTION|>`). These sections can include character details, dialogue examples, scenario overviews, tasks, and additional context. Templates utilize loops and blocks to iterate over data and render content conditionally based on the task requirements.
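For illustration, a rendered prompt built from such a template might look like the following (an assumed shape based on the description above, not an actual Talemate template):

```python
# Assumed shape of a rendered Talemate-style prompt using the section tags
# described above (illustrative only).
rendered = """\
<|SECTION:CHARACTERS|>
Elmer: a young man with a penchant for adventure.
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Continue the dialogue in 1-3 sentences.
<|CLOSE_SECTION|>"""
print(rendered)
```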
## Overriding Templates

Users can customize the behavior of Talemate by overriding the default templates. To override a template, create a new template file with the same name in the `./templates/prompts/{agent}/` directory. When a custom template is present, Jinja2 will prioritize it over the default template located in the `./src/talemate/prompts/templates/{agent}/` directory.
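This lookup order amounts to a two-entry template search path. The sketch below is an assumed illustration using Jinja2's standard `FileSystemLoader` (which searches its directories in order), not Talemate's actual loader code:

```python
# Assumed illustration of the override lookup order described above;
# Talemate's real loader configuration may differ.
from jinja2 import Environment, FileSystemLoader

def make_env(agent: str) -> Environment:
    # Directories are searched in order, so a user override shadows
    # the bundled default of the same name.
    return Environment(loader=FileSystemLoader([
        f"./templates/prompts/{agent}",               # user overrides (checked first)
        f"./src/talemate/prompts/templates/{agent}",  # bundled defaults
    ]))

env = make_env("creator")
# e.g. this would resolve the user copy first, falling back to the default:
# template = env.get_template("character-attributes-human.jinja2")
```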
## Creator Agent Templates

The creator agent templates allow for the creation of new characters within the character creator. Following the naming convention `character-attributes-*.jinja2`, `character-details-*.jinja2`, and `character-example-dialogue-*.jinja2`, users can add new templates that will be available for selection in the character creator.

### Requirements for Creator Templates

- All three types (`attributes`, `details`, `example-dialogue`) need to be available for a choice to be valid in the character creator.
- Users can check the human templates for an understanding of how to structure these templates.

### Example Templates

- [Character Attributes Human Template](src/talemate/prompts/templates/creator/character-attributes-human.jinja2)
- [Character Details Human Template](src/talemate/prompts/templates/creator/character-details-human.jinja2)
- [Character Example Dialogue Human Template](src/talemate/prompts/templates/creator/character-example-dialogue-human.jinja2)

These example templates can serve as a guide for users to create their own custom templates for the character creator.
### Extending Existing Templates

Jinja2's template inheritance feature allows users to extend existing templates and add extra information. By using the `{% extends "template-name.jinja2" %}` tag, a new template can inherit everything from an existing template and then add or override specific blocks of content.

#### Example

To add a description of a character's hairstyle to the human character details template, you could create a new template like this:

```jinja2
{% extends "character-details-human.jinja2" %}
{% block questions %}
{% if character_details.q("what does "+character.name+"'s hair look like?") -%}
Briefly describe {{ character.name }}'s hair-style using a narrative writing style that reminds of mid 90s point and click adventure games. (2 - 3 sentences).
{% endif %}
{% endblock %}
```

This example shows how to extend the `character-details-human.jinja2` template and add a block for questions about the character's hair. The `{% block questions %}` tag is used to define a section where additional questions can be inserted or existing ones can be overridden.
## Advanced Template Topics

### Jinja2 Functions in Talemate

Talemate exposes several functions to the Jinja2 template environment, providing utilities for data manipulation, querying, and controlling content flow. Here's a list of available functions:

1. `set_prepared_response(response, prepend)`: Sets the prepared response with an optional prepend string. This function allows the template to specify the beginning of the LLM response when processing the rendered template. For example, `set_prepared_response("Certainly!")` will ensure that the LLM's response starts with "Certainly!".
2. `set_prepared_response_random(responses, prefix)`: Chooses a random response from a list and sets it as the prepared response with an optional prefix.
3. `set_eval_response(empty)`: Prepares the response for evaluation, optionally initializing a counter for an empty string.
4. `set_json_response(initial_object, instruction, cutoff)`: Prepares for a JSON response with an initial object and optional instruction and cutoff.
5. `set_question_eval(question, trigger, counter, weight)`: Sets up a question for evaluation with a trigger, counter, and weight.
6. `disable_dedupe()`: Disables deduplication of the response text.
7. `random(min, max)`: Generates a random integer between the specified minimum and maximum.
8. `query_scene(query, at_the_end, as_narrative)`: Queries the scene with a question and returns the formatted response.
9. `query_text(query, text, as_question_answer)`: Queries a text with a question and returns the formatted response.
10. `query_memory(query, as_question_answer, **kwargs)`: Queries the memory with a question and returns the formatted response.
11. `instruct_text(instruction, text)`: Instructs the text with a command and returns the result.
12. `retrieve_memories(lines, goal)`: Retrieves memories based on the provided lines and an optional goal.
13. `uuidgen()`: Generates a UUID string.
14. `to_int(x)`: Converts the given value to an integer.
15. `config`: Accesses the configuration settings.
16. `len(x)`: Returns the length of the given object.
17. `count_tokens(x)`: Counts the number of tokens in the given text.
18. `print(x)`: Prints the given object (mainly for debugging purposes).

These functions enhance the capabilities of templates, allowing for dynamic and interactive content generation; a sketch of how helpers like these can be wired into a Jinja2 environment follows below.
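As an assumed illustration only (the names mirror the list above, but this is not Talemate's actual wiring), custom helpers are typically exposed to Jinja2 by registering them on the environment's globals:

```python
# Assumed illustration of exposing helper functions to Jinja2 templates.
# The names mirror the list above; Talemate's real wiring may differ.
import random as _random
import uuid

from jinja2 import Environment

env = Environment()
env.globals.update({
    "uuidgen": lambda: str(uuid.uuid4()),              # 13. UUID string
    "random": lambda lo, hi: _random.randint(lo, hi),  # 7. random integer
    "to_int": lambda x: int(x),                        # 14. integer conversion
    "len": len,                                        # 16. length of an object
})

print(env.from_string("{{ uuidgen() }} rolled {{ random(1, 6) }}").render())
```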
### Error Handling

Errors encountered during template rendering are logged and propagated to the user interface. This ensures that users are informed of any issues that may arise, allowing them to troubleshoot and resolve problems effectively.

By following these guidelines, users can create custom templates that tailor the Talemate experience to their specific storytelling needs.
1171 poetry.lock (generated): file diff suppressed because it is too large.
@@ -4,7 +4,7 @@ build-backend = "poetry.masonry.api"

[tool.poetry]
name = "talemate"
version = "0.14.0"
version = "0.16.0"
description = "AI-backed roleplay and narrative tools"
authors = ["FinalWombat"]
license = "GNU Affero General Public License v3.0"

@@ -32,7 +32,7 @@ beautifulsoup4 = "^4.12.2"

python-dotenv = "^1.0.0"
websockets = "^11.0.3"
structlog = "^23.1.0"
runpod = "==1.2.0"
runpod = "^1.2.0"
nest_asyncio = "^1.5.7"
isodate = ">=0.6.1"
thefuzz = ">=0.20.0"
@@ -2,4 +2,4 @@ from .agents import Agent

from .client import TextGeneratorWebuiClient
from .tale_mate import *

VERSION = "0.14.0"
VERSION = "0.16.0"

@@ -1,6 +1,5 @@

from .base import Agent
from .creator import CreatorAgent
from .context import ContextAgent
from .conversation import ConversationAgent
from .director import DirectorAgent
from .memory import ChromaDBMemoryAgent, MemoryAgent
@@ -9,6 +9,7 @@ from blinker import signal

import talemate.instance as instance
import talemate.util as util
from talemate.agents.context import ActiveAgent
from talemate.emit import emit
from talemate.events import GameLoopStartEvent
import talemate.emit.async_signals

@@ -23,21 +24,12 @@ __all__ = [

log = structlog.get_logger("talemate.agents.base")

class CallableConfigValue:
    def __init__(self, fn):
        self.fn = fn

    def __str__(self):
        return "CallableConfigValue"

    def __repr__(self):
        return "CallableConfigValue"

class AgentActionConfig(pydantic.BaseModel):
    type: str
    label: str
    description: str = ""
    value: Union[int, float, str, bool, None]
    value: Union[int, float, str, bool, None] = None
    default_value: Union[int, float, str, bool] = None
    max: Union[int, float, None] = None
    min: Union[int, float, None] = None

@@ -65,11 +57,12 @@ def set_processing(fn):
    """

    async def wrapper(self, *args, **kwargs):
        try:
            await self.emit_status(processing=True)
            return await fn(self, *args, **kwargs)
        finally:
            await self.emit_status(processing=False)
        with ActiveAgent(self, fn):
            try:
                await self.emit_status(processing=True)
                return await fn(self, *args, **kwargs)
            finally:
                await self.emit_status(processing=False)

    wrapper.__name__ = fn.__name__

@@ -85,6 +78,7 @@ class Agent(ABC):
    verbose_name = None
    set_processing = set_processing
    requires_llm_client = True
    auto_break_repetition = False

    @property
    def agent_details(self):

@@ -291,6 +285,22 @@ class Agent(ABC):

            current_memory_context.append(memory)
        return current_memory_context

    # LLM client related methods. These are called during or after the client
    # sends the prompt to the API.

    def inject_prompt_paramters(self, prompt_param: dict, kind: str, agent_function_name: str):
        """
        Injects prompt parameters before the client sends off the prompt

        Override as needed.
        """
        pass

    def allow_repetition_break(self, kind: str, agent_function_name: str, auto: bool = False):
        """
        Returns True if repetition breaking is allowed, False otherwise.
        """
        return False

@dataclasses.dataclass
class AgentEmission:
@@ -1,54 +1,33 @@

from .base import Agent
from .registry import register

from typing import Callable, TYPE_CHECKING
import contextvars
import pydantic

@register
class ContextAgent(Agent):
    """
    Agent that helps retrieve context for the continuation
    of dialogue.
    """

__all__ = [
    "active_agent",
]

    agent_type = "context"

active_agent = contextvars.ContextVar("active_agent", default=None)

    def __init__(self, client, **kwargs):
        self.client = client

class ActiveAgentContext(pydantic.BaseModel):
    agent: object
    fn: Callable

    class Config:
        arbitrary_types_allowed = True

    @property
    def action(self):
        return self.fn.__name__

    def determine_questions(self, scene_text):
        prompt = [
            "You are tasked to continue the following dialogue in a roleplaying session, but before you can do so you can ask three questions for extra context."
            "",
            "What are the questions you would ask?",
            "",
            "Known context and dialogue:" "",
            scene_text,
            "",
            "Questions:",
            "",
        ]

        prompt = "\n".join(prompt)

        questions = self.client.send_prompt(prompt, kind="question")

        questions = self.clean_result(questions)

        return questions.split("\n")

    def get_answer(self, question, context):
        prompt = [
            "Read the context and answer the question:",
            "",
            "Context:",
            "",
            context,
            "",
            f"Question: {question}",
            "Answer:",
        ]

        prompt = "\n".join(prompt)

        answer = self.client.send_prompt(prompt, kind="answer")
        answer = self.clean_result(answer)
        return answer

class ActiveAgent:

    def __init__(self, agent, fn):
        self.agent = ActiveAgentContext(agent=agent, fn=fn)

    def __enter__(self):
        self.token = active_agent.set(self.agent)

    def __exit__(self, *args, **kwargs):
        active_agent.reset(self.token)
        return False
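The `ActiveAgent` context manager above follows the standard `contextvars` set/reset token pattern; a minimal generic sketch of the same mechanics (illustrative, not Talemate code):

```python
# Generic illustration of the contextvars pattern used by ActiveAgent above.
import contextvars

active = contextvars.ContextVar("active", default=None)

class Activate:
    def __init__(self, value):
        self.value = value

    def __enter__(self):
        # remember the token so the previous value can be restored
        self.token = active.set(self.value)

    def __exit__(self, *exc):
        active.reset(self.token)
        return False

with Activate("narrator"):
    print(active.get())  # "narrator"
print(active.get())      # None
```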
@@ -85,20 +85,25 @@ class ConversationAgent(Agent):

                "instructions": AgentActionConfig(
                    type="text",
                    label="Instructions",
                    value="1-3 sentences.",
                    value="Write 1-3 sentences. Never wax poetic.",
                    description="Extra instructions to give the AI for dialog generatrion.",
                ),
                "jiggle": AgentActionConfig(
                    type="number",
                    label="Jiggle",
                    label="Jiggle (Increased Randomness)",
                    description="If > 0.0 will cause certain generation parameters to have a slight random offset applied to them. The bigger the number, the higher the potential offset.",
                    value=0.0,
                    min=0.0,
                    max=1.0,
                    step=0.1,
                ),
                )
            }
        ),
        "auto_break_repetition": AgentAction(
            enabled = True,
            label = "Auto Break Repetition",
            description = "Will attempt to automatically break AI repetition.",
        ),
        "natural_flow": AgentAction(
            enabled = True,
            label = "Natural Flow",

@@ -131,7 +136,7 @@ class ConversationAgent(Agent):
            config = {
                "ai_selected": AgentActionConfig(
                    type="bool",
                    label="AI Selected",
                    label="AI memory retrieval",
                    description="If enabled, the AI will select the long term memory to use. (will increase how long it takes to generate a response)",
                    value=False,
                ),

@@ -534,3 +539,11 @@ class ConversationAgent(Agent):

        actor.scene.push_history(messages)

        return messages

    def allow_repetition_break(self, kind: str, agent_function_name: str, auto: bool = False):

        if auto and not self.actions["auto_break_repetition"].enabled:
            return False

        return agent_function_name == "converse"
@@ -156,6 +156,12 @@ class EditorAgent(Agent):

            message = content.split(character_prefix)[1]
            content = f"{character_prefix}*{message.strip('*')}*"
            return content
        elif '"' in content:
            # if both are present we strip the * and add them back later
            # through ensure_dialog_format - right now most LLMs aren't
            # smart enough to do quotes and italics at the same time consistently
            # especially throughout long conversations
            content = content.replace('*', '')

        content = util.clean_dialogue(content, main_name=character.name)
        content = util.strip_partial_sentences(content)
@@ -6,10 +6,14 @@ from typing import TYPE_CHECKING, Callable, List, Optional, Union

from chromadb.config import Settings

import talemate.events as events
import talemate.util as util
from talemate.emit import emit
from talemate.emit.signals import handlers
from talemate.context import scene_is_loading
from talemate.config import load_config
from talemate.agents.base import set_processing
import structlog
import shutil
import functools

try:
    import chromadb

@@ -57,6 +61,15 @@ class MemoryAgent(Agent):
        self.scene = scene
        self.memory_tracker = {}
        self.config = load_config()

        handlers["config_saved"].connect(self.on_config_saved)

    def on_config_saved(self, event):
        openai_key = self.openai_api_key
        self.config = load_config()
        if openai_key != self.openai_api_key:
            loop = asyncio.get_running_loop()
            loop.run_until_complete(self.emit_status())

    async def set_db(self):
        raise NotImplementedError()

@@ -67,33 +80,43 @@ class MemoryAgent(Agent):
    async def count(self):
        raise NotImplementedError()

    @set_processing
    async def add(self, text, character=None, uid=None, ts:str=None, **kwargs):
        if not text:
            return
        if self.readonly:
            log.debug("memory agent", status="readonly")
            return
        await self._add(text, character=character, uid=uid, ts=ts, **kwargs)

        loop = asyncio.get_running_loop()

        await loop.run_in_executor(None, functools.partial(self._add, text, character, uid=uid, ts=ts, **kwargs))

    async def _add(self, text, character=None, ts:str=None, **kwargs):
    def _add(self, text, character=None, ts:str=None, **kwargs):
        raise NotImplementedError()

    @set_processing
    async def add_many(self, objects: list[dict]):
        if self.readonly:
            log.debug("memory agent", status="readonly")
            return
        await self._add_many(objects)

        loop = asyncio.get_running_loop()
        await loop.run_in_executor(None, self._add_many, objects)

    async def _add_many(self, objects: list[dict]):
    def _add_many(self, objects: list[dict]):
        """
        Add multiple objects to the memory
        """
        raise NotImplementedError()

    @set_processing
    async def get(self, text, character=None, **query):
        return await self._get(str(text), character, **query)
        loop = asyncio.get_running_loop()

        return await loop.run_in_executor(None, functools.partial(self._get, text, character, **query))

    async def _get(self, text, character=None, **query):
    def _get(self, text, character=None, **query):
        raise NotImplementedError()

    def get_document(self, id):

@@ -140,6 +163,10 @@ class MemoryAgent(Agent):
        """

        memory_context = []

        if not query:
            return memory_context

        for memory in await self.get(query):
            if memory in memory_context:
                continue

@@ -179,6 +206,10 @@ class MemoryAgent(Agent):

        memory_context = []
        for query in queries:

            if not query:
                continue

            i = 0
            for memory in await self.get(formatter(query), limit=iterate, **where):
                if memory in memory_context:

@@ -210,6 +241,10 @@ class ChromaDBMemoryAgent(MemoryAgent):

    @property
    def ready(self):

        if self.embeddings == "openai" and not self.openai_api_key:
            return False

        if getattr(self, "db_client", None):
            return True
        return False

@@ -218,10 +253,18 @@ class ChromaDBMemoryAgent(MemoryAgent):
    def status(self):
        if self.ready:
            return "active" if not getattr(self, "processing", False) else "busy"

        if self.embeddings == "openai" and not self.openai_api_key:
            return "error"

        return "waiting"

    @property
    def agent_details(self):

        if self.embeddings == "openai" and not self.openai_api_key:
            return "No OpenAI API key set"

        return f"ChromaDB: {self.embeddings}"

    @property

@@ -266,6 +309,10 @@ class ChromaDBMemoryAgent(MemoryAgent):
    def db_name(self):
        return getattr(self, "collection_name", "<unnamed>")

    @property
    def openai_api_key(self):
        return self.config.get("openai",{}).get("api_key")

    def make_collection_name(self, scene):

        if self.USE_OPENAI:

@@ -286,17 +333,19 @@ class ChromaDBMemoryAgent(MemoryAgent):
        await asyncio.sleep(0)
        return self.db.count()

    @set_processing
    async def set_db(self):
        await self.emit_status(processing=True)

        loop = asyncio.get_running_loop()
        await loop.run_in_executor(None, self._set_db)

    def _set_db(self):
        if not getattr(self, "db_client", None):
            log.info("chromadb agent", status="setting up db client to persistent db")
            self.db_client = chromadb.PersistentClient(
                settings=Settings(anonymized_telemetry=False)
            )

        openai_key = self.config.get("openai").get("api_key") or os.environ.get("OPENAI_API_KEY")
        openai_key = self.openai_api_key

        self.collection_name = collection_name = self.make_collection_name(self.scene)

@@ -341,8 +390,6 @@ class ChromaDBMemoryAgent(MemoryAgent):
        self.db = self.db_client.get_or_create_collection(collection_name)

        self.scene._memory_never_persisted = self.db.count() == 0

        await self.emit_status(processing=False)
        log.info("chromadb agent", status="db ready")

    def clear_db(self):

@@ -383,12 +430,10 @@ class ChromaDBMemoryAgent(MemoryAgent):

        self.db = None

    async def _add(self, text, character=None, uid=None, ts:str=None, **kwargs):
    def _add(self, text, character=None, uid=None, ts:str=None, **kwargs):
        metadatas = []
        ids = []

        await self.emit_status(processing=True)

        if character:
            meta = {"character": character.name, "source": "talemate"}
            if ts:

@@ -413,17 +458,13 @@ class ChromaDBMemoryAgent(MemoryAgent):
        #log.debug("chromadb agent add", text=text, meta=meta, id=id)

        self.db.upsert(documents=[text], metadatas=metadatas, ids=ids)

        await self.emit_status(processing=False)

    async def _add_many(self, objects: list[dict]):
    def _add_many(self, objects: list[dict]):

        documents = []
        metadatas = []
        ids = []

        await self.emit_status(processing=True)

        for obj in objects:
            documents.append(obj["text"])
            meta = obj.get("meta", {})

@@ -436,11 +477,7 @@ class ChromaDBMemoryAgent(MemoryAgent):
            ids.append(uid)
        self.db.upsert(documents=documents, metadatas=metadatas, ids=ids)

        await self.emit_status(processing=False)

    async def _get(self, text, character=None, limit:int=15, **kwargs):
        await self.emit_status(processing=True)

    def _get(self, text, character=None, limit:int=15, **kwargs):
        where = {}
        where.setdefault("$and", [])

@@ -480,7 +517,7 @@ class ChromaDBMemoryAgent(MemoryAgent):
        if distance < 1:

            try:
                log.debug("chromadb agent get", ts=ts, scene_ts=self.scene.ts)
                #log.debug("chromadb agent get", ts=ts, scene_ts=self.scene.ts)
                date_prefix = util.iso8601_diff_to_human(ts, self.scene.ts)
            except Exception as e:
                log.error("chromadb agent", error="failed to get date prefix", details=e, ts=ts, scene_ts=self.scene.ts)

@@ -497,6 +534,4 @@ class ChromaDBMemoryAgent(MemoryAgent):
            if len(results) > limit:
                break

        await self.emit_status(processing=False)

        return results
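The change above moves the blocking ChromaDB calls behind `loop.run_in_executor` so they no longer stall the event loop; a generic, self-contained sketch of that pattern (illustrative, not Talemate code):

```python
# Generic illustration of offloading a blocking call to a thread, with
# functools.partial used to forward keyword arguments (as in the diff above).
import asyncio
import functools

def blocking_get(text: str, limit: int = 15) -> list[str]:
    # stand-in for a synchronous ChromaDB query
    return [f"memory for {text!r}"][:limit]

async def get(text: str, **query) -> list[str]:
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, functools.partial(blocking_get, text, **query))

print(asyncio.run(get("dragon", limit=5)))
```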
@@ -68,6 +68,24 @@ class NarratorAgent(Agent):

        # agent actions

        self.actions = {
            "generation_override": AgentAction(
                enabled = True,
                label = "Generation Override",
                description = "Override generation parameters",
                config = {
                    "instructions": AgentActionConfig(
                        type="text",
                        label="Instructions",
                        value="Never wax poetic.",
                        description="Extra instructions to give to the AI for narrative generation.",
                    ),
                }
            ),
            "auto_break_repetition": AgentAction(
                enabled = True,
                label = "Auto Break Repetition",
                description = "Will attempt to automatically break AI repetition.",
            ),
            "narrate_time_passage": AgentAction(enabled=True, label="Narrate Time Passage", description="Whenever you indicate passage of time, narrate right after"),
            "narrate_dialogue": AgentAction(
                enabled=True,

@@ -92,10 +110,22 @@ class NarratorAgent(Agent):
                        max=1.0,
                        step=0.1,
                    ),
                    "generate_dialogue": AgentActionConfig(
                        type="bool",
                        label="Allow Dialogue in Narration",
                        description="Allow the narrator to generate dialogue in narration",
                        value=False,
                    ),
                }
            ),
        }

    @property
    def extra_instructions(self):
        if self.actions["generation_override"].enabled:
            return self.actions["generation_override"].config["instructions"].value
        return ""

    def clean_result(self, result):

        """

@@ -153,16 +183,22 @@ class NarratorAgent(Agent):

        if not self.actions["narrate_dialogue"].enabled:
            return

        narrate_on_ai_chance = self.actions["narrate_dialogue"].config["ai_dialog"].value
        narrate_on_player_chance = self.actions["narrate_dialogue"].config["player_dialog"].value
        narrate_on_ai = random.random() < narrate_on_ai_chance
        narrate_on_player = random.random() < narrate_on_player_chance
        log.debug(
            "narrate on dialog",
            narrate_on_ai=narrate_on_ai,
            narrate_on_ai_chance=narrate_on_ai_chance,
            narrate_on_player=narrate_on_player,
            narrate_on_player_chance=narrate_on_player_chance,
        )

        narrate_on_ai_chance = random.random() < self.actions["narrate_dialogue"].config["ai_dialog"].value
        narrate_on_player_chance = random.random() < self.actions["narrate_dialogue"].config["player_dialog"].value

        log.debug("narrate on dialog", narrate_on_ai_chance=narrate_on_ai_chance, narrate_on_player_chance=narrate_on_player_chance)

        if event.actor.character.is_player and not narrate_on_player_chance:
        if event.actor.character.is_player and not narrate_on_player:
            return

        if not event.actor.character.is_player and not narrate_on_ai_chance:
        if not event.actor.character.is_player and not narrate_on_ai:
            return

        response = await self.narrate_after_dialogue(event.actor.character)

@@ -183,6 +219,7 @@ class NarratorAgent(Agent):
            vars = {
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
                "extra_instructions": self.extra_instructions,
            }
        )

@@ -200,22 +237,11 @@ class NarratorAgent(Agent):
        """

        scene = self.scene
        director = scene.get_helper("director").agent
        pc = scene.get_player_character()
        npcs = list(scene.get_npc_characters())
        npc_names = ", ".join([npc.name for npc in npcs])

        #summarized_history = await scene.summarized_dialogue_history(
        #    budget = self.client.max_token_length - 300,
        #    min_dialogue = 50,
        #)

        #augmented_context = await self.augment_context()

        if narrative_direction is None:
            #narrative_direction = await director.direct_narrative(
            #    scene.context_history(budget=self.client.max_token_length - 500, min_dialogue=20),
            #)
            narrative_direction = "Slightly move the current scene forward."

        self.scene.log.info("narrative_direction", narrative_direction=narrative_direction)

@@ -226,13 +252,12 @@ class NarratorAgent(Agent):
            "narrate",
            vars = {
                "scene": self.scene,
                #"summarized_history": summarized_history,
                #"augmented_context": augmented_context,
                "max_tokens": self.client.max_token_length,
                "narrative_direction": narrative_direction,
                "player_character": pc,
                "npcs": npcs,
                "npc_names": npc_names,
                "extra_instructions": self.extra_instructions,
            }
        )

@@ -263,6 +288,7 @@ class NarratorAgent(Agent):
                "query": query,
                "at_the_end": at_the_end,
                "as_narrative": as_narrative,
                "extra_instructions": self.extra_instructions,
            }
        )
        log.info("narrate_query", response=response)

@@ -299,6 +325,7 @@ class NarratorAgent(Agent):
                "character": character,
                "max_tokens": self.client.max_token_length,
                "memory": memory_context,
                "extra_instructions": self.extra_instructions,
            }
        )

@@ -323,6 +350,7 @@ class NarratorAgent(Agent):
            vars = {
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
                "extra_instructions": self.extra_instructions,
            }
        )

@@ -343,6 +371,7 @@ class NarratorAgent(Agent):
                "max_tokens": self.client.max_token_length,
                "memory": memory_context,
                "questions": questions,
                "extra_instructions": self.extra_instructions,
            }
        )

@@ -368,6 +397,7 @@ class NarratorAgent(Agent):
                "max_tokens": self.client.max_token_length,
                "duration": duration,
                "narrative": narrative,
                "extra_instructions": self.extra_instructions,
            }
        )

@@ -393,7 +423,8 @@ class NarratorAgent(Agent):
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
                "character": character,
                "last_line": str(self.scene.history[-1])
                "last_line": str(self.scene.history[-1]),
                "extra_instructions": self.extra_instructions,
            }
        )

@@ -401,5 +432,28 @@ class NarratorAgent(Agent):

        response = self.clean_result(response.strip().strip("*"))
        response = f"*{response}*"

        allow_dialogue = self.actions["narrate_dialogue"].config["generate_dialogue"].value

        if not allow_dialogue:
            response = response.split('"')[0].strip()
            response = response.replace("*", "")
            response = util.strip_partial_sentences(response)
            response = f"*{response}*"

            return response
        return response

    # LLM client related methods. These are called during or after the client

    def inject_prompt_paramters(self, prompt_param: dict, kind: str, agent_function_name: str):
        log.debug("inject_prompt_paramters", prompt_param=prompt_param, kind=kind, agent_function_name=agent_function_name)
        character_names = [f"\n{c.name}:" for c in self.scene.get_characters()]
        if prompt_param.get("extra_stopping_strings") is None:
            prompt_param["extra_stopping_strings"] = []
        prompt_param["extra_stopping_strings"] += character_names

    def allow_repetition_break(self, kind: str, agent_function_name: str, auto:bool=False):
        if auto and not self.actions["auto_break_repetition"].enabled:
            return False

        return True
@@ -15,7 +15,9 @@ from nltk.tokenize import sent_tokenize

import talemate.config as config
import talemate.emit.async_signals
import talemate.instance as instance
from talemate.emit import emit
from talemate.emit.signals import handlers
from talemate.events import GameLoopNewMessageEvent
from talemate.scene_message import CharacterMessage, NarratorMessage

@@ -38,7 +40,6 @@ if not TTS:
    # so we don't want to require it unless the user wants to use it
    log.info("TTS (local) requires the TTS package, please install with `pip install TTS` if you want to use the local api")

nltk.download("punkt")

def parse_chunks(text):

@@ -56,10 +57,25 @@ def parse_chunks(text):

    for i, chunk in enumerate(cleaned_chunks):
        chunk = chunk.replace("__ellipsis__", "...")

        cleaned_chunks[i] = chunk

    return cleaned_chunks

def clean_quotes(chunk:str):

    # if there is an uneven number of quotes, remove the last one if its
    # at the end of the chunk. If its in the middle, add a quote to the end
    if chunk.count('"') % 2 == 1:

        if chunk.endswith('"'):
            chunk = chunk[:-1]
        else:
            chunk += '"'

    return chunk


def rejoin_chunks(chunks:list[str], chunk_size:int=250):

    """

@@ -74,14 +90,13 @@ def rejoin_chunks(chunks:list[str], chunk_size:int=250):
    for chunk in chunks:

        if len(current_chunk) + len(chunk) > chunk_size:
            joined_chunks.append(current_chunk)
            joined_chunks.append(clean_quotes(current_chunk))
            current_chunk = ""

        current_chunk += chunk

    if current_chunk:
        joined_chunks.append(current_chunk)

        joined_chunks.append(clean_quotes(current_chunk))
    return joined_chunks

@@ -104,7 +119,7 @@ class TTSAgent(Agent):
    """

    agent_type = "tts"
    verbose_name = "Text to speech"
    verbose_name = "Voice"
    requires_llm_client = False

    @classmethod

@@ -121,6 +136,7 @@ class TTSAgent(Agent):
    def __init__(self, **kwargs):

        self.is_enabled = False
        nltk.download("punkt", quiet=True)

        self.voices = {
            "elevenlabs": VoiceLibrary(api="elevenlabs"),

@@ -175,7 +191,7 @@ class TTSAgent(Agent):
            ),
            "generate_chunks": AgentActionConfig(
                type="bool",
                value=True,
                value=False,
                label="Split generation",
                description="Generate audio chunks for each sentence - will be much more responsive but may loose context to inform inflection",
            )

@@ -184,6 +200,7 @@ class TTSAgent(Agent):
        }

        self.actions["_config"].model_dump()
        handlers["config_saved"].connect(self.on_config_saved)


    @property

@@ -274,6 +291,7 @@ class TTSAgent(Agent):
        if self.api == "tts":
            if not TTS:
                return "error"
            return "uninitialized"

    @property
    def max_generation_length(self):

@@ -309,12 +327,17 @@ class TTSAgent(Agent):
        super().connect(scene)
        talemate.emit.async_signals.get("game_loop_new_message").connect(self.on_game_loop_new_message)

    def on_config_saved(self, event):
        config = event.data
        self.config = config
        instance.emit_agent_status(self.__class__, self)

    async def on_game_loop_new_message(self, emission:GameLoopNewMessageEvent):
        """
        Called when a conversation is generated
        """

        if not self.enabled:
        if not self.enabled or not self.ready:
            return

        if not isinstance(emission.message, (CharacterMessage, NarratorMessage)):

@@ -362,21 +385,20 @@ class TTSAgent(Agent):

        library = self.voices[self.api]

        log.info("Listing voices", api=self.api, last_synced=library.last_synced)

        # TODO: allow re-syncing voices
        if library.last_synced:
            return library.voices

        list_fn = getattr(self, f"_list_voices_{self.api}")
        log.info("Listing voices", api=self.api)

        library.voices = await list_fn()
        library.last_synced = time.time()

        # if the current voice cannot be found, reset it
        if not self.voice(self.default_voice_id):
            self.actions["_config"].config["voice_id"].value = ""


        # set loading to false
        return library.voices

@@ -471,7 +493,7 @@ class TTSAgent(Agent):
        }
        data = {
            "text": text,
            "model_id": "eleven_monolingual_v1",
            "model_id": self.config.get("elevenlabs",{}).get("model"),
            "voice_settings": {
                "stability": 0.5,
                "similarity_boost": 0.5
@@ -17,7 +17,7 @@ import talemate.client.system_prompts as system_prompts

import talemate.util as util
from talemate.client.context import client_context_attribute
from talemate.client.model_prompts import model_prompt

from talemate.agents.context import active_agent

# Set up logging level for httpx to WARNING to suppress debug logs.
logging.getLogger('httpx').setLevel(logging.WARNING)

@@ -37,10 +37,10 @@ class ClientBase:
    enabled: bool = True
    current_status: str = None
    max_token_length: int = 4096
    randomizable_inference_parameters: list[str] = ["temperature"]
    processing: bool = False
    connected: bool = False
    conversation_retries: int = 5
    auto_break_repetition_enabled: bool = True

    client_type = "base"

@@ -74,6 +74,17 @@ class ClientBase:

        return model_prompt(self.model_name, sys_msg, prompt)

    def has_prompt_template(self):
        if not self.model_name:
            return False

        return model_prompt.exists(self.model_name)

    def prompt_template_example(self):
        if not self.model_name:
            return None
        return model_prompt(self.model_name, "sysmsg", "prompt<|BOT|>{LLM coercion}")

    def reconfigure(self, **kwargs):

        """

@@ -142,6 +153,8 @@ class ClientBase:
            return system_prompts.EDITOR
        if "world_state" in kind:
            return system_prompts.WORLD_STATE
        if "analyze_freeform" in kind:
            return system_prompts.ANALYST_FREEFORM
        if "analyst" in kind:
            return system_prompts.ANALYST
        if "analyze" in kind:

@@ -181,6 +194,10 @@ class ClientBase:
            id=self.name,
            details=model_name,
            status=status,
            data={
                "prompt_template_example": self.prompt_template_example(),
                "has_prompt_template": self.has_prompt_template(),
            }
        )

        if status_change:

@@ -244,6 +261,10 @@ class ClientBase:
        fn_tune_kind = getattr(self, f"tune_prompt_parameters_{kind}", None)
        if fn_tune_kind:
            fn_tune_kind(parameters)

        agent_context = active_agent.get()
        if agent_context.agent:
            agent_context.agent.inject_prompt_paramters(parameters, kind, agent_context.action)

    def tune_prompt_parameters_conversation(self, parameters:dict):
        conversation_context = client_context_attribute("conversation")

@@ -275,7 +296,7 @@ class ClientBase:
        return ""

    async def send_prompt(
        self, prompt: str, kind: str = "conversation", finalize: Callable = lambda x: x
        self, prompt: str, kind: str = "conversation", finalize: Callable = lambda x: x, retries:int=2
    ) -> str:
        """
        Send a prompt to the AI and return its response.

@@ -298,8 +319,14 @@ class ClientBase:
            time_start = time.time()
            extra_stopping_strings = prompt_param.pop("extra_stopping_strings", [])

            self.log.debug("send_prompt", token_length=token_length, max_token_length=self.max_token_length, parameters=prompt_param)
            response = await self.generate(finalized_prompt, prompt_param, kind)
            self.log.debug("send_prompt", token_length=token_length, max_token_length=self.max_token_length, parameters=prompt_param)
            response = await self.generate(
                self.repetition_adjustment(finalized_prompt),
                prompt_param,
                kind
            )

            response, finalized_prompt = await self.auto_break_repetition(finalized_prompt, prompt_param, response, kind, retries)

            time_end = time.time()

@@ -325,6 +352,125 @@ class ClientBase:
        finally:
            self.emit_status(processing=False)


    async def auto_break_repetition(
        self,
        finalized_prompt:str,
        prompt_param:dict,
        response:str,
        kind:str,
        retries:int,
        pad_max_tokens:int=32,
    ) -> str:

        """
        If repetition breaking is enabled, this will retry the prompt if its
        response is too similar to other messages in the prompt

        This requires the agent to have the allow_repetition_break method
        and the jiggle_enabled_for method and the client to have the
        auto_break_repetition_enabled attribute set to True

        Arguments:

        - finalized_prompt: the prompt that was sent
        - prompt_param: the parameters that were used
        - response: the response that was received
        - kind: the kind of generation
        - retries: the number of retries left
        - pad_max_tokens: increase response max_tokens by this amount per iteration

        Returns:

        - the response
        """

        if not self.auto_break_repetition_enabled:
            return response, finalized_prompt

        agent_context = active_agent.get()
        if self.jiggle_enabled_for(kind, auto=True):

            # check if the response is a repetition
            # using the default similarity threshold of 98, meaning it needs
            # to be really similar to be considered a repetition

            is_repetition, similarity_score, matched_line = util.similarity_score(
                response,
                finalized_prompt.split("\n"),
            )

            if not is_repetition:

                # not a repetition, return the response

                self.log.debug("send_prompt no similarity", similarity_score=similarity_score)
                return response, finalized_prompt

            while is_repetition and retries > 0:

                # it's a repetition, retry the prompt with adjusted parameters

                self.log.warn(
                    "send_prompt similarity retry",
                    agent=agent_context.agent.agent_type,
                    similarity_score=similarity_score,
                    retries=retries
                )

                # first we apply the client's randomness jiggle which will adjust
                # parameters like temperature and repetition_penalty, depending
                # on the client
                #
                # this is a cumulative adjustment, so it will add to the previous
                # iteration's adjustment, this also means retries should be kept low
                # otherwise it will get out of hand and start generating nonsense

                self.jiggle_randomness(prompt_param, offset=0.5)

                # then we pad the max_tokens by the pad_max_tokens amount

                prompt_param["max_tokens"] += pad_max_tokens

                # send the prompt again
                # we use the repetition_adjustment method to further encourage
                # the AI to break the repetition on its own as well.

                finalized_prompt = self.repetition_adjustment(finalized_prompt, is_repetitive=True)

                response = retried_response = await self.generate(
                    finalized_prompt,
                    prompt_param,
                    kind
                )

                self.log.debug("send_prompt dedupe sentences", response=response, matched_line=matched_line)

                # a lot of the times the response will now contain the repetition + something new
                # so we dedupe the response to remove the repetition on sentence level

                response = util.dedupe_sentences(response, matched_line, similarity_threshold=85, debug=True)
                self.log.debug("send_prompt dedupe sentences (after)", response=response)

                # deduping may have removed the entire response, so we check for that

                if not util.strip_partial_sentences(response).strip():

                    # if the response is empty, we set the response to the original
                    # and try again next loop

                    response = retried_response

                # check if the response is a repetition again

                is_repetition, similarity_score, matched_line = util.similarity_score(
                    response,
                    finalized_prompt.split("\n"),
                )
                retries -= 1

        return response, finalized_prompt

    def count_tokens(self, content:str):
        return util.count_tokens(content)

@@ -338,12 +484,35 @@ class ClientBase:
        min_offset = offset * 0.3
        prompt_config["temperature"] = random.uniform(temp + min_offset, temp + offset)

    def jiggle_enabled_for(self, kind:str):
    def jiggle_enabled_for(self, kind:str, auto:bool=False) -> bool:

        if kind in ["conversation", "story"]:
            return True

        agent_context = active_agent.get()
        agent = agent_context.agent

        if kind.startswith("narrate"):
            return True

        if not agent:
            return False

        return False
        return agent.allow_repetition_break(kind, agent_context.action, auto=auto)

    def repetition_adjustment(self, prompt:str, is_repetitive:bool=False):
        """
        Breaks the prompt into lines and checks each line for a match with
        [$REPETITION|{repetition_adjustment}].

        On match and if is_repetitive is True, the line is removed from the prompt and
        replaced with the repetition_adjustment.

        On match and if is_repetitive is False, the line is removed from the prompt.
        """

        lines = prompt.split("\n")
        new_lines = []

        for line in lines:
            if line.startswith("[$REPETITION|"):
                if is_repetitive:
                    new_lines.append(line.split("|")[1][:-1])
            else:
                new_lines.append(line)

        return "\n".join(new_lines)
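To make the `[$REPETITION|...]` convention concrete, here is a small standalone sketch of the transformation the docstring above describes (an illustrative re-implementation):

```python
# Standalone sketch of the [$REPETITION|...] handling described above.
def repetition_adjustment(prompt: str, is_repetitive: bool = False) -> str:
    new_lines = []
    for line in prompt.split("\n"):
        if line.startswith("[$REPETITION|"):
            if is_repetitive:
                # keep only the adjustment text between "|" and the closing "]"
                new_lines.append(line.split("|")[1][:-1])
            # otherwise the marker line is dropped entirely
        else:
            new_lines.append(line)
    return "\n".join(new_lines)

prompt = "Scene so far...\n[$REPETITION|Write something completely new.]"
print(repetition_adjustment(prompt))                      # marker line removed
print(repetition_adjustment(prompt, is_repetitive=True))  # adjustment text kept
```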
@@ -39,6 +39,9 @@ class ModelPrompt:
            "set_response" : self.set_response
        })

    def exists(self, model_name:str):
        return bool(self.get_template(model_name))

    def set_response(self, prompt:str, response_str:str):

        prompt = prompt.strip("\n").strip()
@@ -6,7 +6,10 @@ from openai import AsyncOpenAI
from talemate.client.base import ClientBase
from talemate.client.registry import register
from talemate.emit import emit
+from talemate.emit.signals import handlers
import talemate.emit.async_signals as async_signals
+from talemate.config import load_config
import talemate.instance as instance
import talemate.client.system_prompts as system_prompts
import structlog
import tiktoken

@@ -75,39 +78,40 @@ class OpenAIClient(ClientBase):

    client_type = "openai"
    conversation_retries = 0
+    auto_break_repetition_enabled = False

    def __init__(self, model="gpt-4-1106-preview", **kwargs):

        self.model_name = model
+        self.api_key_status = None
+        self.config = load_config()
        super().__init__(**kwargs)

-        # if os.environ.get("OPENAI_API_KEY") is not set, look in the config file
-        # and set it
-
-        if not os.environ.get("OPENAI_API_KEY"):
-            if self.config.get("openai", {}).get("api_key"):
-                os.environ["OPENAI_API_KEY"] = self.config["openai"]["api_key"]
-
        self.set_client()

+        handlers["config_saved"].connect(self.on_config_saved)

    @property
    def openai_api_key(self):
-        return os.environ.get("OPENAI_API_KEY")
+        return self.config.get("openai",{}).get("api_key")

    def emit_status(self, processing: bool = None):
        if processing is not None:
            self.processing = processing

-        if os.environ.get("OPENAI_API_KEY"):
+        if self.openai_api_key:
            status = "busy" if self.processing else "idle"
-            model_name = self.model_name or "No model loaded"
+            model_name = self.model_name
        else:
            status = "error"
            model_name = "No API key set"

+        if not self.model_name:
+            status = "error"
+            model_name = "No model loaded"

        self.current_status = status

        emit(
@@ -121,12 +125,17 @@ class OpenAIClient(ClientBase):
    def set_client(self, max_token_length:int=None):

        if not self.openai_api_key:
+            self.client = AsyncOpenAI(api_key="sk-1111")
            log.error("No OpenAI API key set")
+            if self.api_key_status:
+                self.api_key_status = False
+                emit('request_client_status')
+                emit('request_agent_status')
            return

        model = self.model_name

-        self.client = AsyncOpenAI()
+        self.client = AsyncOpenAI(api_key=self.openai_api_key)
        if model == "gpt-3.5-turbo":
            self.max_token_length = min(max_token_length or 4096, 4096)
        elif model == "gpt-4":

@@ -138,12 +147,27 @@
        else:
            self.max_token_length = max_token_length or 2048

-        if not self.api_key_status:
+        if self.api_key_status is False:
            emit('request_client_status')
            emit('request_agent_status')
+        self.api_key_status = True

        log.info("openai set client")

    def reconfigure(self, **kwargs):
        if "model" in kwargs:
            self.model_name = kwargs["model"]
            self.set_client(kwargs.get("max_token_length"))

+    def on_config_saved(self, event):
+        config = event.data
+        self.config = config
+        self.set_client()

    def count_tokens(self, content: str):
+        if not self.model_name:
+            return 0
        return num_tokens_from_messages([{"content": content}], model=self.model_name)

    async def status(self):
@@ -179,6 +203,9 @@ class OpenAIClient(ClientBase):
        Generates text from the given prompt and parameters.
        """

+        if not self.openai_api_key:
+            raise Exception("No OpenAI API key set")
+
        # only gpt-4-1106-preview supports json_object response coercion
        supports_json_object = self.model_name in ["gpt-4-1106-preview"]
        right = None

@@ -187,7 +214,7 @@
            expected_response = right.strip()
            if expected_response.startswith("{") and supports_json_object:
                parameters["response_format"] = {"type": "json_object"}
-        except IndexError:
+        except (IndexError, ValueError):
            pass

        human_message = {'role': 'user', 'content': prompt.strip()}

@@ -208,5 +235,4 @@
            return response

        except Exception as e:
            self.log.error("generate error", e=e)
-            return ""
+            raise
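The `response_format` coercion above only fires when the prepared response (the text after `<|BOT|>`) opens a JSON object and the model supports it. A sketch of that decision in isolation; the prompt string is made up:

```python
model_name = "gpt-4-1106-preview"
prompt = 'List the NPCs as JSON.<|BOT|>{"npcs":'

parameters = {}
supports_json_object = model_name in ["gpt-4-1106-preview"]
try:
    right = prompt.split("<|BOT|>")[1]
    if right.strip().startswith("{") and supports_json_object:
        parameters["response_format"] = {"type": "json_object"}
except (IndexError, ValueError):
    pass  # no <|BOT|> marker in the prompt; send it unmodified

print(parameters)  # {'response_format': {'type': 'json_object'}}
```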
@@ -6,6 +6,8 @@ import os
from pydantic import BaseModel
from typing import Optional, Dict, Union

+from talemate.emit import emit
+
log = structlog.get_logger("talemate.config")

class Client(BaseModel):

@@ -20,7 +22,7 @@

class AgentActionConfig(BaseModel):
-    value: Union[int, float, str, bool]
+    value: Union[int, float, str, bool, None] = None

class AgentAction(BaseModel):
    enabled: bool = True

@@ -42,17 +44,17 @@ class Agent(BaseModel):
        return super().model_dump(exclude_none=True)

class GamePlayerCharacter(BaseModel):
-    name: str
-    color: str
-    gender: str
-    description: Optional[str]
+    name: str = ""
+    color: str = "#3362bb"
+    gender: str = ""
+    description: Optional[str] = ""

    class Config:
        extra = "ignore"


class Game(BaseModel):
-    default_player_character: GamePlayerCharacter
+    default_player_character: GamePlayerCharacter = GamePlayerCharacter()

    class Config:
        extra = "ignore"

@@ -68,6 +70,7 @@ class RunPodConfig(BaseModel):

class ElevenLabsConfig(BaseModel):
    api_key: Union[str,None]=None
+    model: str = "eleven_turbo_v2"

class CoquiConfig(BaseModel):
    api_key: Union[str,None]=None

@@ -157,4 +160,6 @@ def save_config(config, file_path: str = "./config.yaml"):
        return None

    with open(file_path, "w") as file:
        yaml.dump(config, file)
+
+    emit("config_saved", data=config)
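The practical effect of the new defaults is that a bare or partial config file now validates instead of raising. A quick check of the behavior, with the two models recreated here for illustration (pydantic v2 assumed, matching the `model_dump` usage above):

```python
from typing import Optional
from pydantic import BaseModel

class GamePlayerCharacter(BaseModel):
    name: str = ""
    color: str = "#3362bb"
    gender: str = ""
    description: Optional[str] = ""

    class Config:
        extra = "ignore"

class Game(BaseModel):
    default_player_character: GamePlayerCharacter = GamePlayerCharacter()

Game()  # validates even with no data at all

# unknown keys are silently dropped thanks to extra = "ignore"
pc = GamePlayerCharacter(name="Elara", legacy_field="dropped")
print(pc.model_dump())
# {'name': 'Elara', 'color': '#3362bb', 'gender': '', 'description': ''}
```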
@@ -13,7 +13,9 @@ RequestInput = signal("request_input")
ReceiveInput = signal("receive_input")

ClientStatus = signal("client_status")
+RequestClientStatus = signal("request_client_status")
AgentStatus = signal("agent_status")
+RequestAgentStatus = signal("request_agent_status")
ClientBootstraps = signal("client_bootstraps")
PromptSent = signal("prompt_sent")

@@ -28,6 +30,8 @@ AudioQueue = signal("audio_queue")

MessageEdited = signal("message_edited")

+ConfigSaved = signal("config_saved")
+
handlers = {
    "system": SystemMessage,
    "narrator": NarratorMessage,

@@ -38,7 +42,9 @@ handlers = {
    "request_input": RequestInput,
    "receive_input": ReceiveInput,
    "client_status": ClientStatus,
+    "request_client_status": RequestClientStatus,
    "agent_status": AgentStatus,
+    "request_agent_status": RequestAgentStatus,
    "client_bootstraps": ClientBootstraps,
    "clear_screen": ClearScreen,
    "remove_message": RemoveMessage,

@@ -49,4 +55,5 @@ handlers = {
    "message_edited": MessageEdited,
    "prompt_sent": PromptSent,
    "audio_queue": AudioQueue,
    "config_saved": ConfigSaved,
}
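The `handlers` dict is what lets other modules look up a blinker signal purely by name. Stripped of talemate's own emit wrapper (which packs the payload into an event object), the mechanism is roughly:

```python
from blinker import signal

ConfigSaved = signal("config_saved")
handlers = {"config_saved": ConfigSaved}

def on_config_saved(event):
    # module-level function, so blinker's weak reference stays alive
    print("config saved:", event)

handlers["config_saved"].connect(on_config_saved)

# elsewhere in the app, addressed purely by name:
handlers["config_saved"].send({"openai": {"api_key": "sk-..."}})
```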
@@ -1,10 +1,11 @@
"""
Keep track of clients and agents
"""

+import asyncio
import talemate.agents as agents
import talemate.client as clients
from talemate.emit import emit
+from talemate.emit.signals import handlers
import talemate.client.bootstrap as bootstrap

import structlog

@@ -14,6 +15,8 @@ AGENTS = {}
CLIENTS = {}


def get_agent(typ: str, *create_args, **create_kwargs):
    agent = AGENTS.get(typ)

@@ -94,11 +97,19 @@ async def emit_clients_status():
    """
    Will emit status of all clients
    """
    #log.debug("emit", type="client status")
    for client in CLIENTS.values():
        if client:
            await client.status()

+def _sync_emit_clients_status(*args, **kwargs):
+    """
+    Will emit status of all clients
+    in synchronous mode
+    """
+    loop = asyncio.get_event_loop()
+    loop.run_until_complete(emit_clients_status())
+
+handlers["request_client_status"].connect(_sync_emit_clients_status)

def emit_client_bootstraps():
    emit(

@@ -144,11 +155,13 @@ def emit_agent_status(cls, agent=None):
    )


-def emit_agents_status():
+def emit_agents_status(*args, **kwargs):
    """
    Will emit status of all agents
    """
    #log.debug("emit", type="agent status")
    for typ, cls in agents.AGENT_CLASSES.items():
        agent = AGENTS.get(typ)
        emit_agent_status(cls, agent)

+handlers["request_agent_status"].connect(emit_agents_status)
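`_sync_emit_clients_status` exists because blinker receivers are plain synchronous callables, while the status refresh is a coroutine. The general bridge pattern looks like this; it assumes, as `run_until_complete` above does, that no event loop is already running on the calling thread:

```python
import asyncio

async def refresh_status():
    # stand-in for emit_clients_status(): awaits each client's status()
    await asyncio.sleep(0)

def run_sync(*args, **kwargs):
    # signature is (*args, **kwargs) so it can be connected to any signal
    loop = asyncio.get_event_loop()
    loop.run_until_complete(refresh_status())
```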
@@ -343,7 +343,7 @@ class Prompt:
        parsed_text = env.from_string(prompt_text).render(self.vars)

        if self.dedupe_enabled:
-            parsed_text = dedupe_string(parsed_text, debug=True)
+            parsed_text = dedupe_string(parsed_text, debug=False)

        parsed_text = remove_extra_linebreaks(parsed_text)

@@ -395,7 +395,7 @@ class Prompt:
                f"Answer: " + loop.run_until_complete(memory.query(query, **kwargs)),
            ])
        else:
-            return loop.run_until_complete(memory.multi_query(query.split("\n"), **kwargs))
+            return loop.run_until_complete(memory.multi_query([q for q in query.split("\n") if q.strip()], **kwargs))

    def instruct_text(self, instruction:str, text:str):
        loop = asyncio.get_event_loop()

@@ -516,7 +516,7 @@ class Prompt:

        log.warning("parse_json_response error on first attempt - sending to AI to fix", response=response, error=e)
        fixed_response = await self.client.send_prompt(
-            f"fix the syntax errors in this JSON string, but keep the structure as is.\n\nError:{e}\n\n```json\n{response}\n```<|BOT|>"+"{",
+            f"fix the syntax errors in this JSON string, but keep the structure as is. Remove any comments.\n\nError:{e}\n\n```json\n{response}\n```<|BOT|>"+"{",
            kind="analyze_long",
        )
@@ -33,8 +33,7 @@ You may chose to have {{ talking_character.name}} respond to the conversation, o

Use an informal and colloquial register with a conversational tone. Overall, their dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy.

-Spoken word should be enclosed in double quotes, e.g. "Hello, how are you?"
-Narration and actions should be enclosed in asterisks, e.g. *She smiles.*
+Spoken words MUST be enclosed in double quotes, e.g. {{ talking_character.name}}: "spoken words.".
{{ extra_instructions }}
<|CLOSE_SECTION|>
{% if memory -%}
@@ -8,12 +8,19 @@ Scenario Premise: {{ scene.description }}
{% endfor %}
{% endblock -%}
<|CLOSE_SECTION|>
-{% for scene_context in scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context())) -%}
+{% for scene_context in scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context()), min_dialogue=25) -%}
{{ scene_context }}
{% endfor %}
<|SECTION:TASK|>
Based on the previous line '{{ last_line }}', create the next line of narration. This line should focus solely on describing sensory details (like sounds, sights, smells, tactile sensations) or external actions that move the story forward. Avoid including any character's internal thoughts, feelings, or dialogue. Your narration should directly respond to '{{ last_line }}', either by elaborating on the immediate scene or by subtly advancing the plot. Generate exactly one sentence of new narration. If the character is trying to determine some state, truth or situation, try to answer as part of the narration.

+Be creative and generate something new and interesting, but stay true to the setting and context of the story so far.
+
+Use an informal and colloquial register with a conversational tone. Overall, the narrative is informal, conversational, natural, and spontaneous, with a sense of immediacy.
+
+Narration style should be that of a 90s point and click adventure game. You are omniscient and can describe the scene in detail.
+
Only generate new narration. {{ extra_instructions }}
+[$REPETITION|Narration is getting repetitive. Try to choose different words to break up the repetitive text.]
<|CLOSE_SECTION|>
{{ set_prepared_response('*') }}
@@ -8,23 +8,23 @@ Last time we checked on {{ character.name }}:
{% endfor %}
<|CLOSE_SECTION|>

-{% for scene_context in scene.context_history(budget=max_tokens-300, min_dialogue=30) -%}
+{% for scene_context in scene.context_history(budget=max_tokens-300, min_dialogue=20) -%}
{{ scene_context }}
{% endfor %}

<|SECTION:INFORMATION|>
{{ query_memory("How old is {character.name}?") }}
-{{ query_scene("Where is {character.name}?") }}
-{{ query_scene("what is {character.name} doing?") }}
-{{ query_scene("what is {character.name} wearing?") }}
+{{ query_scene("Where is {character.name} and what is {character.name} doing?") }}
+{{ query_scene("what is {character.name} wearing? Be explicit.") }}
<|CLOSE_SECTION|>

<|SECTION:TASK|>
Last line of dialogue: {{ scene.history[-1] }}
Questions: Where is {{ character.name}} currently and what are they doing? What is {{ character.name }}'s appearance at the end of the dialogue? What is {{ character.pronoun_2 }} wearing? What position is {{ character.pronoun_2 }} in?
-Instruction: Answer the questions to describe {{ character.name }}'s appearance at the end of the dialogue and summarize into narrative description. Use the whole dialogue for context.
+Instruction: Answer the questions to describe {{ character.name }}'s appearance at the end of the dialogue and summarize into narrative description. Use the whole dialogue for context. You must fill in gaps using imagination as long as it fits the existing context. You will provide a confident and decisive answer to the question.
Content Context: This is a specific scene from {{ scene.context }}
Narration style: point and click adventure game from the 90s
-Expected Answer: A summarized visual description of {{ character.name }}'s appearance at the dialogue.
+Expected Answer: A brief summarized visual description of {{ character.name }}'s appearance at the end of the dialogue. NEVER break the fourth wall. (2 to 3 sentences)
{{ extra_instructions }}
<|CLOSE_SECTION|>
-Narrator answers: {{ bot_token }}At the end of the dialogue,
+{{ bot_token }}At the end of the dialogue,
@@ -1,3 +1,4 @@
+{% block extra_context -%}
<|SECTION:CONTEXT|>
Scenario Premise: {{ scene.description }}

@@ -9,19 +10,24 @@ NPCs: {{ npc_names }}
Player Character: {{ player_character.name }}
Content Context: {{ scene.context }}
<|CLOSE_SECTION|>
-{% for scene_context in scene.context_history(budget=max_tokens-300, min_dialogue=30, sections=False, dialogue_negative_offset=10) -%}
+{% endblock -%}
+{% for scene_context in scene.context_history(budget=max_tokens-300, min_dialogue=20, sections=False) -%}
{{ scene_context }}
{% endfor %}
<|SECTION:TASK|>
-Continue the current dialogue by narrating the progression of the scene
-Narration style: point and click adventure game from the 90s
+Continue the current dialogue by narrating the progression of the scene.
+
+If the scene is over, narrate the beginning of the next scene.
+
+Be creative and generate something new and interesting, but stay true to the setting and context of the story so far.
+
+Use an informal and colloquial register with a conversational tone. Overall, the narrative is informal, conversational, natural, and spontaneous, with a sense of immediacy.
+
+Narration style should be that of a 90s point and click adventure game. You are omniscient and can describe the scene in detail.
+
+Only generate new narration. Avoid including any character's internal thoughts or dialogue.
+Write 2 to 4 sentences. {{ extra_instructions }}
<|CLOSE_SECTION|>
{{ bot_token }}

{% for row in scene.history[-10:] -%}
{{ row }}
{% endfor %}
{{
set_prepared_response_random(
npc_names.split(", ") + [
@@ -6,15 +6,22 @@
{% endfor %}
<|SECTION:TASK|>
{% if query.endswith("?") -%}
Question: {{ query }}
Extra context: {{ query_memory(query, as_question_answer=False) }}
-Instruction: Analyze Context, History and Dialogue. When evaluating both story and memory, story is more important. You can fill in gaps using imagination as long as it is based on the existing context. Respect the scene progression and answer in the context of the end of the dialogue.
+Instruction: Analyze Context, History and Dialogue and then answer the question: "{{ query }}".
+
+When evaluating both story and context, story is more important. You can fill in gaps using imagination as long as it is based on the existing context.
+
+Respect the scene progression and answer in the context of the end of the dialogue.
+
+Use your imagination to fill in gaps in order to answer the question in a confident and decisive manner. Avoid uncertainty and vagueness.
{% else -%}
Instruction: {{ query }}
Extra context: {{ query_memory(query, as_question_answer=False) }}
-Answer based on Context, History and Dialogue. When evaluating both story and memory, story is more important. You can fill in gaps using imagination as long as it is based on the existing context.
+Answer based on Context, History and Dialogue.
+When evaluating both story and context, story is more important. You can fill in gaps using imagination as long as it is based on the existing context.
{% endif -%}
Content Context: This is a specific scene from {{ scene.context }}
-Your answer should be in the style of short narration that fits the context of the scene.
+Your answer should be in the style of short, concise narration that fits the context of the scene. (1 to 2 sentences)
{{ extra_instructions }}
<|CLOSE_SECTION|>
-Narrator answers: {% if at_the_end %}{{ bot_token }}At the end of the dialogue, {% endif %}
+{% if at_the_end %}{{ bot_token }}At the end of the dialogue, {% endif %}
@@ -8,5 +8,6 @@ Scenario Premise: {{ scene.description }}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Provide a visual description of what is currently happening in the scene. Don't progress the scene.
+{{ extra_instructions }}
<|CLOSE_SECTION|>
{{ bot_token }}At the end of the scene we currently see:

@@ -12,5 +12,6 @@ Content Context: {{ scene.context }}
<|SECTION:TASK|>
Narrate the passage of time that just occurred, subtly move the story forward, and set up the next scene.
Write 1 to 3 sentences.
+{{ extra_instructions }}
<|CLOSE_SECTION|>
{{ bot_token }}{{ narrative }}:
@@ -1,6 +1,6 @@

+{% set questions = instruct_text("Ask the narrator three (3) questions to gather more context from the past for the continuation of this conversation. If a character is asking about a state, location or information about an item or another character, make sure to include question(s) that help gather context for this.", text) %}
<|SECTION:CONTEXT|>
-{% for memory in query_memory(text, as_question_answer=False, max_tokens=max_tokens-500, iterate=20) -%}
+{% for memory in query_memory(questions, as_question_answer=False, max_tokens=max_tokens-500, iterate=10) -%}
{{ memory }}

{% endfor -%}

@@ -8,7 +8,7 @@
<|SECTION:TASK|>
Answer the following questions:

-{{ instruct_text("Ask the narrator three (3) questions to gather more context from the past for the continuation of this conversation. If a character is asking about a state, location or information about an item or another character, make sure to include question(s) that help gather context for this.", text) }}
+{{ questions }}

Your answers should be precise, truthful and short. Pay close attention to timestamps when retrieving information from the context.
@@ -1 +1 @@
-Instructions for a talented story analyst. The analyst will analyze parts of a story or dialogue and give truthful answers based on the dialogue or events given to him. The analyst will never make up facts or lie in his answers and will use crude, lewd and strong language if the content requires it.
+Instructions for a talented story analyst. The analyst will analyze parts of a story or dialogue and give truthful answers based on the dialogue or events given to him. The analyst will never make up facts or lie in his answers and will use crude, lewd and strong language if the context requires it.
@@ -56,7 +56,7 @@ async def websocket_endpoint(websocket, path):
                await instance.sync_client_bootstraps()
            except Exception as e:
                log.error("send_client_bootstraps", error=e, traceback=traceback.format_exc())
-            await asyncio.sleep(60)
+            await asyncio.sleep(15)

    send_client_bootstraps_task = asyncio.create_task(send_client_bootstraps())
@@ -1,5 +1,6 @@
import pydantic
import structlog
+from talemate import VERSION

from talemate.config import Config as AppConfigData, load_config, save_config

@@ -8,6 +9,12 @@ log = structlog.get_logger("talemate.server.config")
class ConfigPayload(pydantic.BaseModel):
    config: AppConfigData

+class DefaultCharacterPayload(pydantic.BaseModel):
+    name: str
+    gender: str
+    description: str
+    color: str = "#3362bb"
+
class ConfigPlugin:

    router = "config"

@@ -36,8 +43,38 @@ class ConfigPlugin:
        save_config(current_config)

        self.websocket_handler.config = current_config
+
        self.websocket_handler.queue_put({
            "type": "app_config",
            "data": load_config(),
+            "version": VERSION
        })
+        self.websocket_handler.queue_put({
+            "type": "config",
+            "action": "save_complete",
+        })
+
+    async def handle_save_default_character(self, data):
+
+        log.info("Saving default character", data=data["data"])
+
+        payload = DefaultCharacterPayload(**data["data"])
+
+        current_config = load_config()
+
+        current_config["game"]["default_player_character"] = payload.model_dump()
+
+        log.info("Saving default character", character=current_config["game"]["default_player_character"])
+
+        save_config(current_config)
+
+        self.websocket_handler.config = current_config
+        self.websocket_handler.queue_put({
+            "type": "app_config",
+            "data": load_config(),
+            "version": VERSION
+        })
+        self.websocket_handler.queue_put({
+            "type": "config",
+            "action": "save_default_character_complete",
+        })
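The new handler implies a frontend message shaped after `DefaultCharacterPayload`. A hypothetical example of the envelope; the `type` and `action` routing keys are inferred from `router = "config"` and the handler name, and are not confirmed by this excerpt:

```python
import json

message = {
    "type": "config",                    # matches ConfigPlugin.router
    "action": "save_default_character",  # inferred from handle_save_default_character
    "data": {
        "name": "Elara",
        "gender": "female",
        "description": "A wandering cartographer.",
        "color": "#3362bb",
    },
}
print(json.dumps(message))
```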
@@ -6,10 +6,11 @@ import textwrap
import structlog
import isodate
import datetime
-from typing import List
+from typing import List, Union
from thefuzz import fuzz
from colorama import Back, Fore, Style, init
from PIL import Image
+from nltk.tokenize import sent_tokenize

from talemate.scene_message import SceneMessage

log = structlog.get_logger("talemate.util")

@@ -497,13 +498,9 @@ def duration_to_timedelta(duration):
    if isinstance(duration, datetime.timedelta):
        return duration

-    # Check if the duration is an isodate.Duration object with a tdelta attribute
-    if hasattr(duration, 'tdelta'):
-        return duration.tdelta
-
    # If it's an isodate.Duration object with separate year, month, day, hour, minute, second attributes
    days = int(duration.years) * 365 + int(duration.months) * 30 + int(duration.days)
-    seconds = int(duration.hours) * 3600 + int(duration.minutes) * 60 + int(duration.seconds)
+    seconds = duration.tdelta.seconds
    return datetime.timedelta(days=days, seconds=seconds)

def timedelta_to_duration(delta):
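The rewritten conversion normalizes years and months to 365 and 30 days respectively and takes the sub-day remainder from the duration's internal `tdelta`. The replaced line relied on `duration.hours`, which `datetime.timedelta` does not expose, so it likely raised for real `isodate.Duration` values. For example:

```python
import datetime
import isodate

d = isodate.parse_duration("P1Y2M3DT4H5M6S")  # isodate.Duration, not a timedelta

days = int(d.years) * 365 + int(d.months) * 30 + int(d.days)
seconds = d.tdelta.seconds
print(datetime.timedelta(days=days, seconds=seconds))
# -> 428 days, 4:05:06
```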
@@ -737,12 +734,91 @@ def extract_json(s):
    json_object = json.loads(json_string)
    return json_string, json_object

+def similarity_score(line: str, lines: list[str], similarity_threshold: int = 95) -> tuple[bool, int, str]:
+    """
+    Checks if a line is similar to any of the lines in the list of lines.
+
+    Arguments:
+        line (str): The line to check.
+        lines (list): The list of lines to check against.
+        similarity_threshold (int): The similarity threshold to use when comparing lines.
+
+    Returns:
+        bool: Whether a similar line was found.
+        int: The similarity score of the line. If no similar line was found, the highest similarity score is returned.
+        str: The similar line that was found. If no similar line was found, None is returned.
+    """
+
+    highest_similarity = 0
+
+    for existing_line in lines:
+        similarity = fuzz.ratio(line, existing_line)
+        highest_similarity = max(highest_similarity, similarity)
+        if similarity >= similarity_threshold:
+            return True, similarity, existing_line
+
+    return False, highest_similarity, None
+
+def dedupe_sentences(line_a:str, line_b:str, similarity_threshold:int=95, debug:bool=False, split_on_comma:bool=True) -> str:
+    """
+    Will split both lines into sentences and then compare each sentence in line_a
+    against similar sentences in line_b. If a similar sentence is found, it will be
+    removed from line_a.
+
+    The similarity threshold is used to determine if two sentences are similar.
+
+    Arguments:
+        line_a (str): The first line.
+        line_b (str): The second line.
+        similarity_threshold (int): The similarity threshold to use when comparing sentences.
+        debug (bool): Whether to log debug messages.
+        split_on_comma (bool): Whether to split line_b sentences on commas as well.
+
+    Returns:
+        str: the cleaned line_a.
+    """
+
+    line_a_sentences = sent_tokenize(line_a)
+    line_b_sentences = sent_tokenize(line_b)
+
+    cleaned_line_a_sentences = []
+
+    if split_on_comma:
+        # collect all sentences from line_b that contain a comma
+        line_b_sentences_with_comma = []
+        for line_b_sentence in line_b_sentences:
+            if "," in line_b_sentence:
+                line_b_sentences_with_comma.append(line_b_sentence)
+
+        # then split all sentences in line_b_sentences_with_comma on the comma
+        # and extend line_b_sentences with the split sentences, making sure
+        # to strip whitespace from the beginning and end of each sentence
+        for line_b_sentence in line_b_sentences_with_comma:
+            line_b_sentences.extend([s.strip() for s in line_b_sentence.split(",")])
+
+    for line_a_sentence in line_a_sentences:
+        similar_found = False
+        for line_b_sentence in line_b_sentences:
+            similarity = fuzz.ratio(line_a_sentence, line_b_sentence)
+            if similarity >= similarity_threshold:
+                if debug:
+                    log.debug("DEDUPE SENTENCE", similarity=similarity, line_a_sentence=line_a_sentence, line_b_sentence=line_b_sentence)
+                similar_found = True
+                break
+        if not similar_found:
+            cleaned_line_a_sentences.append(line_a_sentence)
+
+    return " ".join(cleaned_line_a_sentences)
+
def dedupe_string(s: str, min_length: int = 32, similarity_threshold: int = 95, debug: bool = False) -> str:

    """
    Removes duplicate lines from a string.

-    Parameters:
+    Arguments:
        s (str): The input string.
        min_length (int): The minimum length of a line to be checked for duplicates.
        similarity_threshold (int): The similarity threshold to use when comparing lines.
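Both helpers ride on `thefuzz` ratio scores (0 to 100). A quick illustration of the intended behavior; it assumes `thefuzz` and the nltk punkt tokenizer data are installed, and the import path is inferred:

```python
from talemate.util import similarity_score, dedupe_sentences  # path assumed

found, score, match = similarity_score(
    "She walks to the door.",
    ["The rain keeps falling.", "She walks to the door!"],
    similarity_threshold=95,
)
print(found, score, match)
# True once any line scores >= 95; otherwise (False, highest_score, None)

print(dedupe_sentences(
    "The rain falls. She smiles warmly.",
    "She smiles warmly, watching the rain.",
    similarity_threshold=90,
))
# -> "The rain falls."  (the near-duplicate sentence is dropped from line_a;
#     it is caught thanks to the comma-split of line_b)
```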
@@ -861,6 +937,15 @@ def ensure_dialog_line_format(line:str):
        elif segment_open is not None and segment_open != c:
            # open segment is not the same as the current character
            # opening - close the current segment and open a new one

+            # if we are at the last character we append the segment
+            if i == len(line)-1 and segment.strip():
+                segment += c
+                segments += [segment.strip()]
+                segment_open = None
+                segment = None
+                continue
+
            segments += [segment.strip()]
            segment_open = c
            segment = c

@@ -876,14 +961,15 @@ def ensure_dialog_line_format(line:str):
            segment += c

    if segment is not None:
-        segments += [segment.strip()]
+        if segment.strip().strip("*").strip('"'):
+            segments += [segment.strip()]

    for i in range(len(segments)):
        segment = segments[i]
        if segment in ['"', '*']:
            if i > 0:
                prev_segment = segments[i-1]
-                if prev_segment[-1] not in ['"', '*']:
+                if prev_segment and prev_segment[-1] not in ['"', '*']:
                    segments[i-1] = f"{prev_segment}{segment}"
                    segments[i] = ""
                    continue

@@ -924,4 +1010,27 @@ def ensure_dialog_line_format(line:str):
        elif next_segment and next_segment[0] == '*':
            segments[i] = f"\"{segment}\""

-    return " ".join(segment for segment in segments if segment)
+    for i in range(len(segments)):
+        segments[i] = clean_uneven_markers(segments[i], '"')
+        segments[i] = clean_uneven_markers(segments[i], '*')
+
+    return " ".join(segment for segment in segments if segment).strip()
+
+
+def clean_uneven_markers(chunk:str, marker:str):
+
+    # if there is an uneven number of markers, remove the orphan if it sits
+    # at the edge of the chunk; if it sits in the middle, balance or drop it
+    count = chunk.count(marker)
+
+    if count % 2 == 1:
+        if chunk.endswith(marker):
+            chunk = chunk[:-1]
+        elif chunk.startswith(marker):
+            chunk = chunk[1:]
+        elif count == 1:
+            chunk = chunk.replace(marker, "")
+        else:
+            chunk += marker
+
+    return chunk
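A few worked cases for the new marker cleanup, one per branch of the odd-count logic; the import path is inferred:

```python
from talemate.util import clean_uneven_markers  # path assumed

print(clean_uneven_markers('"Hello there.', '"'))       # leading orphan   -> Hello there.
print(clean_uneven_markers('He frowned."', '"'))        # trailing orphan  -> He frowned.
print(clean_uneven_markers('She said "hi', '"'))        # lone mid-marker  -> She said hi
print(clean_uneven_markers('say "one" and "two', '"'))  # odd count > 1    -> say "one" and "two"
```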
talemate_frontend/package-lock.json (generated, 243 changes)
@@ -64,12 +64,13 @@
      }
    },
    "node_modules/@babel/code-frame": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/code-frame/-/code-frame-7.22.5.tgz",
-      "integrity": "sha512-Xmwn266vad+6DAqEB2A6V/CcZVp62BbwVmcOJc2RPuwih1kw02TjQvWVWlcKGbBPd+8/0V5DEkOcizRGYsspYQ==",
+      "version": "7.23.5",
+      "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.23.5.tgz",
+      "integrity": "sha512-CgH3s1a96LipHCmSUmYFPwY7MNx8C3avkq7i4Wl3cfa662ldtUe4VM1TPXX70pfmrlWTb6jLqTYrZyT2ZTJBgA==",
      "dev": true,
      "dependencies": {
-        "@babel/highlight": "^7.22.5"
+        "@babel/highlight": "^7.23.4",
+        "chalk": "^2.4.2"
      },
      "engines": {
        "node": ">=6.9.0"

@@ -129,12 +130,12 @@
      }
    },
    "node_modules/@babel/generator": {
-      "version": "7.22.7",
-      "resolved": "https://registry.npmmirror.com/@babel/generator/-/generator-7.22.7.tgz",
-      "integrity": "sha512-p+jPjMG+SI8yvIaxGgeW24u7q9+5+TGpZh8/CuB7RhBKd7RCy8FayNEFNNKrNK/eUcY/4ExQqLmyrvBXKsIcwQ==",
+      "version": "7.23.5",
+      "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.23.5.tgz",
+      "integrity": "sha512-BPssCHrBD+0YrxviOa3QzpqwhNIXKEtOa2jQrm4FlmkC2apYgRnQcmPWiGZDlGxiNtltnUFolMe8497Esry+jA==",
      "dev": true,
      "dependencies": {
-        "@babel/types": "^7.22.5",
+        "@babel/types": "^7.23.5",
        "@jridgewell/gen-mapping": "^0.3.2",
        "@jridgewell/trace-mapping": "^0.3.17",
        "jsesc": "^2.5.1"

@@ -243,22 +244,22 @@
      }
    },
    "node_modules/@babel/helper-environment-visitor": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.5.tgz",
-      "integrity": "sha512-XGmhECfVA/5sAt+H+xpSg0mfrHq6FzNr9Oxh7PSEBBRUb/mL7Kz3NICXb194rCqAEdxkhPT1a88teizAFyvk8Q==",
+      "version": "7.22.20",
+      "resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.20.tgz",
+      "integrity": "sha512-zfedSIzFhat/gFhWfHtgWvlec0nqB9YEIVrpuwjruLlXfUSnA8cJB0miHKwqDnQ7d32aKo2xt88/xZptwxbfhA==",
      "dev": true,
      "engines": {
        "node": ">=6.9.0"
      }
    },
    "node_modules/@babel/helper-function-name": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/helper-function-name/-/helper-function-name-7.22.5.tgz",
-      "integrity": "sha512-wtHSq6jMRE3uF2otvfuD3DIvVhOsSNshQl0Qrd7qC9oQJzHvOL4qQXlQn2916+CXGywIjpGuIkoyZRRxHPiNQQ==",
+      "version": "7.23.0",
+      "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.23.0.tgz",
+      "integrity": "sha512-OErEqsrxjZTJciZ4Oo+eoZqeW9UIiOcuYKRJA4ZAgV9myA+pOXhhmpfNCKjEH/auVfEYVFJ6y1Tc4r0eIApqiw==",
      "dev": true,
      "dependencies": {
-        "@babel/template": "^7.22.5",
-        "@babel/types": "^7.22.5"
+        "@babel/template": "^7.22.15",
+        "@babel/types": "^7.23.0"
      },
      "engines": {
        "node": ">=6.9.0"

@@ -412,18 +413,18 @@
      }
    },
    "node_modules/@babel/helper-string-parser": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/helper-string-parser/-/helper-string-parser-7.22.5.tgz",
-      "integrity": "sha512-mM4COjgZox8U+JcXQwPijIZLElkgEpO5rsERVDJTc2qfCDfERyob6k5WegS14SX18IIjv+XD+GrqNumY5JRCDw==",
+      "version": "7.23.4",
+      "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.23.4.tgz",
+      "integrity": "sha512-803gmbQdqwdf4olxrX4AJyFBV/RTr3rSmOj0rKwesmzlfhYNDEs+/iOcznzpNWlJlIlTJC2QfPFcHB6DlzdVLQ==",
      "dev": true,
      "engines": {
        "node": ">=6.9.0"
      }
    },
    "node_modules/@babel/helper-validator-identifier": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.5.tgz",
-      "integrity": "sha512-aJXu+6lErq8ltp+JhkJUfk1MTGyuA4v7f3pA+BJ5HLfNC6nAQ0Cpi9uOquUj8Hehg0aUiHzWQbOVJGao6ztBAQ==",
+      "version": "7.22.20",
+      "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.20.tgz",
+      "integrity": "sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==",
      "dev": true,
      "engines": {
        "node": ">=6.9.0"

@@ -468,13 +469,13 @@
      }
    },
    "node_modules/@babel/highlight": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/highlight/-/highlight-7.22.5.tgz",
-      "integrity": "sha512-BSKlD1hgnedS5XRnGOljZawtag7H1yPfQp0tdNJCHoH6AZ+Pcm9VvkrK59/Yy593Ypg0zMxH2BxD1VPYUQ7UIw==",
+      "version": "7.23.4",
+      "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.23.4.tgz",
+      "integrity": "sha512-acGdbYSfp2WheJoJm/EBBBLh/ID8KDc64ISZ9DYtBmC8/Q204PZJLHyzeB5qMzJ5trcOkybd78M4x2KWsUq++A==",
      "dev": true,
      "dependencies": {
-        "@babel/helper-validator-identifier": "^7.22.5",
-        "chalk": "^2.0.0",
+        "@babel/helper-validator-identifier": "^7.22.20",
+        "chalk": "^2.4.2",
        "js-tokens": "^4.0.0"
      },
      "engines": {

@@ -482,9 +483,9 @@
      }
    },
    "node_modules/@babel/parser": {
-      "version": "7.22.7",
-      "resolved": "https://registry.npmmirror.com/@babel/parser/-/parser-7.22.7.tgz",
-      "integrity": "sha512-7NF8pOkHP5o2vpmGgNGcfAeCvOYhGLyA3Z4eBQkT1RJlWu47n63bCs93QfJ2hIAFCil7L5P2IWhs1oToVgrL0Q==",
+      "version": "7.23.5",
+      "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.23.5.tgz",
+      "integrity": "sha512-hOOqoiNXrmGdFbhgCzu6GiURxUgM27Xwd/aPuu8RfHEZPBzL1Z54okAHAQjXfcQNwvrlkAmAp4SlRTZ45vlthQ==",
      "bin": {
        "parser": "bin/babel-parser.js"
      },

@@ -1773,33 +1774,33 @@
      }
    },
    "node_modules/@babel/template": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/template/-/template-7.22.5.tgz",
-      "integrity": "sha512-X7yV7eiwAxdj9k94NEylvbVHLiVG1nvzCV2EAowhxLTwODV1jl9UzZ48leOC0sH7OnuHrIkllaBgneUykIcZaw==",
+      "version": "7.22.15",
+      "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.22.15.tgz",
+      "integrity": "sha512-QPErUVm4uyJa60rkI73qneDacvdvzxshT3kksGqlGWYdOTIUOwJ7RDUL8sGqslY1uXWSL6xMFKEXDS3ox2uF0w==",
      "dev": true,
      "dependencies": {
-        "@babel/code-frame": "^7.22.5",
-        "@babel/parser": "^7.22.5",
-        "@babel/types": "^7.22.5"
+        "@babel/code-frame": "^7.22.13",
+        "@babel/parser": "^7.22.15",
+        "@babel/types": "^7.22.15"
      },
      "engines": {
        "node": ">=6.9.0"
      }
    },
    "node_modules/@babel/traverse": {
-      "version": "7.22.8",
-      "resolved": "https://registry.npmmirror.com/@babel/traverse/-/traverse-7.22.8.tgz",
-      "integrity": "sha512-y6LPR+wpM2I3qJrsheCTwhIinzkETbplIgPBbwvqPKc+uljeA5gP+3nP8irdYt1mjQaDnlIcG+dw8OjAco4GXw==",
+      "version": "7.23.5",
+      "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.23.5.tgz",
+      "integrity": "sha512-czx7Xy5a6sapWWRx61m1Ke1Ra4vczu1mCTtJam5zRTBOonfdJ+S/B6HYmGYu3fJtr8GGET3si6IhgWVBhJ/m8w==",
      "dev": true,
      "dependencies": {
-        "@babel/code-frame": "^7.22.5",
-        "@babel/generator": "^7.22.7",
-        "@babel/helper-environment-visitor": "^7.22.5",
-        "@babel/helper-function-name": "^7.22.5",
+        "@babel/code-frame": "^7.23.5",
+        "@babel/generator": "^7.23.5",
+        "@babel/helper-environment-visitor": "^7.22.20",
+        "@babel/helper-function-name": "^7.23.0",
        "@babel/helper-hoist-variables": "^7.22.5",
        "@babel/helper-split-export-declaration": "^7.22.6",
-        "@babel/parser": "^7.22.7",
-        "@babel/types": "^7.22.5",
+        "@babel/parser": "^7.23.5",
+        "@babel/types": "^7.23.5",
        "debug": "^4.1.0",
        "globals": "^11.1.0"
      },

@@ -1808,13 +1809,13 @@
      }
    },
    "node_modules/@babel/types": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/types/-/types-7.22.5.tgz",
-      "integrity": "sha512-zo3MIHGOkPOfoRXitsgHLjEXmlDaD/5KU1Uzuc9GNiZPhSqVxVRtxuPaSBZDsYZ9qV88AjtMtWW7ww98loJ9KA==",
+      "version": "7.23.5",
+      "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.23.5.tgz",
+      "integrity": "sha512-ON5kSOJwVO6xXVRTvOI0eOnWe7VdUcIpsovGo9U/Br4Ie4UVFQTboO2cYnDhAGU6Fp+UxSiT+pMft0SMHfuq6w==",
      "dev": true,
      "dependencies": {
-        "@babel/helper-string-parser": "^7.22.5",
-        "@babel/helper-validator-identifier": "^7.22.5",
+        "@babel/helper-string-parser": "^7.23.4",
+        "@babel/helper-validator-identifier": "^7.22.20",
        "to-fast-properties": "^2.0.0"
      },
      "engines": {

@@ -3041,9 +3042,9 @@
    },
    "node_modules/@vue/vue-loader-v15": {
      "name": "vue-loader",
-      "version": "15.10.1",
-      "resolved": "https://registry.npmmirror.com/vue-loader/-/vue-loader-15.10.1.tgz",
-      "integrity": "sha512-SaPHK1A01VrNthlix6h1hq4uJu7S/z0kdLUb6klubo738NeQoLbS6V9/d8Pv19tU0XdQKju3D1HSKuI8wJ5wMA==",
+      "version": "15.11.1",
+      "resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-15.11.1.tgz",
+      "integrity": "sha512-0iw4VchYLePqJfJu9s62ACWUXeSqM30SQqlIftbYWM3C+jpPcEHKSPUZBLjSF9au4HTHQ/naF6OGnO3Q/qGR3Q==",
      "dev": true,
      "dependencies": {
        "@vue/component-compiler-utils": "^3.1.0",

@@ -3060,6 +3061,9 @@
        "cache-loader": {
          "optional": true
        },
+        "prettier": {
+          "optional": true
+        },
        "vue-template-compiler": {
          "optional": true
        }

@@ -8158,9 +8162,23 @@
      }
    },
    "node_modules/postcss": {
-      "version": "8.4.25",
-      "resolved": "https://registry.npmmirror.com/postcss/-/postcss-8.4.25.tgz",
-      "integrity": "sha512-7taJ/8t2av0Z+sQEvNzCkpDynl0tX3uJMCODi6nT3PfASC7dYCWV9aQ+uiCf+KBD4SEFcu+GvJdGdwzQ6OSjCw==",
+      "version": "8.4.31",
+      "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
+      "integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
+      "funding": [
+        {
+          "type": "opencollective",
+          "url": "https://opencollective.com/postcss/"
+        },
+        {
+          "type": "tidelift",
+          "url": "https://tidelift.com/funding/github/npm/postcss"
+        },
+        {
+          "type": "github",
+          "url": "https://github.com/sponsors/ai"
+        }
+      ],
      "dependencies": {
        "nanoid": "^3.3.6",
        "picocolors": "^1.0.0",

@@ -11210,12 +11228,13 @@
      }
    },
    "@babel/code-frame": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/code-frame/-/code-frame-7.22.5.tgz",
-      "integrity": "sha512-Xmwn266vad+6DAqEB2A6V/CcZVp62BbwVmcOJc2RPuwih1kw02TjQvWVWlcKGbBPd+8/0V5DEkOcizRGYsspYQ==",
+      "version": "7.23.5",
+      "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.23.5.tgz",
+      "integrity": "sha512-CgH3s1a96LipHCmSUmYFPwY7MNx8C3avkq7i4Wl3cfa662ldtUe4VM1TPXX70pfmrlWTb6jLqTYrZyT2ZTJBgA==",
      "dev": true,
      "requires": {
-        "@babel/highlight": "^7.22.5"
+        "@babel/highlight": "^7.23.4",
+        "chalk": "^2.4.2"
      }
    },
    "@babel/compat-data": {

@@ -11259,12 +11278,12 @@
      }
    },
    "@babel/generator": {
-      "version": "7.22.7",
-      "resolved": "https://registry.npmmirror.com/@babel/generator/-/generator-7.22.7.tgz",
-      "integrity": "sha512-p+jPjMG+SI8yvIaxGgeW24u7q9+5+TGpZh8/CuB7RhBKd7RCy8FayNEFNNKrNK/eUcY/4ExQqLmyrvBXKsIcwQ==",
+      "version": "7.23.5",
+      "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.23.5.tgz",
+      "integrity": "sha512-BPssCHrBD+0YrxviOa3QzpqwhNIXKEtOa2jQrm4FlmkC2apYgRnQcmPWiGZDlGxiNtltnUFolMe8497Esry+jA==",
      "dev": true,
      "requires": {
-        "@babel/types": "^7.22.5",
+        "@babel/types": "^7.23.5",
        "@jridgewell/gen-mapping": "^0.3.2",
        "@jridgewell/trace-mapping": "^0.3.17",
        "jsesc": "^2.5.1"

@@ -11343,19 +11362,19 @@
      }
    },
    "@babel/helper-environment-visitor": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.5.tgz",
-      "integrity": "sha512-XGmhECfVA/5sAt+H+xpSg0mfrHq6FzNr9Oxh7PSEBBRUb/mL7Kz3NICXb194rCqAEdxkhPT1a88teizAFyvk8Q==",
+      "version": "7.22.20",
+      "resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.22.20.tgz",
+      "integrity": "sha512-zfedSIzFhat/gFhWfHtgWvlec0nqB9YEIVrpuwjruLlXfUSnA8cJB0miHKwqDnQ7d32aKo2xt88/xZptwxbfhA==",
      "dev": true
    },
    "@babel/helper-function-name": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/helper-function-name/-/helper-function-name-7.22.5.tgz",
-      "integrity": "sha512-wtHSq6jMRE3uF2otvfuD3DIvVhOsSNshQl0Qrd7qC9oQJzHvOL4qQXlQn2916+CXGywIjpGuIkoyZRRxHPiNQQ==",
+      "version": "7.23.0",
+      "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.23.0.tgz",
+      "integrity": "sha512-OErEqsrxjZTJciZ4Oo+eoZqeW9UIiOcuYKRJA4ZAgV9myA+pOXhhmpfNCKjEH/auVfEYVFJ6y1Tc4r0eIApqiw==",
      "dev": true,
      "requires": {
-        "@babel/template": "^7.22.5",
-        "@babel/types": "^7.22.5"
+        "@babel/template": "^7.22.15",
+        "@babel/types": "^7.23.0"
      }
    },
    "@babel/helper-hoist-variables": {

@@ -11470,15 +11489,15 @@
      }
    },
    "@babel/helper-string-parser": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/helper-string-parser/-/helper-string-parser-7.22.5.tgz",
-      "integrity": "sha512-mM4COjgZox8U+JcXQwPijIZLElkgEpO5rsERVDJTc2qfCDfERyob6k5WegS14SX18IIjv+XD+GrqNumY5JRCDw==",
+      "version": "7.23.4",
+      "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.23.4.tgz",
+      "integrity": "sha512-803gmbQdqwdf4olxrX4AJyFBV/RTr3rSmOj0rKwesmzlfhYNDEs+/iOcznzpNWlJlIlTJC2QfPFcHB6DlzdVLQ==",
      "dev": true
    },
    "@babel/helper-validator-identifier": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.5.tgz",
-      "integrity": "sha512-aJXu+6lErq8ltp+JhkJUfk1MTGyuA4v7f3pA+BJ5HLfNC6nAQ0Cpi9uOquUj8Hehg0aUiHzWQbOVJGao6ztBAQ==",
+      "version": "7.22.20",
+      "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.22.20.tgz",
+      "integrity": "sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==",
      "dev": true
    },
    "@babel/helper-validator-option": {

@@ -11511,20 +11530,20 @@
      }
    },
    "@babel/highlight": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/highlight/-/highlight-7.22.5.tgz",
-      "integrity": "sha512-BSKlD1hgnedS5XRnGOljZawtag7H1yPfQp0tdNJCHoH6AZ+Pcm9VvkrK59/Yy593Ypg0zMxH2BxD1VPYUQ7UIw==",
+      "version": "7.23.4",
+      "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.23.4.tgz",
+      "integrity": "sha512-acGdbYSfp2WheJoJm/EBBBLh/ID8KDc64ISZ9DYtBmC8/Q204PZJLHyzeB5qMzJ5trcOkybd78M4x2KWsUq++A==",
      "dev": true,
      "requires": {
-        "@babel/helper-validator-identifier": "^7.22.5",
-        "chalk": "^2.0.0",
+        "@babel/helper-validator-identifier": "^7.22.20",
+        "chalk": "^2.4.2",
        "js-tokens": "^4.0.0"
      }
    },
    "@babel/parser": {
-      "version": "7.22.7",
-      "resolved": "https://registry.npmmirror.com/@babel/parser/-/parser-7.22.7.tgz",
-      "integrity": "sha512-7NF8pOkHP5o2vpmGgNGcfAeCvOYhGLyA3Z4eBQkT1RJlWu47n63bCs93QfJ2hIAFCil7L5P2IWhs1oToVgrL0Q=="
+      "version": "7.23.5",
+      "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.23.5.tgz",
+      "integrity": "sha512-hOOqoiNXrmGdFbhgCzu6GiURxUgM27Xwd/aPuu8RfHEZPBzL1Z54okAHAQjXfcQNwvrlkAmAp4SlRTZ45vlthQ=="
    },
    "@babel/plugin-bugfix-safari-id-destructuring-collision-in-function-expression": {
      "version": "7.22.5",

@@ -12382,42 +12401,42 @@
      }
    },
    "@babel/template": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/template/-/template-7.22.5.tgz",
-      "integrity": "sha512-X7yV7eiwAxdj9k94NEylvbVHLiVG1nvzCV2EAowhxLTwODV1jl9UzZ48leOC0sH7OnuHrIkllaBgneUykIcZaw==",
+      "version": "7.22.15",
+      "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.22.15.tgz",
+      "integrity": "sha512-QPErUVm4uyJa60rkI73qneDacvdvzxshT3kksGqlGWYdOTIUOwJ7RDUL8sGqslY1uXWSL6xMFKEXDS3ox2uF0w==",
      "dev": true,
      "requires": {
-        "@babel/code-frame": "^7.22.5",
-        "@babel/parser": "^7.22.5",
-        "@babel/types": "^7.22.5"
+        "@babel/code-frame": "^7.22.13",
+        "@babel/parser": "^7.22.15",
+        "@babel/types": "^7.22.15"
      }
    },
    "@babel/traverse": {
-      "version": "7.22.8",
-      "resolved": "https://registry.npmmirror.com/@babel/traverse/-/traverse-7.22.8.tgz",
-      "integrity": "sha512-y6LPR+wpM2I3qJrsheCTwhIinzkETbplIgPBbwvqPKc+uljeA5gP+3nP8irdYt1mjQaDnlIcG+dw8OjAco4GXw==",
+      "version": "7.23.5",
+      "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.23.5.tgz",
+      "integrity": "sha512-czx7Xy5a6sapWWRx61m1Ke1Ra4vczu1mCTtJam5zRTBOonfdJ+S/B6HYmGYu3fJtr8GGET3si6IhgWVBhJ/m8w==",
      "dev": true,
      "requires": {
-        "@babel/code-frame": "^7.22.5",
-        "@babel/generator": "^7.22.7",
-        "@babel/helper-environment-visitor": "^7.22.5",
-        "@babel/helper-function-name": "^7.22.5",
+        "@babel/code-frame": "^7.23.5",
+        "@babel/generator": "^7.23.5",
+        "@babel/helper-environment-visitor": "^7.22.20",
+        "@babel/helper-function-name": "^7.23.0",
        "@babel/helper-hoist-variables": "^7.22.5",
        "@babel/helper-split-export-declaration": "^7.22.6",
-        "@babel/parser": "^7.22.7",
-        "@babel/types": "^7.22.5",
+        "@babel/parser": "^7.23.5",
+        "@babel/types": "^7.23.5",
        "debug": "^4.1.0",
        "globals": "^11.1.0"
      }
    },
    "@babel/types": {
-      "version": "7.22.5",
-      "resolved": "https://registry.npmmirror.com/@babel/types/-/types-7.22.5.tgz",
-      "integrity": "sha512-zo3MIHGOkPOfoRXitsgHLjEXmlDaD/5KU1Uzuc9GNiZPhSqVxVRtxuPaSBZDsYZ9qV88AjtMtWW7ww98loJ9KA==",
+      "version": "7.23.5",
+      "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.23.5.tgz",
+      "integrity": "sha512-ON5kSOJwVO6xXVRTvOI0eOnWe7VdUcIpsovGo9U/Br4Ie4UVFQTboO2cYnDhAGU6Fp+UxSiT+pMft0SMHfuq6w==",
      "dev": true,
      "requires": {
-        "@babel/helper-string-parser": "^7.22.5",
-        "@babel/helper-validator-identifier": "^7.22.5",
+        "@babel/helper-string-parser": "^7.23.4",
+        "@babel/helper-validator-identifier": "^7.22.20",
        "to-fast-properties": "^2.0.0"
      }
    },

@@ -13458,9 +13477,9 @@
      "integrity": "sha512-7OjdcV8vQ74eiz1TZLzZP4JwqM5fA94K6yntPS5Z25r9HDuGNzaGdgvwKYq6S+MxwF0TFRwe50fIR/MYnakdkQ=="
    },
    "@vue/vue-loader-v15": {
-      "version": "npm:vue-loader@15.10.1",
-      "resolved": "https://registry.npmmirror.com/vue-loader/-/vue-loader-15.10.1.tgz",
-      "integrity": "sha512-SaPHK1A01VrNthlix6h1hq4uJu7S/z0kdLUb6klubo738NeQoLbS6V9/d8Pv19tU0XdQKju3D1HSKuI8wJ5wMA==",
+      "version": "npm:vue-loader@15.11.1",
+      "resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-15.11.1.tgz",
+      "integrity": "sha512-0iw4VchYLePqJfJu9s62ACWUXeSqM30SQqlIftbYWM3C+jpPcEHKSPUZBLjSF9au4HTHQ/naF6OGnO3Q/qGR3Q==",
      "dev": true,
      "requires": {
        "@vue/component-compiler-utils": "^3.1.0",

@@ -17572,9 +17591,9 @@
      }
    },
    "postcss": {
-      "version": "8.4.25",
-      "resolved": "https://registry.npmmirror.com/postcss/-/postcss-8.4.25.tgz",
-      "integrity": "sha512-7taJ/8t2av0Z+sQEvNzCkpDynl0tX3uJMCODi6nT3PfASC7dYCWV9aQ+uiCf+KBD4SEFcu+GvJdGdwzQ6OSjCw==",
+      "version": "8.4.31",
+      "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.4.31.tgz",
+      "integrity": "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==",
      "requires": {
        "nanoid": "^3.3.6",
        "picocolors": "^1.0.0",
@@ -19,7 +19,7 @@
            <v-chip v-if="agent.data.experimental" color="warning" size="x-small">experimental</v-chip>
        </v-list-item>
    </v-list>
-    <AgentModal :dialog="dialog" :formTitle="formTitle" @save="saveAgent" @update:dialog="updateDialog"></AgentModal>
+    <AgentModal :dialog="state.dialog" :formTitle="state.formTitle" @save="saveAgent" @update:dialog="updateDialog"></AgentModal>
    </div>
</template>
@@ -21,7 +21,7 @@
            {{ client.type }}
            <v-chip label size="x-small" variant="outlined" class="ml-1">ctx {{ client.max_token_length }}</v-chip>
        </v-list-item-subtitle>
-        <v-list-item-content density="compact">
+        <div density="compact">
            <v-slider
                hide-details
                v-model="client.max_token_length"

@@ -32,9 +32,15 @@
                @click.stop
                density="compact"
            ></v-slider>
-        </v-list-item-content>
+        </div>
        <v-list-item-subtitle class="text-center">

+            <v-tooltip text="No LLM prompt template for this model. Using default. Templates can be added in ./templates/llm-prompt" v-if="client.status === 'idle' && client.data && !client.data.has_prompt_template" max-width="200">
+                <template v-slot:activator="{ props }">
+                    <v-icon x-size="14" class="mr-1" v-bind="props" color="orange">mdi-alert</v-icon>
+                </template>
+            </v-tooltip>
+
            <v-tooltip text="Edit client">
                <template v-slot:activator="{ props }">
                    <v-btn size="x-small" class="mr-1" v-bind="props" variant="tonal" density="comfortable" rounded="sm" @click.stop="editClient(index)" icon="mdi-cogs"></v-btn>

@@ -56,7 +62,7 @@
        </v-list-item-subtitle>
    </v-list-item>
    </v-list>
-    <ClientModal :dialog="dialog" :formTitle="formTitle" @save="saveClient" @update:dialog="updateDialog"></ClientModal>
+    <ClientModal :dialog="state.dialog" :formTitle="state.formTitle" @save="saveClient" @error="propagateError" @update:dialog="updateDialog"></ClientModal>
    <v-alert type="warning" variant="tonal" v-if="state.clients.length === 0">You have no LLM clients configured. Add one.</v-alert>
    <v-btn @click="openModal" prepend-icon="mdi-plus-box">Add client</v-btn>
    </div>

@@ -81,6 +87,9 @@ export default {
                apiUrl: '',
                model_name: '',
                max_token_length: 2048,
+                data: {
+                    has_prompt_template: false,
+                }
            }, // Add a new field to store the model name
            formTitle: ''
        }

@@ -90,7 +99,6 @@ export default {
        'getWebsocket',
        'registerMessageHandler',
        'isConnected',
-        'chekcingStatus',
        'getAgents',
    ],
    provide() {

@@ -123,10 +131,16 @@ export default {
                apiUrl: 'http://localhost:5000',
                model_name: '',
                max_token_length: 4096,
+                data: {
+                    has_prompt_template: false,
+                }
            };
            this.state.formTitle = 'Add Client';
            this.state.dialog = true;
        },
+        propagateError(error) {
+            this.$emit('error', error);
+        },
        saveClient(client) {
            const index = this.state.clients.findIndex(c => c.name === client.name);
            if (index === -1) {

@@ -153,10 +167,13 @@ export default {
            let agents = this.getAgents();
            let client = this.state.clients[index];

+            this.saveClient(client);
+
            for (let i = 0; i < agents.length; i++) {
                agents[i].client = client.name;
-                this.$emit('client-assigned', agents);
                console.log("Assigning client", client.name, "to agent", agents[i].name);
            }
+            this.$emit('client-assigned', agents);
        },
        updateDialog(newVal) {
            this.state.dialog = newVal;

@@ -175,6 +192,7 @@ export default {
                client.status = data.status;
                client.max_token_length = data.max_token_length;
                client.apiUrl = data.apiUrl;
+                client.data = data.data;
            } else {
                console.log("Adding new client", data);
                this.state.clients.push({

@@ -184,6 +202,7 @@ export default {
                    status: data.status,
                    max_token_length: data.max_token_length,
                    apiUrl: data.apiUrl,
+                    data: data.data,
                });
                // sort the clients by name
                this.state.clients.sort((a, b) => (a.name > b.name) ? 1 : -1);
@@ -1,73 +1,209 @@
<template>
    <v-dialog v-model="dialog" scrollable max-width="50%">
    <v-dialog v-model="dialog" scrollable max-width="960px">
        <v-card v-if="app_config !== null">

            <v-card-title><v-icon class="mr-1">mdi-cog</v-icon>Settings</v-card-title>
            <v-tabs color="primary" v-model="tab">
                <v-tab value="game">
                    <v-icon start>mdi-gamepad-square</v-icon>
                    Game
                </v-tab>
                <v-tab value="application">
                    <v-icon start>mdi-application</v-icon>
                    Application
                </v-tab>
                <v-tab value="creator">
                    <v-icon start>mdi-palette-outline</v-icon>
                    Creator
                </v-tab>
            </v-tabs>
            <v-window v-model="tab">

                <!-- GAME -->

                <v-window-item value="game">
                    <v-card flat>
                        <v-card-title>
                            Default player character
                            <v-tooltip location="top" max-width="500" text="This will be default player character that will be added to a game if the game does not come with a defined player character. Essentially this is relevant for when you load character-cards that aren't in the talemate scene format.">
                                <template v-slot:activator="{ props }">
                                    <v-icon size="x-small" v-bind="props" v-on="on">mdi-help</v-icon>
                                </template>
                            </v-tooltip>
                        </v-card-title>
                        <v-card-text>
                            <v-row>
                                <v-col cols="6">
                                    <v-text-field v-model="app_config.game.default_player_character.name" label="Name"></v-text-field>
                                <v-col cols="4">
                                    <v-list>
                                        <v-list-item @click="gamePageSelected=item.value" :prepend-icon="item.icon" v-for="(item, index) in navigation.game" :key="index">
                                            <v-list-item-title>{{ item.title }}</v-list-item-title>
                                        </v-list-item>
                                    </v-list>
                                </v-col>
                                <v-col cols="6">
                                    <v-text-field v-model="app_config.game.default_player_character.gender" label="Gender"></v-text-field>
                                </v-col>
                            </v-row>
                            <v-row>
                                <v-col>
                                    <v-textarea v-model="app_config.game.default_player_character.description" auto-grow label="Description"></v-textarea>
                                </v-col>
                                <v-col>
                                    <v-color-picker v-model="app_config.game.default_player_character.color" hide-inputs label="Color" elevation="0"></v-color-picker>
                                <v-col cols="8">
                                    <div v-if="gamePageSelected === 'character'">
                                        <v-alert color="white" variant="text" icon="mdi-human-edit" density="compact">
                                            <v-alert-title>Default player character</v-alert-title>
                                            <div class="text-grey">
                                                This will be the default player character that will be added to a game if the game does not come with a defined player character. Essentially this is relevant for when you load character-cards that aren't in the talemate scene format.

                                            </div>
                                        </v-alert>
                                        <v-divider class="mb-2"></v-divider>
                                        <v-row>
                                            <v-col cols="6">
                                                <v-text-field v-model="app_config.game.default_player_character.name"
                                                    label="Name"></v-text-field>
                                            </v-col>
                                            <v-col cols="6">
                                                <v-text-field v-model="app_config.game.default_player_character.gender"
                                                    label="Gender"></v-text-field>
                                            </v-col>
                                        </v-row>
                                        <v-row>
                                            <v-col cols="12">
                                                <v-textarea v-model="app_config.game.default_player_character.description"
                                                    auto-grow label="Description"></v-textarea>
                                            </v-col>
                                        </v-row>
                                    </div>
                                </v-col>

                            </v-row>
                        </v-card-text>
                    </v-card>
                </v-window-item>

                <!-- APPLICATION -->

                <v-window-item value="application">
                    <v-card flat>
                        <v-card-text>
                            <v-row>
                                <v-col cols="4">
                                    <v-list>
                                        <v-list-subheader>Third Party APIs</v-list-subheader>
                                        <v-list-item @click="applicationPageSelected=item.value" :prepend-icon="item.icon" v-for="(item, index) in navigation.application" :key="index">
                                            <v-list-item-title>{{ item.title }}</v-list-item-title>
                                        </v-list-item>
                                    </v-list>
                                </v-col>
                                <v-col cols="8">

                                    <!-- OPENAI API -->
                                    <div v-if="applicationPageSelected === 'openai_api'">
                                        <v-alert color="white" variant="text" icon="mdi-api" density="compact">
                                            <v-alert-title>OpenAI</v-alert-title>
                                            <div class="text-grey">
                                                Configure your OpenAI API key here. You can get one from <a href="https://platform.openai.com/" target="_blank">https://platform.openai.com/</a>
                                            </div>
                                        </v-alert>
                                        <v-divider class="mb-2"></v-divider>
                                        <v-row>
                                            <v-col cols="12">
                                                <v-text-field type="password" v-model="app_config.openai.api_key"
                                                    label="OpenAI API Key"></v-text-field>
                                            </v-col>
                                        </v-row>
                                    </div>

                                    <!-- ELEVENLABS API -->
                                    <div v-if="applicationPageSelected === 'elevenlabs_api'">
                                        <v-alert color="white" variant="text" icon="mdi-api" density="compact">
                                            <v-alert-title>ElevenLabs</v-alert-title>
                                            <div class="text-grey">
                                                <p class="mb-1">Generate realistic speech with the most advanced AI voice model ever.</p>
                                                Configure your ElevenLabs API key here. You can get one from <a href="https://elevenlabs.io/?from=partnerewing2048" target="_blank">https://elevenlabs.io</a> <span class="text-caption">(affiliate link)</span>
                                            </div>
                                        </v-alert>
                                        <v-divider class="mb-2"></v-divider>
                                        <v-row>
                                            <v-col cols="12">
                                                <v-text-field type="password" v-model="app_config.elevenlabs.api_key"
                                                    label="ElevenLabs API Key"></v-text-field>
                                            </v-col>
                                        </v-row>
                                    </div>

                                    <!-- COQUI API -->
                                    <div v-if="applicationPageSelected === 'coqui_api'">
                                        <v-alert color="white" variant="text" icon="mdi-api" density="compact">
                                            <v-alert-title>Coqui Studio</v-alert-title>
                                            <div class="text-grey">
                                                <p class="mb-1">Realistic, emotive text-to-speech through generative AI.</p>
                                                Configure your Coqui API key here. You can get one from <a href="https://app.coqui.ai/account" target="_blank">https://app.coqui.ai/account</a>
                                            </div>
                                        </v-alert>
                                        <v-divider class="mb-2"></v-divider>
                                        <v-row>
                                            <v-col cols="12">
                                                <v-text-field type="password" v-model="app_config.coqui.api_key"
                                                    label="Coqui API Key"></v-text-field>
                                            </v-col>
                                        </v-row>
                                    </div>

                                    <!-- RUNPOD API -->
                                    <div v-if="applicationPageSelected === 'runpod_api'">
                                        <v-alert color="white" variant="text" icon="mdi-api" density="compact">
                                            <v-alert-title>RunPod</v-alert-title>
                                            <div class="text-grey">
                                                <p class="mb-1">Launch a GPU instance in seconds.</p>
                                                Configure your RunPod API key here. You can get one from <a href="https://runpod.io?ref=gma8kdu0" target="_blank">https://runpod.io/</a> <span class="text-caption">(affiliate link)</span>
                                            </div>
                                        </v-alert>
                                        <v-divider class="mb-2"></v-divider>
                                        <v-row>
                                            <v-col cols="12">
                                                <v-text-field type="password" v-model="app_config.runpod.api_key"
                                                    label="RunPod API Key"></v-text-field>
                                            </v-col>
                                        </v-row>
                                    </div>

                                </v-col>
                            </v-row>
                        </v-card-text>
                    </v-card>
                </v-window-item>

                <!-- CREATOR -->

                <v-window-item value="creator">
                    <v-card flat>
                        <v-card-title>
                            Content context
                            <v-tooltip location="top" max-width="500" text="Available choices when generating characters or scenarios within talemate.">
                                <template v-slot:activator="{ props }">
                                    <v-icon size="x-small" v-bind="props" v-on="on">mdi-help</v-icon>
                                </template>
                            </v-tooltip>
                        </v-card-title>
                        <v-card-text style="max-height:600px; overflow-y:scroll;">
                            <v-list density="compact">
                                <v-list-item v-for="(value, index) in app_config.creator.content_context" :key="index">
                                    <v-list-item-title><v-icon color="red">mdi-delete</v-icon>{{ value }}</v-list-item-title>
                                </v-list-item>
                            </v-list>
                            <v-text-field v-model="content_context_input" label="Add content context" @keyup.enter="app_config.creator.content_context.push(content_context_input); app_config.creator.content_context_input = ''"></v-text-field>
                        <v-card-text>
                            <v-row>
                                <v-col cols="4">
                                    <v-list>
                                        <v-list-item @click="creatorPageSelected=item.value" :prepend-icon="item.icon" v-for="(item, index) in navigation.creator" :key="index">
                                            <v-list-item-title>{{ item.title }}</v-list-item-title>
                                        </v-list-item>
                                    </v-list>
                                </v-col>
                                <v-col cols="8">
                                    <div v-if="creatorPageSelected === 'content_context'">
                                        <!-- Content for Content context will go here -->
                                        <v-alert color="white" variant="text" icon="mdi-cube-scan" density="compact">
                                            <v-alert-title>Content context</v-alert-title>
                                            <div class="text-grey">
                                                Available content-context choices when generating characters or scenarios. This can strongly influence the content that is generated.
                                            </div>
                                        </v-alert>
                                        <v-divider class="mb-2"></v-divider>
                                        <v-row>
                                            <v-col cols="12">
                                                <v-list density="compact">
                                                    <v-list-item v-for="(value, index) in app_config.creator.content_context" :key="index">
                                                        <v-list-item-title><v-icon color="red" class="mr-2" @click="contentContextRemove(index)">mdi-delete</v-icon>{{ value }}</v-list-item-title>
                                                    </v-list-item>
                                                </v-list>
                                                <v-divider></v-divider>
                                                <v-text-field v-model="content_context_input" label="Add content context (Press enter to add)"
                                                    @keyup.enter="app_config.creator.content_context.push(content_context_input); content_context_input = ''"></v-text-field>
                                            </v-col>
                                        </v-row>


                                    </div>
                                </v-col>
                            </v-row>
                        </v-card-text>
                    </v-card>
                </v-window-item>
            </v-window>
            <v-card-actions>
                <v-btn color="primary" text @click="saveConfig">Save</v-btn>
                <v-spacer></v-spacer>
                <v-btn color="primary" text @click="saveConfig" prepend-icon="mdi-check-circle-outline">Save</v-btn>
            </v-card-actions>
        </v-card>
        <v-card v-else>
@@ -78,7 +214,7 @@
            <v-progress-circular indeterminate color="primary" size="20"></v-progress-circular>
        </v-card-text>
        </v-card>
    </v-dialog>
    </v-dialog>
</template>
<script>

@@ -90,6 +226,23 @@ export default {
            dialog: false,
            app_config: null,
            content_context_input: '',
            navigation: {
                game: [
                    {title: 'Default Character', icon: 'mdi-human-edit', value: 'character'},
                ],
                application: [
                    {title: 'OpenAI', icon: 'mdi-api', value: 'openai_api'},
                    {title: 'ElevenLabs', icon: 'mdi-api', value: 'elevenlabs_api'},
                    {title: 'Coqui Studio', icon: 'mdi-api', value: 'coqui_api'},
                    {title: 'RunPod', icon: 'mdi-api', value: 'runpod_api'},
                ],
                creator: [
                    {title: 'Content Context', icon: 'mdi-cube-scan', value: 'content_context'},
                ]
            },
            gamePageSelected: 'character',
            applicationPageSelected: 'openai_api',
            creatorPageSelected: 'content_context',
        }
    },
    inject: ['getWebsocket', 'registerMessageHandler', 'setWaitingForInput', 'requestSceneAssets', 'requestAppConfig'],
@@ -104,6 +257,10 @@ export default {
            this.dialog = false
        },

        contentContextRemove(index) {
            this.app_config.creator.content_context.splice(index, 1);
        },

        handleMessage(message) {
            if (message.type == "app_config") {
                this.app_config = message.data;
@@ -111,7 +268,7 @@ export default {
            }

            if (message.type == 'config') {
                if(message.action == 'save_complete') {
                if (message.action == 'save_complete') {
                    this.exit();
                }
            }
@@ -138,5 +295,4 @@ export default {

</script>

<style scoped>
</style>
<style scoped></style>

@@ -44,10 +44,10 @@
    <v-window-item value="details">
        <v-card-text style="max-height:600px; overflow-y:scroll;">
            <v-list-item v-for="(value, key) in base_attributes" :key="key">
                <v-list-item-content>
                <div>
                    <v-list-item-title>{{ key }}</v-list-item-title>
                    <v-list-item-subtitle>{{ value }}</v-list-item-subtitle>
                </v-list-item-content>
                </div>
            </v-list-item>
        </v-card-text>
    </v-window-item>

@@ -2,36 +2,48 @@
    <v-dialog v-model="localDialog" persistent max-width="600px">
        <v-card>
            <v-card-title>
                <span class="headline">{{ formTitle }}</span>
                <v-icon>mdi-network-outline</v-icon>
                <span class="headline">{{ title() }}</span>
            </v-card-title>
            <v-card-text>
                <v-container>
                    <v-row>
                        <v-col cols="6">
                            <v-select v-model="client.type" :disabled="!typeEditable()" :items="['openai', 'textgenwebui', 'lmstudio']" label="Client Type"></v-select>
                        </v-col>
                        <v-col cols="6">
                            <v-text-field v-model="client.name" label="Client Name"></v-text-field>
                        </v-col>
                    <v-row>
                        <v-col cols="6">
                            <v-select v-model="client.type" :disabled="!typeEditable()" :items="['openai', 'textgenwebui', 'lmstudio']" label="Client Type" @update:model-value="resetToDefaults"></v-select>
                        </v-col>
                        <v-col cols="6">
                            <v-text-field v-model="client.name" label="Client Name"></v-text-field>
                        </v-col>

                    </v-row>
                    <v-row>
                        <v-col cols="12">
                            <v-text-field v-model="client.apiUrl" v-if="isLocalApiClient(client)" label="API URL"></v-text-field>
                            <v-select v-model="client.model" v-if="client.type === 'openai'" :items="['gpt-4-1106-preview', 'gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k']" label="Model"></v-select>
                        </v-col>
                    </v-row>
                    <v-row>
                        <v-col cols="6">
                            <v-text-field v-model="client.max_token_length" v-if="isLocalApiClient(client)" type="number" label="Context Length"></v-text-field>
                        </v-col>
                    </v-row>
                    </v-row>
                    <v-row>
                        <v-col cols="12">
                            <v-text-field v-model="client.apiUrl" v-if="isLocalApiClient(client)" label="API URL"></v-text-field>
                            <v-select v-model="client.model" v-if="client.type === 'openai'" :items="['gpt-4-1106-preview', 'gpt-4', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k']" label="Model"></v-select>
                        </v-col>
                    </v-row>
                    <v-row>
                        <v-col cols="4">
                            <v-text-field v-model="client.max_token_length" v-if="isLocalApiClient(client)" type="number" label="Context Length"></v-text-field>
                        </v-col>
                        <v-col cols="8" v-if="!typeEditable() && client.data && client.data.prompt_template_example !== null">
                            <v-card elevation="3" :color="(client.data.has_prompt_template ? 'primary' : 'warning')" variant="tonal">
                                <v-card-title>Prompt Template</v-card-title>

                                <v-card-text>
                                    <div class="text-caption" v-if="!client.data.has_prompt_template">No matching LLM prompt template found. Using default.</div>
                                    <pre>{{ client.data.prompt_template_example }}</pre>
                                </v-card-text>
                            </v-card>

                        </v-col>
                    </v-row>
                </v-container>
            </v-card-text>
            <v-card-actions>
                <v-spacer></v-spacer>
                <v-btn color="blue darken-1" text @click="close">Close</v-btn>
                <v-btn color="blue darken-1" text @click="save">Save</v-btn>
                <v-btn color="primary" text @click="close" prepend-icon="mdi-cancel">Cancel</v-btn>
                <v-btn color="primary" text @click="save" prepend-icon="mdi-check-circle-outline">Save</v-btn>
            </v-card-actions>
        </v-card>
    </v-dialog>
@@ -47,7 +59,25 @@ export default {
    data() {
        return {
            localDialog: this.state.dialog,
            client: { ...this.state.currentClient } // Define client data property
            client: { ...this.state.currentClient },
            defaultValuesByClientType: {
                // when client type is changed in the modal, these values will be used
                // to populate the form
                'textgenwebui': {
                    apiUrl: 'http://localhost:5000',
                    max_token_length: 4096,
                    name_prefix: 'TextGenWebUI',
                },
                'openai': {
                    model: 'gpt-4-1106-preview',
                    name_prefix: 'OpenAI',
                },
                'lmstudio': {
                    apiUrl: 'http://localhost:1234',
                    max_token_length: 4096,
                    name_prefix: 'LMStudio',
                }
            }
        };
    },
    watch: {
@@ -68,13 +98,48 @@ export default {
        }
    },
    methods: {
        resetToDefaults() {
            const defaults = this.defaultValuesByClientType[this.client.type];
            if (defaults) {
                this.client.model = defaults.model || '';
                this.client.apiUrl = defaults.apiUrl || '';
                this.client.max_token_length = defaults.max_token_length || 4096;
                // loop and build name from prefix, checking against current clients
                let name = defaults.name_prefix;
                let i = 2;
                while (this.state.clients.find(c => c.name === name)) {
                    name = `${defaults.name_prefix} ${i}`;
                    i++;
                }
                this.client.name = name;
                this.client.data = {};
            }
        },
        validateName() {

            // if we are editing a client, we should exclude the current client from the check
            if(!this.typeEditable()) {
                return this.state.clients.findIndex(c => c.name === this.client.name && c.name !== this.state.currentClient.name) === -1;
            }

            return this.state.clients.findIndex(c => c.name === this.client.name) === -1;
        },
        typeEditable() {
            return this.state.formTitle === 'Add Client';
        },
        title() {
            return this.typeEditable() ? 'Add Client' : 'Edit Client';
        },
        close() {
            this.$emit('update:dialog', false);
        },
        save() {

            if(!this.validateName()) {
                this.$emit('error', 'Client name already exists');
                return;
            }

            this.$emit('save', this.client); // Emit save event with client object
            this.close();
        },

talemate_frontend/src/components/DefaultCharacter.vue (new file, 92 lines)
@@ -0,0 +1,92 @@
<template>
    <v-dialog v-model="showModal" max-width="800px">
        <v-card>
            <v-card-title class="headline">Your Character</v-card-title>
            <v-card-text>
                <v-alert type="info" variant="tonal" v-if="defaultCharacter.name === ''" density="compact">You have not yet
                    configured a default player character. This character will be used when a scenario is loaded that does not come
                    with a pre-defined player character.</v-alert>
                <v-container>
                    <v-row>
                        <v-col cols="12" sm="6">
                            <v-text-field v-model="defaultCharacter.name" label="Name" :rules="[rules.required]"></v-text-field>
                        </v-col>
                        <v-col cols="12" sm="6">
                            <v-text-field v-model="defaultCharacter.gender" label="Gender" :rules="[rules.required]"></v-text-field>
                        </v-col>
                    </v-row>
                    <v-row>
                        <v-col cols="12">
                            <v-textarea v-model="defaultCharacter.description" label="Description" auto-grow></v-textarea>
                        </v-col>
                    </v-row>
                </v-container>
            </v-card-text>
            <v-card-actions>
                <v-spacer></v-spacer>
                <v-btn color="primary" v-if="!saving" text @click="cancel" prepend-icon="mdi-cancel">Cancel</v-btn>
                <v-progress-circular v-else indeterminate color="primary" size="20"></v-progress-circular>
                <v-btn color="primary" text :disabled="saving" @click="saveDefaultCharacter" prepend-icon="mdi-check-circle-outline">Continue</v-btn>
            </v-card-actions>
        </v-card>
    </v-dialog>
</template>

<script>
export default {
    name: 'DefaultCharacter',
    inject: ['getWebsocket', 'registerMessageHandler'],
    data() {
        return {
            showModal: false,
            saving: false,
            defaultCharacter: {
                name: '',
                gender: '',
                description: '',
                color: '#3362bb'
            },
            rules: {
                required: value => !!value || 'Required.'
            }
        };
    },
    methods: {
        saveDefaultCharacter() {
            // Send the new default character data to the server
            this.saving = true;
            this.getWebsocket().send(JSON.stringify({
                type: 'config',
                action: 'save_default_character',
                data: this.defaultCharacter
            }));
        },
        cancel() {
            this.$emit("cancel");
            this.closeModal();
        },
        open() {
            this.saving = false;
            this.showModal = true;
        },
        closeModal() {
            this.showModal = false;
        },
        handleMessage(message) {
            if (message.type == 'config') {
                if (message.action == 'save_default_character_complete') {
                    this.closeModal();
                    this.$emit("save");
                }
            }
        },
    },
    created() {
        this.registerMessageHandler(this.handleMessage);
    },
};
</script>

<style scoped>
/* Add any specific styles for your DefaultCharacter modal here */
</style>
@@ -1,12 +1,15 @@
<template>
    <v-list-subheader @click="toggle()" class="text-uppercase"><v-icon>mdi-script-text-outline</v-icon> Load
    <v-list-subheader v-if="appConfig !== null" @click="toggle()" class="text-uppercase"><v-icon>mdi-script-text-outline</v-icon> Load
        <v-progress-circular v-if="loading" indeterminate color="primary" size="20"></v-progress-circular>
        <v-icon v-if="expanded" icon="mdi-chevron-down"></v-icon>
        <v-icon v-else icon="mdi-chevron-up"></v-icon>
    </v-list-subheader>
    <v-list-item-group v-if="!loading && isConnected() && expanded && !configurationRequired()">
    <v-list-subheader class="text-uppercase" v-else>
        <v-progress-circular indeterminate color="primary" size="20"></v-progress-circular> Waiting for config...
    </v-list-subheader>
    <div v-if="!loading && isConnected() && expanded && !configurationRequired() && appConfig !== null">
        <v-list-item>
            <v-list-item-content class="mb-3">
            <div class="mb-3">
                <!-- Toggle buttons for switching between file upload and path input -->
                <v-btn-toggle density="compact" class="mb-3" v-model="inputMethod" mandatory>
                    <v-btn value="file">
@@ -37,18 +40,24 @@
                    <v-icon left>mdi-palette-outline</v-icon>
                    Creative Mode
                </v-btn>
            </v-list-item-content>
            </div>
        </v-list-item>
    </v-list-item-group>
    </div>
    <div v-else-if="configurationRequired()">
        <v-alert type="warning" variant="tonal">You need to configure a Talemate client before you can load scenes.</v-alert>
    </div>
    <DefaultCharacter ref="defaultCharacterModal" @save="loadScene" @cancel="loadCanceled"></DefaultCharacter>
</template>


<script>
import DefaultCharacter from './DefaultCharacter.vue';

export default {
    name: 'LoadScene',
    components: {
        DefaultCharacter,
    },
    data() {
        return {
            loading: false,
@@ -60,10 +69,19 @@ export default {
            sceneSearchLoading: false,
            sceneSaved: null,
            expanded: true,
            appConfig: null, // Store the app configuration
        }
    },
    emits: {
        loading: null,
    },
    inject: ['getWebsocket', 'registerMessageHandler', 'isConnected', 'configurationRequired'],
    methods: {
        // Method to show the DefaultCharacter modal
        showDefaultCharacterModal() {
            this.$refs.defaultCharacterModal.open();
        },

        toggle() {
            this.expanded = !this.expanded;
        },
@@ -89,6 +107,12 @@ export default {
            this.loading = true;
            this.getWebsocket().send(JSON.stringify({ type: 'load_scene', file_path: "environment:creative" }));
        },
        loadCanceled() {
            console.log("Load canceled");
            this.loading = false;
            this.sceneFile = [];
        },

        loadScene() {

            if(this.sceneSaved === false) {
@@ -97,13 +121,26 @@ export default {
                }
            }

            this.loading = true;
            this.sceneSaved = null;

            if (this.inputMethod === 'file' && this.sceneFile.length > 0) { // Check if the input method is "file" and there is at least one file

                // if file is image check if default character is set
                if(this.sceneFile[0].type.startsWith("image/")) {
                    if(!this.appConfig.game.default_player_character.name) {
                        this.showDefaultCharacterModal();
                        return;
                    }
                }

                this.loading = true;

                // Convert the uploaded file to base64
                const reader = new FileReader();
                reader.readAsDataURL(this.sceneFile[0]); // Access the first file in the array
                reader.onload = () => {
                    //const base64File = reader.result.split(',')[1];
                    this.$emit("loading", true)
                    this.getWebsocket().send(JSON.stringify({
                        type: 'load_scene',
                        scene_data: reader.result,
@@ -112,11 +149,28 @@ export default {
                    this.sceneFile = [];
                };
            } else if (this.inputMethod === 'path' && this.sceneInput) { // Check if the input method is "path" and the scene input is not empty

                // if path ends with .png/jpg/webp check if default character is set

                if(this.sceneInput.endsWith(".png") || this.sceneInput.endsWith(".jpg") || this.sceneInput.endsWith(".webp")) {
                    if(!this.appConfig.game.default_player_character.name) {
                        this.showDefaultCharacterModal();
                        return;
                    }
                }

                this.loading = true;
                this.$emit("loading", true)
                this.getWebsocket().send(JSON.stringify({ type: 'load_scene', file_path: this.sceneInput }));
                this.sceneInput = '';
            }
        },
        handleMessage(data) {
            // Handle app configuration
            if (data.type === 'app_config') {
                this.appConfig = data.data;
                console.log("App config", this.appConfig);
            }

            // Scene loaded
            if (data.type === "system") {
@@ -139,10 +193,11 @@ export default {
                return;
            }

        }
        },
    },
    created() {
        this.registerMessageHandler(this.handleMessage);
        //this.getWebsocket().send(JSON.stringify({ type: 'request_config' })); // Request the current app configuration
    },
    mounted() {
        console.log("Websocket", this.getWebsocket()); // Check if websocket is available

@@ -18,7 +18,7 @@
    <v-tooltip v-if="isEnvironment('scene')" :disabled="isInputDisabled()" location="top"
        text="Redo most recent AI message">
        <template v-slot:activator="{ props }">
            <v-btn class="hotkey" v-bind="props" v-on="on" :disabled="isInputDisabled()"
            <v-btn class="hotkey" v-bind="props" :disabled="isInputDisabled()"
                @click="sendHotButtonMessage('!rerun')" color="primary" icon>
                <v-icon>mdi-refresh</v-icon>
            </v-btn>
@@ -28,7 +28,7 @@
    <v-tooltip v-if="isEnvironment('scene')" :disabled="isInputDisabled()" location="top"
        text="Redo most recent AI message (Nuke Option - use this to attempt to break out of repetition)">
        <template v-slot:activator="{ props }">
            <v-btn class="hotkey" v-bind="props" v-on="on" :disabled="isInputDisabled()"
            <v-btn class="hotkey" v-bind="props" :disabled="isInputDisabled()"
                @click="sendHotButtonMessage('!rerun:0.5')" color="primary" icon>
                <v-icon>mdi-nuke</v-icon>
            </v-btn>
@@ -39,7 +39,7 @@
    <v-tooltip v-if="commandActive" location="top"
        text="Abort / end action.">
        <template v-slot:activator="{ props }">
            <v-btn class="hotkey mr-3" v-bind="props" v-on="on" :disabled="!isWaitingForInput()"
            <v-btn class="hotkey mr-3" v-bind="props" :disabled="!isWaitingForInput()"
                @click="sendHotButtonMessage('!abort')" color="primary" icon>
                <v-icon>mdi-cancel</v-icon>

@@ -56,7 +56,7 @@
    <v-card-actions>
        <v-tooltip :disabled="isInputDisabled()" location="top" text="Narrate: Progress Story">
            <template v-slot:activator="{ props }">
                <v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
                <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
                    @click="sendHotButtonMessage('!narrate_progress')" color="primary" icon>
                    <v-icon>mdi-script-text-play</v-icon>
                </v-btn>
@@ -64,7 +64,7 @@
        </v-tooltip>
        <v-tooltip :disabled="isInputDisabled()" location="top" text="Narrate: Scene">
            <template v-slot:activator="{ props }">
                <v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
                <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
                    @click="sendHotButtonMessage('!narrate')" color="primary" icon>
                    <v-icon>mdi-script-text</v-icon>
                </v-btn>
@@ -72,7 +72,7 @@
        </v-tooltip>
        <v-tooltip :disabled="isInputDisabled()" location="top" text="Narrate: Character">
            <template v-slot:activator="{ props }">
                <v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
                <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
                    @click="sendHotButtonMessage('!narrate_c')" color="primary" icon>
                    <v-icon>mdi-account-voice</v-icon>
                </v-btn>
@@ -80,7 +80,7 @@
        </v-tooltip>
        <v-tooltip :disabled="isInputDisabled()" location="top" text="Narrate: Query">
            <template v-slot:activator="{ props }">
                <v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
                <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
                    @click="sendHotButtonMessage('!narrate_q')" color="primary" icon>
                    <v-icon>mdi-crystal-ball</v-icon>
                </v-btn>
@@ -103,7 +103,7 @@
        <v-divider vertical></v-divider>
        <v-tooltip :disabled="isInputDisabled()" location="top" text="Direct a character">
            <template v-slot:activator="{ props }">
                <v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
                <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
                    @click="sendHotButtonMessage('!director')" color="primary" icon>
                    <v-icon>mdi-bullhorn</v-icon>
                </v-btn>
@@ -118,7 +118,7 @@

        <v-tooltip :disabled="isInputDisabled()" location="top" text="Save">
            <template v-slot:activator="{ props }">
                <v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
                <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
                    @click="sendHotButtonMessage('!save')" color="primary" icon>
                    <v-icon>mdi-content-save</v-icon>
                </v-btn>
@@ -127,7 +127,7 @@

        <v-tooltip :disabled="isInputDisabled()" location="top" text="Save As">
            <template v-slot:activator="{ props }">
                <v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
                <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
                    @click="sendHotButtonMessage('!save_as')" color="primary" icon>
                    <v-icon>mdi-content-save-all</v-icon>
                </v-btn>
@@ -136,7 +136,7 @@

        <v-tooltip v-if="isEnvironment('scene')" :disabled="isInputDisabled()" location="top" text="Switch to creative mode">
            <template v-slot:activator="{ props }">
                <v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
                <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
                    @click="sendHotButtonMessage('!setenv_creative')" color="primary" icon>
                    <v-icon>mdi-palette-outline</v-icon>
                </v-btn>
@@ -145,7 +145,7 @@

        <v-tooltip v-else-if="isEnvironment('creative')" :disabled="isInputDisabled()" location="top" text="Switch to game mode">
            <template v-slot:activator="{ props }">
                <v-btn class="hotkey mx-3" v-bind="props" v-on="on" :disabled="isInputDisabled()"
                <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled()"
                    @click="sendHotButtonMessage('!setenv_scene')" color="primary" icon>
                    <v-icon>mdi-gamepad-square</v-icon>
                </v-btn>

@@ -11,7 +11,7 @@
        Make sure the backend process is running.
    </p>
</v-alert>
<LoadScene ref="loadScene" />
<LoadScene ref="loadScene" @loading="sceneStartedLoading" />
<v-divider></v-divider>
<div :style="(sceneActive && scene.environment === 'scene' ? 'display:block' : 'display:none')">
    <!-- <GameOptions v-if="sceneActive" ref="gameOptions" /> -->
@@ -37,18 +37,14 @@
<v-list>
    <v-list-subheader class="text-uppercase"><v-icon>mdi-network-outline</v-icon>
        Clients</v-list-subheader>
    <v-list-item-group>
        <v-list-item>
            <AIClient ref="aiClient" @save="saveClients" @clients-updated="saveClients" @client-assigned="saveAgents"></AIClient>
        </v-list-item>
    </v-list-item-group>
    <v-list-item>
        <AIClient ref="aiClient" @save="saveClients" @error="uxErrorHandler" @clients-updated="saveClients" @client-assigned="saveAgents"></AIClient>
    </v-list-item>
    <v-divider></v-divider>
    <v-list-subheader class="text-uppercase"><v-icon>mdi-transit-connection-variant</v-icon> Agents</v-list-subheader>
    <v-list-item-group>
        <v-list-item>
            <AIAgent ref="aiAgent" @save="saveAgents" @agents-updated="saveAgents"></AIAgent>
        </v-list-item>
    </v-list-item-group>
    <v-list-item>
        <AIAgent ref="aiAgent" @save="saveAgents" @agents-updated="saveAgents"></AIAgent>
    </v-list-item>
    <!-- More sections can be added here -->
</v-list>
</v-navigation-drawer>
@@ -97,13 +93,12 @@
<v-chip size="x-small" v-else-if="scene.environment === 'scene'" class="ml-1"><v-icon text="Play" size="14"
    class="mr-1">mdi-gamepad-square</v-icon>Game Mode</v-chip>

<v-btn v-if="scene.environment === 'scene'" class="ml-1" @click="openSceneHistory()"><v-icon size="14"
    class="mr-1">mdi-playlist-star</v-icon>History</v-btn>

<v-chip size="x-small" v-if="scene.scene_time !== undefined">
    <v-icon>mdi-clock</v-icon>
    {{ scene.scene_time }}
</v-chip>
<v-tooltip :text="scene.scene_time" v-if="scene.scene_time !== undefined">
    <template v-slot:activator="{ props }">
        <v-btn v-bind="props" v-if="scene.environment === 'scene'" class="ml-1" @click="openSceneHistory()"><v-icon size="14"
            class="mr-1">mdi-clock</v-icon>History</v-btn>
    </template>
</v-tooltip>

</v-toolbar-title>
<v-toolbar-title v-else>
@@ -429,7 +424,7 @@ export default {
    let agent = this.$refs.aiAgent.getActive();

    if (agent) {
        return agent.name;
        return agent.label;
    }
    return null;
},
@@ -448,6 +443,14 @@ export default {
    openAppConfig() {
        this.$refs.appConfig.show();
    },
    uxErrorHandler(error) {
        this.errorNotification = true;
        this.errorMessage = error;
    },
    sceneStartedLoading() {
        this.loading = true;
        this.sceneActive = false;
    }
}
}
</script>

@@ -2,10 +2,8 @@
<v-list-subheader class="text-uppercase">
    <v-icon class="mr-1">mdi-earth</v-icon>World
    <v-progress-circular class="ml-1 mr-3" size="14" v-if="requesting" indeterminate color="primary"></v-progress-circular>
    <v-btn v-else size="x-small" class="mr-1" v-bind="props" variant="tonal" density="comfortable" rounded="sm" @click.stop="refresh()" icon="mdi-refresh"></v-btn>

    <v-btn v-else :disabled="isInputDisabled()" size="x-small" class="mr-1" variant="tonal" density="comfortable" rounded="sm" @click.stop="refresh()" icon="mdi-refresh"></v-btn>
</v-list-subheader>

<div ref="charactersContainer">

    <v-expansion-panels density="compact" v-for="(character,name) in characters" :key="name">
@@ -85,6 +83,7 @@ export default {
        items: {},
        location: null,
        requesting: false,
        sceneTime: null,
    }
},

@@ -94,6 +93,7 @@ export default {
    'setWaitingForInput',
    'openCharacterSheet',
    'characterSheet',
    'isInputDisabled',
],

methods: {
@@ -127,6 +127,8 @@ export default {
        this.items = data.data.items;
        this.location = data.data.location;
        this.requesting = (data.status==="requested")
    } else if (data.type == "scene_status") {
        this.sceneTime = data.data.scene_time;
    }
},
},

templates/llm-prompt/Capybara-Tess-Yi.jinja2 (new file, 2 lines)
@@ -0,0 +1,2 @@
SYSTEM: {{ system_message }}
USER: {{ set_response(prompt, "\nASSISTANT: ") }}

templates/llm-prompt/Noromaid.jinja2 (new file, 4 lines)
@@ -0,0 +1,4 @@
{{ system_message }}

### Instruction:
{{ set_response(prompt, "\n\n### Response:\n") }}

templates/llm-prompt/OpenHermes-2.5-neural-chat.jinja2 (new file, 4 lines)
@@ -0,0 +1,4 @@
<|im_start|>system
{{ system_message }}<|im_end|>
<|im_start|>user
{{ set_response(prompt, "<|im_end|>\n<|im_start|>assistant\n") }}

templates/llm-prompt/RpBird-Yi-34B.jinja2 (new file, 2 lines)
@@ -0,0 +1,2 @@
SYSTEM: {{ system_message }}
USER: {{ set_response(prompt, "\nASSISTANT: ") }}

templates/llm-prompt/Tess.jinja2 (new file, 2 lines)
@@ -0,0 +1,2 @@
SYSTEM: {{ system_message }}
USER: {{ set_response(prompt, "\nASSISTANT: ") }}

templates/llm-prompt/chronomaid-storytelling.jinja2 (new file, 4 lines)
@@ -0,0 +1,4 @@
{{ system_message }}

### Instruction:
{{ set_response(prompt, "\n\n### Response:\n") }}

templates/llm-prompt/deepseek.jinja2 (new file, 1 line)
@@ -0,0 +1 @@
User: {{ system_message }} {{ set_response(prompt, "\nAssistant: ") }}

templates/llm-prompt/starling.jinja2 (new file, 1 line)
@@ -0,0 +1 @@
GPT4 Correct System: {{ system_message }}<|end_of_turn|>GPT4 Correct User: {{ set_response(prompt, "<|end_of_turn|>GPT4 Correct Assistant:") }}
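
All of the new prompt templates above follow the same contract: they receive a `system_message` and a `prompt`, and call a `set_response` helper to attach the model-specific assistant prefix that the LLM should continue from. As a rough illustration only (this is not talemate's actual template-loading code, and the `set_response` implementation below is an assumption inferred from how the templates use it), rendering one of these files with plain Jinja2 could look like this:

```python
# Hypothetical sketch: render one of the templates above with plain Jinja2.
# set_response is assumed to concatenate the prompt with the assistant
# prefix, so the model's completion begins right after that prefix.
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates/llm-prompt"))

def set_response(prompt: str, response_prefix: str) -> str:
    return prompt + response_prefix

template = env.get_template("starling.jinja2")
print(template.render(
    system_message="You are the narrator of an interactive story.",
    prompt="Describe the tavern the party just entered.",
    set_response=set_response,
))
```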
tests/conftest.py (new file, empty)

tests/test_dialogue_cleanup.py (new file, 21 lines)
@@ -0,0 +1,21 @@
import pytest
from talemate.util import ensure_dialog_format

@pytest.mark.parametrize("input, expected", [
    ('Hello how are you?', 'Hello how are you?'),
    ('"Hello how are you?"', '"Hello how are you?"'),
    ('"Hello how are you?" he asks "I am fine"', '"Hello how are you?" *he asks* "I am fine"'),
    ('Hello how are you? *he asks* I am fine', '"Hello how are you?" *he asks* "I am fine"'),

    ('Hello how are you?" *he asks* I am fine', '"Hello how are you?" *he asks* "I am fine"'),
    ('Hello how are you?" *he asks I am fine', '"Hello how are you?" *he asks I am fine*'),
    ('Hello how are you?" *he asks* "I am fine" *', '"Hello how are you?" *he asks* "I am fine"'),

    ('"Hello how are you *he asks* I am fine"', '"Hello how are you" *he asks* "I am fine"'),
    ('This is a string without any markers', 'This is a string without any markers'),
    ('This is a string with an ending quote"', '"This is a string with an ending quote"'),
    ('This is a string with an ending asterisk*', '*This is a string with an ending asterisk*'),
    ('"Mixed markers*', '*Mixed markers*'),
])
def test_dialogue_cleanup(input, expected):
    assert ensure_dialog_format(input) == expected
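
Read together, the parametrized cases above pin down the contract of `ensure_dialog_format`: spoken lines end up wrapped in double quotes, actions end up wrapped in asterisks, and dangling or mismatched markers are repaired. A minimal usage sketch, assuming talemate is importable:

```python
from talemate.util import ensure_dialog_format

# Unquoted speech around an *action* span gets quoted, per the cases above:
print(ensure_dialog_format('Hello how are you? *he asks* I am fine'))
# -> "Hello how are you?" *he asks* "I am fine"
```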
tests/test_isodate.py (new file, 38 lines)
@@ -0,0 +1,38 @@
from talemate.util import (
    iso8601_add,
    iso8601_correct_duration,
    iso8601_diff,
    iso8601_diff_to_human,
    iso8601_duration_to_human,
    parse_duration_to_isodate_duration,
    timedelta_to_duration,
    duration_to_timedelta,
    isodate
)


def test_isodate_utils():

    date1 = "P11MT15M"
    date2 = "PT1S"

    duration1 = parse_duration_to_isodate_duration(date1)
    assert duration1.months == 11
    assert duration1.tdelta.seconds == 900

    duration2 = parse_duration_to_isodate_duration(date2)
    assert duration2.seconds == 1

    timedelta1 = duration_to_timedelta(duration1)
    assert timedelta1.seconds == 900
    assert timedelta1.days == 11*30, timedelta1.days

    timedelta2 = duration_to_timedelta(duration2)
    assert timedelta2.seconds == 1

    parsed = parse_duration_to_isodate_duration("P11MT14M59S")
    assert iso8601_diff(date1, date2) == parsed, parsed

    assert iso8601_duration_to_human(date1) == "11 Months and 15 Minutes ago", iso8601_duration_to_human(date1)
    assert iso8601_duration_to_human(date2) == "1 Second ago", iso8601_duration_to_human(date2)
    assert iso8601_duration_to_human(iso8601_diff(date1, date2)) == "11 Months, 14 Minutes and 59 Seconds ago", iso8601_duration_to_human(iso8601_diff(date1, date2))
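
The assertions above double as a usage reference for the ISO 8601 duration helpers backing the passage-of-time feature: diff two durations, then render the result for display. A short sketch, again assuming talemate is importable, with values mirroring the test's own assertions:

```python
from talemate.util import iso8601_diff, iso8601_duration_to_human

date1 = "P11MT15M"  # 11 months, 15 minutes
date2 = "PT1S"      # 1 second

print(iso8601_duration_to_human(date1))
# -> 11 Months and 15 Minutes ago
print(iso8601_duration_to_human(iso8601_diff(date1, date2)))
# -> 11 Months, 14 Minutes and 59 Seconds ago
```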