mirror of https://github.com/vegu-ai/talemate.git
synced 2025-12-24 23:49:28 +01:00

Compare commits (13 commits):

- ddfbd6891b
- 143dd47e02
- cc7cb773d1
- 02c88f75a1
- 419371e0fb
- 6e847bf283
- ceedd3019f
- a28cf2a029
- 60cb271e30
- 1874234d2c
- ef99539e69
- 39bd02722d
- f0b627b900
Dockerfile.frontend
@@ -1,13 +1,19 @@
# Use an official node runtime as a parent image
FROM node:20

# Make sure we are in a development environment (this isn't a production ready Dockerfile)
ENV NODE_ENV=development

# Echo that this isn't a production ready Dockerfile
RUN echo "This Dockerfile is not production ready. It is intended for development purposes only."

# Set the working directory in the container
WORKDIR /app

# Copy the frontend directory contents into the container at /app
COPY ./talemate_frontend /app

# Install any needed packages specified in package.json
# Install all dependencies
RUN npm install

# Make port 8080 available to the world outside this container
README.md (72 lines changed)
@@ -7,16 +7,16 @@ Roleplay with AI with a focus on strong narration and consistent world and game

> :warning: **It does not run any large language models itself but relies on existing APIs. Currently supports OpenAI, Anthropic, mistral.ai, self-hosted text-generation-webui and LMStudio. 0.18.0 also adds support for generic OpenAI API implementations, but generation quality on those will vary.**

Supported APIs:
- [OpenAI](https://platform.openai.com/overview)
- [Anthropic](https://www.anthropic.com/)
- [mistral.ai](https://mistral.ai/)
- [Cohere](https://www.cohere.com/)
- [Groq](https://www.groq.com/)
- [Google Gemini](https://console.cloud.google.com/)

Supported self-hosted APIs:
- [KoboldCpp](https://koboldai.org/cpp) ([Local](https://koboldai.org/cpp), [Runpod](https://koboldai.org/runpodcpp), [VastAI](https://koboldai.org/vastcpp); also includes image generation support)
- [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (local or with Runpod support)
- [LMStudio](https://lmstudio.ai/)
@@ -52,9 +52,12 @@ Please read the documents in the `docs` folder for more advanced configuration a

- [Specifying the correct prompt template](#specifying-the-correct-prompt-template)
- [Recommended Models](#recommended-models)
- [DeepInfra via OpenAI Compatible client](#deepinfra-via-openai-compatible-client)
- [Google Gemini](#google-gemini)
- [Google Cloud Setup](#google-cloud-setup)
- [Ready to go](#ready-to-go)
- [Load the introductory scenario "Infinity Quest"](#load-the-introductory-scenario-infinity-quest)
- [Loading character cards](#loading-character-cards)
- [Configure for hosting](#configure-for-hosting)
- [Text-to-Speech (TTS)](docs/tts.md)
- [Visual Generation](docs/visual.md)
- [ChromaDB (long term memory) configuration](docs/chromadb.md)
@@ -92,16 +95,19 @@ There is also a [troubleshooting guide](docs/troubleshoot.md) that might help.

### Docker

:warning: Some users currently experience issues with missing dependencies inside the docker container, issue tracked at [#114](https://github.com/vegu-ai/talemate/issues/114)

1. `git clone https://github.com/vegu-ai/talemate.git`
1. `cd talemate`
1. `docker-compose up`
1. `cp config.example.yaml config.yaml`
1. `docker compose up`
1. Navigate your browser to http://localhost:8080

:warning: When connecting to local APIs running on the host machine (e.g. text-generation-webui), you need to use `host.docker.internal` as the hostname, for example `http://host.docker.internal:5000` for a default text-generation-webui API.

#### To shut down the Docker container

Just closing the terminal window will not stop the Docker container. You need to run `docker-compose down` to stop the container.
Just closing the terminal window will not stop the Docker container. You need to run `docker compose down` to stop the container.

#### How to install Docker
@@ -167,19 +173,9 @@ In the case for `bartowski_Nous-Hermes-2-Mistral-7B-DPO-exl2_8_0` that is `ChatM

### Recommended Models

As of 2024.03.07 my personal regular drivers (the ones I test with) are:
Any of the top models in any of the size classes here should work well (I wouldn't recommend going lower than 7B):

- Kunoichi-7B
- sparsetral-16x7B
- Nous-Hermes-2-Mistral-7B-DPO
- brucethemoose_Yi-34B-200K-RPMerge
- dolphin-2.7-mixtral-8x7b
- rAIfle_Verdict-8x7B
- Mixtral-8x7B-instruct

That said, any of the top models in any of the size classes here should work well (I wouldn't recommend going lower than 7B):

https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/
[https://oobabooga.github.io/benchmark.html](https://oobabooga.github.io/benchmark.html)
## DeepInfra via OpenAI Compatible client

@@ -197,6 +193,36 @@ Models on DeepInfra that work well with Talemate:

- [cognitivecomputations/dolphin-2.6-mixtral-8x7b](https://deepinfra.com/cognitivecomputations/dolphin-2.6-mixtral-8x7b) (max context 32k, 8k recommended)
- [lizpreciatior/lzlv_70b_fp16_hf](https://deepinfra.com/lizpreciatior/lzlv_70b_fp16_hf) (max context 4k)
## Google Gemini

### Google Cloud Setup

Unlike the other clients, the setup for Google Gemini is a bit more involved, as you will need to set up a Google Cloud project and credentials for it.

Please follow their [instructions for setup](https://cloud.google.com/vertex-ai/docs/start/client-libraries) - which includes setting up a project, enabling the Vertex AI API, creating a service account, and downloading the credentials.

Once you have downloaded the credentials, copy the JSON file into the talemate directory. You can rename it to something that's easier to remember, like `my-credentials.json`.

### Add the client

![Add the Google Gemini client](docs/img/0.25.0/google-add-client.png)

The `Disable Safety Settings` option will turn off Google's response validation for what they consider harmful content. Use at your own risk.

### Complete the Google Cloud setup in talemate

![Setup incomplete](docs/img/0.25.0/google-setup-incomplete.png)

Click the `SETUP GOOGLE API CREDENTIALS` button that will appear on the client.

The Google Cloud setup modal will appear; fill in the path to the credentials file and select a location that is close to you.

![Google Cloud setup](docs/img/0.25.0/google-cloud-setup.png)

Click save and after a moment the client should have a green dot next to it, indicating that it is ready to go.

![Google client ready](docs/img/0.25.0/google-ready.png)

## Ready to go

You will know you are good to go when the client and all the agents have a green dot next to them.
@@ -222,3 +248,17 @@ Expand the "Load" menu in the top left corner and either click on "Upload a char

Once a character is uploaded, talemate may take a moment because it needs to convert it to a talemate format; it will also run additional LLM prompts to generate character attributes and world state.

Make sure you save the scene after the character is loaded, as it can then be loaded as a normal talemate scenario in the future.

## Configure for hosting

By default talemate is configured to run locally. If you want to host it behind a reverse proxy or on a server, you will need to create some environment variables in the `talemate_frontend/.env.development.local` file.

Start by copying `talemate_frontend/example.env.development.local` to `talemate_frontend/.env.development.local`.

Then open the file and edit the `ALLOWED_HOSTS` and `VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL` variables.

```sh
ALLOWED_HOSTS=example.com
# wss if behind ssl, ws if not
VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL=wss://example.com:5050
```
docker-compose.yml
@@ -23,5 +23,5 @@ services:
      dockerfile: Dockerfile.frontend
    ports:
      - "8080:8080"
    volumes:
      - ./talemate_frontend:/app
    #volumes:
    #  - ./talemate_frontend:/app
New binary files (not shown):

- docs/img/0.25.0/google-add-client.png (25 KiB)
- docs/img/0.25.0/google-cloud-setup.png (59 KiB)
- docs/img/0.25.0/google-ready.png (5.3 KiB)
- docs/img/0.25.0/google-setup-incomplete.png (7.5 KiB)
poetry.lock (generated, 1704 lines changed; file diff suppressed because it is too large)
pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "poetry.masonry.api"

[tool.poetry]
name = "talemate"
version = "0.24.0"
version = "0.25.5"
description = "AI-backed roleplay and narrative tools"
authors = ["FinalWombat"]
license = "GNU Affero General Public License v3.0"

@@ -37,6 +37,7 @@ python-dotenv = "^1.0.0"
websockets = "^11.0.3"
structlog = "^23.1.0"
runpod = "^1.2.0"
google-cloud-aiplatform = ">=1.50.0"
nest_asyncio = "^1.5.7"
isodate = ">=0.6.1"
thefuzz = ">=0.20.0"

@@ -50,7 +51,8 @@ chromadb = ">=0.4.17,<1"
InstructorEmbedding = "^1.0.1"
torch = ">=2.1.0"
torchaudio = ">=2.3.0"
sentence-transformers="^2.2.2"
# locked for instructor embeddings
sentence-transformers="==2.2.2"

[tool.poetry.dev-dependencies]
pytest = "^6.2"
@@ -2,4 +2,4 @@ from .agents import Agent
from .client import TextGeneratorWebuiClient
from .tale_mate import *

VERSION = "0.24.0"
VERSION = "0.25.5"
@@ -221,6 +221,9 @@ class Agent(ABC):
        if callback:
            await callback()

    async def setup_check(self):
        return False

    async def ready_check(self, task: asyncio.Task = None):
        self.ready_check_error = None
        if task:
@@ -668,7 +668,9 @@ class ConversationAgent(Agent):

        total_result = util.handle_endofline_special_delimiter(total_result)

        if total_result.startswith(":\n"):
        log.info("conversation agent", total_result=total_result)

        if total_result.startswith(":\n") or total_result.startswith(": "):
            total_result = total_result[2:]

        # movie script format
@@ -18,10 +18,11 @@ class ContentGenerationContext(pydantic.BaseModel):
    """

    context: str
    instructions: str
    length: int
    instructions: str = ""
    length: int = 100
    character: Union[str, None] = None
    original: Union[str, None] = None
    partial: str = ""

    @property
    def computed_context(self) -> Tuple[str, str]:
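With the new defaults, only `context` is required to build a generation context. A quick illustrative sketch (the context string value here is a hypothetical example, not taken from the diff):

```python
ctx = ContentGenerationContext(context="character attribute:appearance")
assert ctx.instructions == "" and ctx.length == 100  # new field defaults
```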
@@ -37,10 +38,11 @@ class AssistantMixin:
    async def contextual_generate_from_args(
        self,
        context: str,
        instructions: str,
        instructions: str = "",
        length: int = 100,
        character: Union[str, None] = None,
        original: Union[str, None] = None,
        partial: str = "",
    ):
        """
        Request content from the assistant.

@@ -52,6 +54,7 @@ class AssistantMixin:
            length=length,
            character=character,
            original=original,
            partial=partial,
        )

        return await self.contextual_generate(generation_context)
@@ -86,6 +89,7 @@ class AssistantMixin:
                "generation_context": generation_context,
                "context_typ": context_typ,
                "context_name": context_name,
                "can_coerce": self.client.can_be_coerced,
                "character": (
                    self.scene.get_character(generation_context.character)
                    if generation_context.character
@@ -94,7 +98,8 @@ class AssistantMixin:
            },
        )

        content = util.strip_partial_sentences(content)
        if not generation_context.partial:
            content = util.strip_partial_sentences(content)

        return content.strip()

@@ -139,3 +144,39 @@ class AssistantMixin:
        emit("autocomplete_suggestion", response)

        return response
    @set_processing
    async def autocomplete_narrative(
        self,
        input: str,
        emit_signal: bool = True,
    ) -> str:
        """
        Autocomplete narrative.
        """

        response = await Prompt.request(
            f"creator.autocomplete-narrative",
            self.client,
            "create_short",
            vars={
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
                "input": input.strip(),
                "can_coerce": self.client.can_be_coerced,
            },
            pad_prepended_response=False,
            dedupe_enabled=False,
        )

        if response.startswith(input):
            response = response[len(input) :]

        self.scene.log.debug(
            "autocomplete_suggestion", suggestion=response, input=input
        )

        if emit_signal:
            emit("autocomplete_suggestion", response)

        return response
@@ -204,6 +204,8 @@ class CharacterCreatorMixin:
            "create_concise",
            vars={
                "character": character,
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
            },
        )
@@ -201,7 +201,12 @@ class EditorAgent(Agent):

    @set_processing
    async def check_continuity_errors(
        self, content: str, character: Character, force: bool = False, fix: bool = True
        self,
        content: str,
        character: Character,
        force: bool = False,
        fix: bool = True,
        message_id: int = None,
    ) -> str:
        """
        Edits a text to ensure that it is consistent with the scene
@@ -223,15 +228,25 @@ class EditorAgent(Agent):
            )
            return content

        log.debug(
            "check_continuity_errors START",
            content=content,
            character=character,
            force=force,
            fix=fix,
            message_id=message_id,
        )

        response = await Prompt.request(
            "editor.check-continuity-errors",
            self.client,
            "basic_deterministic_medium2",
            "basic_analytical_medium2",
            vars={
                "content": content,
                "character": character,
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
                "message_id": message_id,
            },
        )

@@ -241,7 +256,7 @@ class EditorAgent(Agent):
        errors = []

        for line in response.split("\n"):
            if not line.startswith("ERROR"):
            if "ERROR" not in line:
                continue

            errors.append(line)
@@ -274,8 +289,14 @@ class EditorAgent(Agent):
        content_fix_identifer = state.get("content_fix_identifier")

        try:
            content = response.split("```")[0].strip()
            content = response.strip().strip("```").split("```")[0].strip()
            content = content.replace(content_fix_identifer, "").strip()
            content = content.strip(":")

            # if content doesn't start with {character_name}: then add it
            if not content.startswith(f"{character.name}:"):
                content = f"{character.name}: {content}"

        except Exception as e:
            log.error(
                "check_continuity_errors FAILED",
@@ -720,6 +720,11 @@ class ChromaDBMemoryAgent(MemoryAgent):

            doc = _results["documents"][0][i]
            meta = _results["metadatas"][0][i]

            if not meta:
                log.warning("chromadb agent get", error="no meta", doc=doc)
                continue

            ts = meta.get("ts")

            # skip pin_only entries
@@ -61,6 +61,7 @@ class SummarizeAgent(Agent):
                    {"label": "Short & Concise", "value": "short"},
                    {"label": "Balanced", "value": "balanced"},
                    {"label": "Lengthy & Detailed", "value": "long"},
                    {"label": "Factual List", "value": "facts"},
                ],
            ),
            "include_previous": AgentActionConfig(
@@ -77,6 +78,15 @@ class SummarizeAgent(Agent):
            )
        }

    @property
    def threshold(self):
        return self.actions["archive"].config["threshold"].value

    @property
    def estimated_entry_count(self):
        all_tokens = sum([util.count_tokens(entry) for entry in self.scene.history])
        return all_tokens // self.threshold
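A quick arithmetic illustration of the new `estimated_entry_count` property (the token count and threshold are made-up example values):

```python
all_tokens, threshold = 6000, 1500  # ~6k tokens of history, archive threshold 1500
print(all_tokens // threshold)  # -> 4 summarization entries expected
```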

    def connect(self, scene):
        super().connect(scene)
        talemate.emit.async_signals.get("game_loop").connect(self.on_game_loop)
@@ -80,6 +80,11 @@ class VisualBase(Agent):
                ),
            },
        ),
        "automatic_setup": AgentAction(
            enabled=True,
            label="Automatic Setup",
            description="Automatically setup the visual agent if the selected client has an implementation of the selected backend. (Like the KoboldCpp Automatic1111 api)",
        ),
        "automatic_generation": AgentAction(
            enabled=False,
            label="Automatic Generation",
@@ -187,8 +192,10 @@ class VisualBase(Agent):
        prev_ready = self.backend_ready
        self.backend_ready = False
        self.ready_check_error = str(error)
        await self.setup_check()
        if prev_ready:
            await self.emit_status()

    async def ready_check(self):
        if not self.enabled:
@@ -198,6 +205,15 @@ class VisualBase(Agent):
        task = asyncio.create_task(fn())
        await super().ready_check(task)

    async def setup_check(self):

        if not self.actions["automatic_setup"].enabled:
            return

        backend = self.backend
        if self.client and hasattr(self.client, f"visual_{backend.lower()}_setup"):
            await getattr(self.client, f"visual_{backend.lower()}_setup")(self)
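The setup hook is resolved purely by naming convention. A small sketch of how the attribute name is derived (the backend value here is illustrative):

```python
backend = "AUTOMATIC1111"
print(f"visual_{backend.lower()}_setup")  # -> visual_automatic1111_setup
```

That is the method name the KoboldCpp client implements further down in this diff (`visual_automatic1111_setup`).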

    async def apply_config(self, *args, **kwargs):

        try:
@@ -109,29 +109,10 @@ class OpenAIImageMixin:
            size=f"{resolution.width}x{resolution.height}",
            quality=self.openai_quality,
            n=1,
            response_format="b64_json",
        )

        download_url = response.data[0].url

        # decode url because httpx will encode it again
        download_url = unquote(download_url)
        parsed = urlparse(download_url)
        query = parse_qs(parsed.query)

        log.debug("openai_image_generate", download_url=download_url, query=query)

        async with httpx.AsyncClient() as client:
            response = await client.get(download_url, params=query, timeout=90)
            log.debug("openai_image_generate", status_code=response.status_code)
            if response.status_code >= 400:
                log.error(
                    f"Error downloading image",
                    content=response.content,
                    status=response.status_code,
                )
            # bytes to base64encoded
            image = base64.b64encode(response.content).decode("utf-8")
            await self.emit_image(image)
        await self.emit_image(response.data[0].b64_json)

    async def openai_image_ready(self) -> bool:
        """
@@ -3,10 +3,12 @@ import os
import talemate.client.runpod
from talemate.client.anthropic import AnthropicClient
from talemate.client.cohere import CohereClient
from talemate.client.google import GoogleClient
from talemate.client.groq import GroqClient
from talemate.client.koboldcpp import KoboldCppClient
from talemate.client.lmstudio import LMStudioClient
from talemate.client.mistral import MistralAIClient
from talemate.client.openai import OpenAIClient
from talemate.client.openai_compat import OpenAICompatibleClient
from talemate.client.registry import CLIENT_CLASSES, get_client_class, register
from talemate.client.textgenwebui import TextGeneratorWebuiClient
@@ -2,6 +2,7 @@
A unified client base, based on the openai API
"""

import ipaddress
import logging
import random
import time
@@ -9,6 +10,7 @@ from typing import Callable, Union

import pydantic
import structlog
import urllib3
from openai import AsyncOpenAI, PermissionDeniedError

import talemate.client.presets as presets
@@ -25,11 +27,6 @@ logging.getLogger("httpx").setLevel(logging.WARNING)

log = structlog.get_logger("client.base")

REMOTE_SERVICES = [
    # TODO: runpod.py should add this to the list
    ".runpod.net"
]

STOPPING_STRINGS = ["<|im_end|>", "</s>"]
@@ -55,7 +52,7 @@ class ErrorAction(pydantic.BaseModel):

class Defaults(pydantic.BaseModel):
    api_url: str = "http://localhost:5000"
    max_token_length: int = 4096
    max_token_length: int = 8192
    double_coercion: str = None
@@ -74,7 +71,7 @@ class ClientBase:
    name: str = None
    enabled: bool = True
    current_status: str = None
    max_token_length: int = 4096
    max_token_length: int = 8192
    processing: bool = False
    connected: bool = False
    conversation_retries: int = 0
@@ -106,7 +103,7 @@ class ClientBase:
        self.double_coercion = kwargs.get("double_coercion", None)
        if "max_token_length" in kwargs:
            self.max_token_length = (
                int(kwargs["max_token_length"]) if kwargs["max_token_length"] else 4096
                int(kwargs["max_token_length"]) if kwargs["max_token_length"] else 8192
            )
        self.set_client(max_token_length=self.max_token_length)
@@ -125,6 +122,10 @@ class ClientBase:
        """
        return self.Meta().requires_prompt_template

    @property
    def max_tokens_param_name(self):
        return "max_tokens"

    def set_client(self, **kwargs):
        self.client = AsyncOpenAI(base_url=self.api_url, api_key="sk-1111")
@@ -179,21 +180,46 @@ class ClientBase:
        if "double_coercion" in kwargs:
            self.double_coercion = kwargs["double_coercion"]

    def host_is_remote(self, url: str) -> bool:
        """
        Returns whether or not the host is a remote service.

        It checks common local hostnames / ip prefixes.

        - localhost
        """

        host = urllib3.util.parse_url(url).host

        if host.lower() == "localhost":
            return False

        # use ipaddress module to check for local ip prefixes
        try:
            ip = ipaddress.ip_address(host)
        except ValueError:
            return True

        if ip.is_loopback or ip.is_private:
            return False

        return True
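A few illustrative cases for this heuristic (assuming `client` is an instance of a `ClientBase` subclass):

```python
client.host_is_remote("http://localhost:5000")      # False: localhost shortcut
client.host_is_remote("http://192.168.1.20:5001")   # False: private IP range
client.host_is_remote("https://api.example.com")    # True: hostname, not a local IP
```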

    def toggle_disabled_if_remote(self):
        """
        If the client is targeting a remote recognized service, this
        will disable the client.
        """

        for service in REMOTE_SERVICES:
            if service in self.api_url:
                if self.enabled:
                    self.log.warn(
                        "remote service unreachable, disabling client", client=self.name
                    )
                    self.enabled = False

                return True

        if not self.api_url:
            return False

        if self.host_is_remote(self.api_url) and self.enabled:
            self.log.warn(
                "remote service unreachable, disabling client", client=self.name
            )
            self.enabled = False
            return True

        return False
@@ -344,6 +370,14 @@ class ClientBase:
        if status_change:
            instance.emit_agent_status_by_client(self)

    def populate_extra_fields(self, data: dict):
        """
        Updates data with the extra fields from the client's Meta
        """

        for field_name in getattr(self.Meta(), "extra_fields", {}).keys():
            data[field_name] = getattr(self, field_name, None)

    def determine_prompt_template(self):
        if not self.model_name:
            return
@@ -380,14 +414,12 @@ class ClientBase:
            self.log.warning("client status error", e=e, client=self.name)
            self.model_name = None
            self.connected = False
            self.toggle_disabled_if_remote()
            self.emit_status()
            return

        self.connected = True

        if not self.model_name or self.model_name == "None":
            self.log.warning("client model not loaded", client=self)
            self.emit_status()
            return
@@ -512,9 +544,8 @@ class ClientBase:
            max_token_length=self.max_token_length,
            parameters=prompt_param,
        )
        response = await self.generate(
            self.repetition_adjustment(finalized_prompt), prompt_param, kind
        )
        prompt_sent = self.repetition_adjustment(finalized_prompt)
        response = await self.generate(prompt_sent, prompt_param, kind)

        response, finalized_prompt = await self.auto_break_repetition(
            finalized_prompt, prompt_param, response, kind, retries
@@ -536,7 +567,7 @@ class ClientBase:
            "prompt_sent",
            data=PromptData(
                kind=kind,
                prompt=finalized_prompt,
                prompt=prompt_sent,
                response=response,
                prompt_tokens=self._returned_prompt_tokens or token_length,
                response_tokens=self._returned_response_tokens
@@ -598,7 +629,7 @@ class ClientBase:
        is_repetition, similarity_score, matched_line = util.similarity_score(
            response, finalized_prompt.split("\n"), similarity_threshold=80
        )

        if not is_repetition:
            # not a repetition, return the response

@@ -632,7 +663,7 @@ class ClientBase:

        # then we pad the max_tokens by the pad_max_tokens amount

        prompt_param["max_tokens"] += pad_max_tokens
        prompt_param[self.max_tokens_param_name] += pad_max_tokens

        # send the prompt again
        # we use the repetition_adjustment method to further encourage
@@ -654,7 +685,7 @@ class ClientBase:

        # a lot of the times the response will now contain the repetition + something new
        # so we dedupe the response to remove the repetition on sentence level

        response = util.dedupe_sentences(
            response, matched_line, similarity_threshold=85, debug=True
        )
@@ -714,7 +745,6 @@ class ClientBase:

        lines = prompt.split("\n")
        new_lines = []

        for line in lines:
            if line.startswith("[$REPETITION|"):
                if is_repetitive:
@@ -725,3 +755,29 @@ class ClientBase:
                new_lines.append(line)

        return "\n".join(new_lines)
    def process_response_for_indirect_coercion(self, prompt: str, response: str) -> str:
        """
        A lot of remote APIs don't let us control the prompt template and we cannot directly
        append the beginning of the desired response to the prompt.

        With indirect coercion we tell the LLM what the beginning of the response should be
        and then hopefully it will adhere to it and we can strip it off the actual response.
        """

        _, right = prompt.split("\nStart your response with: ")
        expected_response = right.strip()
        if (
            expected_response
            and expected_response.startswith("{")
        ):
            if response.startswith("```json") and response.endswith("```"):
                response = response[7:-3].strip()

        if right and response.startswith(right):
            response = response[len(right) :].strip()

        return response
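An illustrative round trip through that cleanup (the prompt and response strings below are made-up examples, and `client` is assumed to be a `ClientBase` instance):

```python
prompt = 'Describe the scene as JSON.\nStart your response with: {"description":'
response = '```json\n{"description": "A dim tavern full of quiet murmurs."}\n```'
cleaned = client.process_response_for_indirect_coercion(prompt, response)
# the ```json fence is stripped, then the coerced prefix is removed
# if the model echoed it verbatim
```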

src/talemate/client/google.py (new file, 312 lines)
@@ -0,0 +1,312 @@
import json
import os

import pydantic
import structlog
import vertexai
from google.api_core.exceptions import ResourceExhausted
from vertexai.generative_models import (
    ChatSession,
    GenerativeModel,
    ResponseValidationError,
    SafetySetting,
)

from talemate.client.base import ClientBase, ErrorAction, ExtraField
from talemate.client.registry import register
from talemate.client.remote import RemoteServiceMixin
from talemate.config import Client as BaseClientConfig
from talemate.config import load_config
from talemate.emit import emit
from talemate.emit.signals import handlers
from talemate.util import count_tokens

__all__ = [
    "GoogleClient",
]
log = structlog.get_logger("talemate")

# Edit this to add new models / remove old models
SUPPORTED_MODELS = [
    "gemini-1.0-pro",
    "gemini-1.5-pro-preview-0409",
]


class Defaults(pydantic.BaseModel):
    max_token_length: int = 16384
    model: str = "gemini-1.0-pro"
    disable_safety_settings: bool = False


class ClientConfig(BaseClientConfig):
    disable_safety_settings: bool = False


@register()
class GoogleClient(RemoteServiceMixin, ClientBase):
    """
    Google client for generating text.
    """

    client_type = "google"
    conversation_retries = 0
    auto_break_repetition_enabled = False
    decensor_enabled = True
    config_cls = ClientConfig

    class Meta(ClientBase.Meta):
        name_prefix: str = "Google"
        title: str = "Google"
        manual_model: bool = True
        manual_model_choices: list[str] = SUPPORTED_MODELS
        requires_prompt_template: bool = False
        defaults: Defaults = Defaults()
        extra_fields: dict[str, ExtraField] = {
            "disable_safety_settings": ExtraField(
                name="disable_safety_settings",
                type="bool",
                label="Disable Safety Settings",
                required=False,
                description="Disable Google's safety settings for responses generated by the model.",
            ),
        }

    def __init__(self, model="gemini-1.0-pro", **kwargs):
        self.model_name = model
        self.setup_status = None
        self.model_instance = None
        self.disable_safety_settings = kwargs.get("disable_safety_settings", False)
        self.google_credentials_read = False
        self.google_project_id = None
        self.config = load_config()
        super().__init__(**kwargs)

        handlers["config_saved"].connect(self.on_config_saved)

    @property
    def google_credentials(self):
        path = self.google_credentials_path
        if not path:
            return None
        with open(path) as f:
            return json.load(f)

    @property
    def google_credentials_path(self):
        return self.config.get("google").get("gcloud_credentials_path")

    @property
    def google_location(self):
        return self.config.get("google").get("gcloud_location")

    @property
    def ready(self):
        # all google settings must be set
        return all(
            [
                self.google_credentials_path,
                self.google_location,
            ]
        )

    @property
    def safety_settings(self):
        if not self.disable_safety_settings:
            return None

        safety_settings = [
            SafetySetting(
                category=SafetySetting.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
                threshold=SafetySetting.HarmBlockThreshold.BLOCK_NONE,
            ),
            SafetySetting(
                category=SafetySetting.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
                threshold=SafetySetting.HarmBlockThreshold.BLOCK_NONE,
            ),
            SafetySetting(
                category=SafetySetting.HarmCategory.HARM_CATEGORY_HARASSMENT,
                threshold=SafetySetting.HarmBlockThreshold.BLOCK_NONE,
            ),
            SafetySetting(
                category=SafetySetting.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
                threshold=SafetySetting.HarmBlockThreshold.BLOCK_NONE,
            ),
            SafetySetting(
                category=SafetySetting.HarmCategory.HARM_CATEGORY_UNSPECIFIED,
                threshold=SafetySetting.HarmBlockThreshold.BLOCK_NONE,
            ),
        ]

        return safety_settings

    def emit_status(self, processing: bool = None):
        error_action = None
        if processing is not None:
            self.processing = processing

        if self.ready:
            status = "busy" if self.processing else "idle"
            model_name = self.model_name
        else:
            status = "error"
            model_name = "Setup incomplete"
            error_action = ErrorAction(
                title="Setup Google API credentials",
                action_name="openAppConfig",
                icon="mdi-key-variant",
                arguments=[
                    "application",
                    "google_api",
                ],
            )

        if not self.model_name:
            status = "error"
            model_name = "No model loaded"

        self.current_status = status
        data = {
            "error_action": error_action.model_dump() if error_action else None,
            "meta": self.Meta().model_dump(),
        }

        self.populate_extra_fields(data)

        emit(
            "client_status",
            message=self.client_type,
            id=self.name,
            details=model_name,
            status=status,
            data=data,
        )

    def set_client(self, max_token_length: int = None, **kwargs):
        if not self.ready:
            log.error("Google cloud setup incomplete")
            if self.setup_status:
                self.setup_status = False
                emit("request_client_status")
                emit("request_agent_status")
            return

        if not self.model_name:
            self.model_name = "gemini-1.0-pro"

        if max_token_length and not isinstance(max_token_length, int):
            max_token_length = int(max_token_length)

        if self.google_credentials_path:
            os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = self.google_credentials_path

        model = self.model_name

        self.max_token_length = max_token_length or 16384

        if not self.setup_status:
            if self.setup_status is False:
                project_id = self.google_credentials.get("project_id")
                self.google_project_id = project_id
                if self.google_credentials_path:
                    vertexai.init(project=project_id, location=self.google_location)
                emit("request_client_status")
                emit("request_agent_status")
            self.setup_status = True

        self.model_instance = GenerativeModel(model_name=model)

        log.info(
            "google set client",
            max_token_length=self.max_token_length,
            provided_max_token_length=max_token_length,
            model=model,
        )

    def response_tokens(self, response: str):
        return count_tokens(response.text)

    def prompt_tokens(self, prompt: str):
        return count_tokens(prompt)

    def reconfigure(self, **kwargs):
        if kwargs.get("model"):
            self.model_name = kwargs["model"]
            self.set_client(kwargs.get("max_token_length"))

        if "disable_safety_settings" in kwargs:
            self.disable_safety_settings = kwargs["disable_safety_settings"]

    async def generate(self, prompt: str, parameters: dict, kind: str):
        """
        Generates text from the given prompt and parameters.
        """

        if not self.ready:
            raise Exception("Google cloud setup incomplete")

        right = None
        expected_response = None
        try:
            _, right = prompt.split("\nStart your response with: ")
            expected_response = right.strip()
        except (IndexError, ValueError):
            pass

        human_message = prompt.strip()
        system_message = self.get_system_message(kind)

        self.log.debug(
            "generate",
            prompt=prompt[:128] + " ...",
            parameters=parameters,
            system_message=system_message,
            disable_safety_settings=self.disable_safety_settings,
            safety_settings=self.safety_settings,
        )

        try:

            chat = self.model_instance.start_chat()

            response = await chat.send_message_async(
                human_message,
                safety_settings=self.safety_settings,
            )

            self._returned_prompt_tokens = self.prompt_tokens(prompt)
            self._returned_response_tokens = self.response_tokens(response)

            response = response.text

            log.debug("generated response", response=response)

            if expected_response and expected_response.startswith("{"):
                if response.startswith("```json") and response.endswith("```"):
                    response = response[7:-3].strip()

            if right and response.startswith(right):
                response = response[len(right) :].strip()

            return response

        # except PermissionDeniedError as e:
        #     self.log.error("generate error", e=e)
        #     emit("status", message="google API: Permission Denied", status="error")
        #     return ""
        except ResourceExhausted as e:
            self.log.error("generate error", e=e)
            emit("status", message="google API: Quota Limit reached", status="error")
            return ""
        except ResponseValidationError as e:
            self.log.error("generate error", e=e)
            emit(
                "status",
                message="google API: Response Validation Error",
                status="error",
            )
            if not self.disable_safety_settings:
                return "Failed to generate response. Probably due to safety settings, you can turn them off in the client settings."
            return "Failed to generate response. Please check logs."
        except Exception as e:
            raise
Deleted file:
@@ -1,16 +0,0 @@
import asyncio
import json
import logging
import random
from abc import ABC, abstractmethod
from typing import Callable, Union

import requests

import talemate.client.system_prompts as system_prompts
import talemate.util as util
from talemate.client.registry import register
from talemate.client.textgenwebui import RESTTaleMateClient
from talemate.emit import Emission, emit

# NOT IMPLEMENTED AT THIS POINT
src/talemate/client/koboldcpp.py (new file, 306 lines)
@@ -0,0 +1,306 @@
import random
import re
from typing import TYPE_CHECKING

# import urljoin
from urllib.parse import urljoin, urlparse

import httpx
import structlog

from talemate.client.base import STOPPING_STRINGS, ClientBase, Defaults, ExtraField
from talemate.client.registry import register
import talemate.util as util

if TYPE_CHECKING:
    from talemate.agents.visual import VisualBase

log = structlog.get_logger("talemate.client.koboldcpp")


class KoboldCppClientDefaults(Defaults):
    api_url: str = "http://localhost:5001"
    api_key: str = ""


@register()
class KoboldCppClient(ClientBase):
    auto_determine_prompt_template: bool = True
    client_type = "koboldcpp"

    class Meta(ClientBase.Meta):
        name_prefix: str = "KoboldCpp"
        title: str = "KoboldCpp"
        enable_api_auth: bool = True
        defaults: KoboldCppClientDefaults = KoboldCppClientDefaults()

    @property
    def request_headers(self):
        headers = {}
        headers["Content-Type"] = "application/json"
        if self.api_key:
            headers["Authorization"] = f"Bearer {self.api_key}"
        return headers

    @property
    def url(self) -> str:
        parts = urlparse(self.api_url)
        return f"{parts.scheme}://{parts.netloc}"

    @property
    def is_openai(self) -> bool:
        """
        kcpp has two apis:

        the OpenAI implementation at /v1
        their own implementation at /api/v1
        """
        return "/api/v1" not in self.api_url

    @property
    def api_url_for_model(self) -> str:
        if self.is_openai:
            # join /models to url
            return urljoin(self.api_url, "models")
        else:
            # join /model to url
            return urljoin(self.api_url, "model")

    @property
    def api_url_for_generation(self) -> str:
        if self.is_openai:
            # join /v1/completions
            return urljoin(self.api_url, "completions")
        else:
            # join /api/v1/generate
            return urljoin(self.api_url, "generate")

    @property
    def max_tokens_param_name(self):
        if self.is_openai:
            return "max_tokens"
        else:
            return "max_length"

    def api_endpoint_specified(self, url: str) -> bool:
        return "/v1" in self.api_url

    def ensure_api_endpoint_specified(self):
        if not self.api_endpoint_specified(self.api_url):
            # url doesn't specify the api endpoint
            # use the koboldcpp united api
            self.api_url = urljoin(self.api_url.rstrip("/") + "/", "/api/v1/")
        if not self.api_url.endswith("/"):
            self.api_url += "/"
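A runnable sketch of that URL normalization (example URLs only; the client additionally ensures a trailing slash):

```python
from urllib.parse import urljoin

for api_url in ("http://localhost:5001", "http://localhost:5001/v1"):
    if "/v1" not in api_url:  # no endpoint specified: default to the united API
        api_url = urljoin(api_url.rstrip("/") + "/", "/api/v1/")
    print(api_url)
# -> http://localhost:5001/api/v1/
# -> http://localhost:5001/v1
```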
    def __init__(self, **kwargs):
        self.api_key = kwargs.pop("api_key", "")
        super().__init__(**kwargs)
        self.ensure_api_endpoint_specified()

    def tune_prompt_parameters(self, parameters: dict, kind: str):
        super().tune_prompt_parameters(parameters, kind)
        if not self.is_openai:
            # adjustments for united api
            parameters["max_length"] = parameters.pop("max_tokens")
            parameters["max_context_length"] = self.max_token_length
            if "repetition_penalty_range" in parameters:
                parameters["rep_pen_range"] = parameters.pop("repetition_penalty_range")
            if "repetition_penalty" in parameters:
                parameters["rep_pen"] = parameters.pop("repetition_penalty")
            if parameters.get("stop_sequence"):
                parameters["stop_sequence"] = parameters.pop("stopping_strings")

            if parameters.get("extra_stopping_strings"):
                if "stop_sequence" in parameters:
                    parameters["stop_sequence"] += parameters.pop("extra_stopping_strings")
                else:
                    parameters["stop_sequence"] = parameters.pop("extra_stopping_strings")

            allowed_params = [
                "max_length",
                "max_context_length",
                "rep_pen",
                "rep_pen_range",
                "top_p",
                "top_k",
                "temperature",
                "stop_sequence",
            ]
        else:
            allowed_params = ["max_tokens", "presence_penalty", "top_p", "temperature"]

        # drop unsupported params
        for param in list(parameters.keys()):
            if param not in allowed_params:
                del parameters[param]

    def set_client(self, **kwargs):
        self.api_key = kwargs.get("api_key", self.api_key)
        self.ensure_api_endpoint_specified()

    async def get_model_name(self):
        self.ensure_api_endpoint_specified()
        async with httpx.AsyncClient() as client:
            response = await client.get(
                self.api_url_for_model,
                timeout=2,
                headers=self.request_headers,
            )

        if response.status_code == 404:
            raise KeyError(f"Could not find model info at: {self.api_url_for_model}")

        response_data = response.json()
        if self.is_openai:
            # {"object": "list", "data": [{"id": "koboldcpp/dolphin-2.8-mistral-7b", "object": "model", "created": 1, "owned_by": "koboldcpp", "permission": [], "root": "koboldcpp"}]}
            model_name = response_data.get("data")[0].get("id")
        else:
            # {"result": "koboldcpp/dolphin-2.8-mistral-7b"}
            model_name = response_data.get("result")

        # split by "/" and take last
        if model_name:
            model_name = model_name.split("/")[-1]

        return model_name

    async def tokencount(self, content: str) -> int:
        """
        KoboldCpp has a tokencount endpoint we can use to count tokens
        for the prompt and response.

        If the endpoint is not available, we fall back to the default token count estimate.
        """

        # extract scheme and host from api url
        parts = urlparse(self.api_url)

        url_tokencount = f"{parts.scheme}://{parts.netloc}/api/extra/tokencount"

        async with httpx.AsyncClient() as client:
            response = await client.post(
                url_tokencount,
                json={"prompt": content},
                timeout=None,
                headers=self.request_headers,
            )

            if response.status_code == 404:
                # kobold united doesn't have a tokencount endpoint
                return util.count_tokens(content)

            tokencount = len(response.json().get("ids", []))
            return tokencount
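For reference, the endpoint can also be exercised standalone. A minimal sketch, assuming a local KoboldCpp instance on the default port:

```python
import asyncio
import httpx

async def koboldcpp_tokencount(base_url: str, text: str) -> int:
    # POST the text to KoboldCpp's token-count endpoint and count returned ids
    async with httpx.AsyncClient() as client:
        r = await client.post(f"{base_url}/api/extra/tokencount", json={"prompt": text})
        return len(r.json().get("ids", []))

print(asyncio.run(koboldcpp_tokencount("http://localhost:5001", "Hello, Talemate!")))
```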
    async def generate(self, prompt: str, parameters: dict, kind: str):
        """
        Generates text from the given prompt and parameters.
        """

        parameters["prompt"] = prompt.strip(" ")

        self._returned_prompt_tokens = await self.tokencount(parameters["prompt"])

        async with httpx.AsyncClient() as client:
            response = await client.post(
                self.api_url_for_generation,
                json=parameters,
                timeout=None,
                headers=self.request_headers,
            )
            response_data = response.json()
            try:
                if self.is_openai:
                    response_text = response_data["choices"][0]["text"]
                else:
                    response_text = response_data["results"][0]["text"]
            except (TypeError, KeyError) as exc:
                log.error("Failed to generate text", exc=exc, response_data=response_data, response_status=response.status_code)
                response_text = ""

            self._returned_response_tokens = await self.tokencount(response_text)
            return response_text

    def jiggle_randomness(self, prompt_config: dict, offset: float = 0.3) -> dict:
        """
        adjusts temperature and repetition_penalty
        by random values using the base value as a center
        """

        temp = prompt_config["temperature"]

        if "rep_pen" in prompt_config:
            rep_pen_key = "rep_pen"
        elif "presence_penalty" in prompt_config:
            rep_pen_key = "presence_penalty"
        else:
            rep_pen_key = "repetition_penalty"

        min_offset = offset * 0.3

        prompt_config["temperature"] = random.uniform(temp + min_offset, temp + offset)
        try:
            if rep_pen_key == "presence_penalty":
                presence_penalty = prompt_config["presence_penalty"]
                prompt_config["presence_penalty"] = round(random.uniform(
                    presence_penalty + 0.1, presence_penalty + offset
                ), 1)
            else:
                rep_pen = prompt_config[rep_pen_key]
                prompt_config[rep_pen_key] = random.uniform(
                    rep_pen + min_offset * 0.3, rep_pen + offset * 0.3
                )
        except KeyError:
            pass

    def reconfigure(self, **kwargs):
        if "api_key" in kwargs:
            self.api_key = kwargs.pop("api_key")

        super().reconfigure(**kwargs)

    async def visual_automatic1111_setup(self, visual_agent: "VisualBase") -> bool:
        """
        Automatically configure the visual agent for automatic1111
        if the koboldcpp server has an SD model available.
        """

        if not self.connected:
            return False

        sd_models_url = urljoin(self.url, "/sdapi/v1/sd-models")

        async with httpx.AsyncClient() as client:

            try:
                response = await client.get(
                    url=sd_models_url, timeout=2
                )
            except Exception as exc:
                log.error(f"Failed to fetch sd models from {sd_models_url}", exc=exc)
                return False

            if response.status_code != 200:
                return False

            response_data = response.json()

            sd_model = response_data[0].get("model_name") if response_data else None

            if not sd_model:
                return False

            log.info("automatic1111_setup", sd_model=sd_model)

            visual_agent.actions["automatic1111"].config["api_url"].value = self.url
            visual_agent.is_enabled = True
            return True
@@ -7,7 +7,7 @@ from talemate.client.registry import register

class Defaults(pydantic.BaseModel):
    api_url: str = "http://localhost:1234"
    max_token_length: int = 4096
    max_token_length: int = 8192


@register()
@@ -19,17 +19,22 @@ log = structlog.get_logger("talemate")

SUPPORTED_MODELS = [
    "open-mistral-7b",
    "open-mixtral-8x7b",
    "open-mixtral-8x22b",
    "mistral-small-latest",
    "mistral-medium-latest",
    "mistral-large-latest",
]

JSON_OBJECT_RESPONSE_MODELS = SUPPORTED_MODELS

JSON_OBJECT_RESPONSE_MODELS = [
    "open-mixtral-8x22b",
    "mistral-small-latest",
    "mistral-medium-latest",
    "mistral-large-latest",
]

class Defaults(pydantic.BaseModel):
    max_token_length: int = 16384
    model: str = "open-mixtral-8x7b"
    model: str = "open-mixtral-8x22b"


@register()
@@ -52,7 +57,7 @@ class MistralAIClient(ClientBase):
        requires_prompt_template: bool = False
        defaults: Defaults = Defaults()

    def __init__(self, model="open-mixtral-8x7b", **kwargs):
    def __init__(self, model="open-mixtral-8x22b", **kwargs):
        self.model_name = model
        self.api_key_status = None
        self.config = load_config()
@@ -114,7 +119,7 @@ class MistralAIClient(ClientBase):
            return

        if not self.model_name:
            self.model_name = "open-mixtral-8x7b"
            self.model_name = "open-mixtral-8x22b"

        if max_token_length and not isinstance(max_token_length, int):
            max_token_length = int(max_token_length)
@@ -174,6 +179,12 @@ class MistralAIClient(ClientBase):
            if key not in valid_keys:
                del parameters[key]

        # clamp temperature between 0.1 and 1.0; the API rejects higher values, e.g.:
        # Unhandled Error: Status: 422. Message: {"object":"error","message":{"detail":[{"type":"less_than_equal","loc":["body","temperature"],"msg":"Input should be less than or equal to 1","input":1.31,"ctx":{"le":1.0},"url":"https://errors.pydantic.dev/2.6/v/less_than_equal"}]},"type":"invalid_request_error","param":null,"code":null}
        if "temperature" in parameters:
            parameters["temperature"] = min(1.0, max(0.1, parameters["temperature"]))

    async def generate(self, prompt: str, parameters: dict, kind: str):
        """
        Generates text from the given prompt and parameters.
@@ -136,13 +136,15 @@ class ModelPrompt:
        """

        matches = []

        cleaned_model_name = model_name.replace("/", "__")

        # Iterate over all templates in the loader's directory
        for template_name in self.env.list_templates():
            # strip extension
            template_name_match = os.path.splitext(template_name)[0]
            # Check if the model name is in the template filename
            if template_name_match.lower() in model_name.lower():
            if template_name_match.lower() in cleaned_model_name.lower():
                matches.append(template_name)

        # If there are no matches, return None
@@ -163,16 +165,17 @@ class ModelPrompt:
        """

        template_name = template_name.split(".jinja2")[0]

        cleaned_model_name = model_name.replace("/", "__")

        shutil.copyfile(
            os.path.join(STD_TEMPLATE_PATH, template_name + ".jinja2"),
            os.path.join(USER_TEMPLATE_PATH, model_name + ".jinja2"),
            os.path.join(USER_TEMPLATE_PATH, cleaned_model_name + ".jinja2"),
        )

        return os.path.join(USER_TEMPLATE_PATH, model_name + ".jinja2")
        return os.path.join(USER_TEMPLATE_PATH, cleaned_model_name + ".jinja2")

    def query_hf_for_prompt_template_suggestion(self, model_name: str):
        print("query_hf_for_prompt_template_suggestion", model_name)
        api = huggingface_hub.HfApi()

        try:
@@ -28,12 +28,14 @@ SUPPORTED_MODELS = [
    "gpt-4-turbo-preview",
    "gpt-4-turbo-2024-04-09",
    "gpt-4-turbo",
    "gpt-4o-2024-05-13",
    "gpt-4o",
]

# any model starting with gpt-4- is assumed to support 'json_object'
# for others we need to explicitly state the model name
JSON_OBJECT_RESPONSE_MODELS = [
    "gpt-4-1106-preview",
    "gpt-4-0125-preview",
    "gpt-4-turbo-preview",
    "gpt-4o",
    "gpt-3.5-turbo-0125",
]
@@ -1,5 +1,5 @@
import urllib

import random
import pydantic
import structlog
from openai import AsyncOpenAI, NotFoundError, PermissionDeniedError
@@ -17,9 +17,10 @@ EXPERIMENTAL_DESCRIPTION = """Use this client if you want to connect to a servic

class Defaults(pydantic.BaseModel):
    api_url: str = "http://localhost:5000"
    api_key: str = ""
    max_token_length: int = 4096
    max_token_length: int = 8192
    model: str = ""
    api_handles_prompt_template: bool = False
    double_coercion: str = None


class ClientConfig(BaseClientConfig):
@@ -43,9 +44,9 @@ class OpenAICompatibleClient(ClientBase):
        "api_handles_prompt_template": ExtraField(
            name="api_handles_prompt_template",
            type="bool",
            label="API Handles Prompt Template",
            label="API handles prompt template (chat/completions)",
            required=False,
            description="The API handles the prompt template, meaning your choice in the UI for the prompt template below will be ignored.",
            description="The API handles the prompt template, meaning your choice in the UI for the prompt template below will be ignored. This is not recommended and should only be used if the API does not support the `completions` endpoint or you don't know which prompt template to use.",
        )
    }
@@ -83,13 +84,12 @@ class OpenAICompatibleClient(ClientBase):
    def tune_prompt_parameters(self, parameters: dict, kind: str):
        super().tune_prompt_parameters(parameters, kind)

        keys = list(parameters.keys())
        allowed_params = ["max_tokens", "presence_penalty", "top_p", "temperature"]

        valid_keys = ["temperature", "top_p", "max_tokens"]

        for key in keys:
            if key not in valid_keys:
                del parameters[key]
        # drop unsupported params
        for param in list(parameters.keys()):
            if param not in allowed_params:
                del parameters[param]

    def prompt_template(self, system_message: str, prompt: str):
@@ -117,16 +117,27 @@ class OpenAICompatibleClient(ClientBase):
        """
        Generates text from the given prompt and parameters.
        """
        human_message = {"role": "user", "content": prompt.strip()}

        self.log.debug("generate", prompt=prompt[:128] + " ...", parameters=parameters)

        try:
            response = await self.client.chat.completions.create(
                model=self.model_name, messages=[human_message], **parameters
            )

            return response.choices[0].message.content
            if self.api_handles_prompt_template:
                # OpenAI API handles prompt template
                # Use the chat completions endpoint
                self.log.debug("generate (chat/completions)", prompt=prompt[:128] + " ...", parameters=parameters)
                human_message = {"role": "user", "content": prompt.strip()}
                response = await self.client.chat.completions.create(
                    model=self.model_name, messages=[human_message], **parameters
                )
                response = response.choices[0].message.content
                return self.process_response_for_indirect_coercion(prompt, response)
            else:
                # Talemate handles prompt template
                # Use the completions endpoint
                self.log.debug("generate (completions)", prompt=prompt[:128] + " ...", parameters=parameters)
                parameters["prompt"] = prompt
                response = await self.client.completions.create(
                    model=self.model_name, **parameters
                )
                return response.choices[0].text
        except PermissionDeniedError as e:
            self.log.error("generate error", e=e)
            emit("status", message="Client API: Permission Denied", status="error")
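
The branching above comes down to this: if the remote API applies its own chat template, the raw prompt is sent as a single user message to the `chat/completions` endpoint; otherwise Talemate renders the prompt template itself and sends plain text to `completions`. A minimal sketch of the two call styles, assuming a hypothetical local base URL and model name (not Talemate's actual client wiring):

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:5000/v1", api_key="sk-dummy")

async def generate(prompt: str, api_handles_prompt_template: bool) -> str:
    if api_handles_prompt_template:
        # The server applies its own chat template: send the prompt as one user message.
        response = await client.chat.completions.create(
            model="my-model",  # hypothetical model name
            messages=[{"role": "user", "content": prompt.strip()}],
            max_tokens=256,
        )
        return response.choices[0].message.content
    # The caller already rendered the prompt template: use raw text completion.
    response = await client.completions.create(
        model="my-model", prompt=prompt, max_tokens=256
    )
    return response.choices[0].text

# asyncio.run(generate("Hello", api_handles_prompt_template=False))
```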
@@ -145,13 +156,39 @@ class OpenAICompatibleClient(ClientBase):
            self.api_url = kwargs["api_url"]
        if "max_token_length" in kwargs:
            self.max_token_length = (
                int(kwargs["max_token_length"]) if kwargs["max_token_length"] else 4096
                int(kwargs["max_token_length"]) if kwargs["max_token_length"] else 8192
            )
        if "api_key" in kwargs:
            self.api_auth = kwargs["api_key"]
            self.api_key = kwargs["api_key"]
        if "api_handles_prompt_template" in kwargs:
            self.api_handles_prompt_template = kwargs["api_handles_prompt_template"]
        # TODO: why isn't this calling super()?
        if "enabled" in kwargs:
            self.enabled = bool(kwargs["enabled"])

        if "double_coercion" in kwargs:
            self.double_coercion = kwargs["double_coercion"]

        log.warning("reconfigure", kwargs=kwargs)

        self.set_client(**kwargs)
    def jiggle_randomness(self, prompt_config: dict, offset: float = 0.3) -> dict:
        """
        adjusts temperature and presence penalty
        by random values using the base value as a center
        """

        temp = prompt_config["temperature"]

        min_offset = offset * 0.3

        prompt_config["temperature"] = random.uniform(temp + min_offset, temp + offset)

        try:
            presence_penalty = prompt_config["presence_penalty"]
            prompt_config["presence_penalty"] = round(random.uniform(
                presence_penalty + 0.1, presence_penalty + offset
            ), 1)
        except KeyError:
            pass
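
A standalone restatement of that jitter logic, to make the effect visible: temperature always moves up by 10% to 100% of `offset`, and presence penalty (when the preset has one) moves up by 0.1 to `offset`. The sample config values are illustrative only:

```python
import random

def jiggle_randomness(prompt_config: dict, offset: float = 0.3) -> None:
    temp = prompt_config["temperature"]
    min_offset = offset * 0.3
    prompt_config["temperature"] = random.uniform(temp + min_offset, temp + offset)
    try:
        presence_penalty = prompt_config["presence_penalty"]
        prompt_config["presence_penalty"] = round(
            random.uniform(presence_penalty + 0.1, presence_penalty + offset), 1
        )
    except KeyError:
        pass  # preset without presence_penalty: leave it untouched

config = {"temperature": 0.65, "presence_penalty": 0.2}
jiggle_randomness(config)
print(config)  # e.g. {'temperature': 0.83..., 'presence_penalty': 0.4}
```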
@@ -11,10 +11,15 @@ __all__ = [
    "PRESET_SIMPLE_1",
]

# TODO: refactor abstraction and make configurable

PRESENCE_PENALTY_BASE = 0.2

PRESET_TALEMATE_CONVERSATION = {
    "temperature": 0.65,
    "top_p": 0.47,
    "top_k": 42,
    "presence_penalty": PRESENCE_PENALTY_BASE,
    "repetition_penalty": 1.18,
    "repetition_penalty_range": 2048,
}
@@ -23,6 +28,7 @@ PRESET_TALEMATE_CREATOR = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 20,
    "presence_penalty": PRESENCE_PENALTY_BASE,
    "repetition_penalty": 1.15,
    "repetition_penalty_range": 512,
}
@@ -31,12 +37,13 @@ PRESET_LLAMA_PRECISE = {
    "temperature": 0.7,
    "top_p": 0.1,
    "top_k": 40,
    "presence_penalty": PRESENCE_PENALTY_BASE,
    "repetition_penalty": 1.18,
}

PRESET_DETERMINISTIC = {
    "temperature": 0.01,
    "top_p": 0.01,
    "temperature": 0.1,
    "top_p": 1,
    "top_k": 0,
    "repetition_penalty": 1.0,
}
@@ -45,6 +52,7 @@ PRESET_DIVINE_INTELLECT = {
    "temperature": 1.31,
    "top_p": 0.14,
    "top_k": 49,
    "presence_penalty": PRESENCE_PENALTY_BASE,
    "repetition_penalty_range": 1024,
    "repetition_penalty": 1.17,
}
@@ -53,9 +61,16 @@ PRESET_SIMPLE_1 = {
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 20,
    "presence_penalty": PRESENCE_PENALTY_BASE,
    "repetition_penalty": 1.15,
}

PRESET_ANALYTICAL = {
    "temperature": 0.1,
    "top_p": 0.9,
    "top_k": 20,
}


def configure(config: dict, kind: str, total_budget: int):
    """
@@ -82,7 +97,17 @@ def set_preset(config: dict, kind: str):


def preset_for_kind(kind: str):
    if kind == "conversation":

    # tag based
    if "deterministic" in kind:
        return PRESET_DETERMINISTIC
    elif "creative" in kind:
        return PRESET_DIVINE_INTELLECT
    elif "simple" in kind:
        return PRESET_SIMPLE_1
    elif "analytical" in kind:
        return PRESET_ANALYTICAL
    elif kind == "conversation":
        return PRESET_TALEMATE_CONVERSATION
    elif kind == "conversation_old":
        return PRESET_TALEMATE_CONVERSATION  # Assuming old conversation uses the same preset
@@ -133,11 +158,6 @@ def preset_for_kind(kind: str):
    elif kind == "visualize":
        return PRESET_SIMPLE_1

    # tag based
    elif "deterministic" in kind:
        return PRESET_DETERMINISTIC
    elif "creative" in kind:
        return PRESET_DIVINE_INTELLECT
    else:
        return PRESET_SIMPLE_1  # Default preset if none of the kinds match
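
The change above moves the substring-based "tag" checks to the top of `preset_for_kind`, so any `kind` string containing one of the tags is resolved before the long chain of exact-match branches. A simplified sketch of just the tag lookup, returning preset names instead of the real dicts for brevity:

```python
# Simplified sketch of the tag-based resolution; the real function also has
# many exact-match branches ("conversation", "visualize", etc.) after these.
def preset_for_kind(kind: str) -> str:
    if "deterministic" in kind:
        return "PRESET_DETERMINISTIC"
    elif "creative" in kind:
        return "PRESET_DIVINE_INTELLECT"
    elif "simple" in kind:
        return "PRESET_SIMPLE_1"
    elif "analytical" in kind:
        return "PRESET_ANALYTICAL"
    return "PRESET_SIMPLE_1"  # default when nothing matches

print(preset_for_kind("summarize_deterministic"))  # PRESET_DETERMINISTIC
print(preset_for_kind("narrate_creative"))         # PRESET_DIVINE_INTELLECT
```

One design consequence of checking tags first: a kind string that merely contains a tag substring is caught by the tag branch even if a more specific exact-match branch exists further down.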
35 src/talemate/client/remote.py Normal file
@@ -0,0 +1,35 @@
__all__ = ["RemoteServiceMixin"]


class RemoteServiceMixin:

    def prompt_template(self, system_message: str, prompt: str):
        if "<|BOT|>" in prompt:
            _, right = prompt.split("<|BOT|>", 1)
            if right:
                prompt = prompt.replace("<|BOT|>", "\nStart your response with: ")
            else:
                prompt = prompt.replace("<|BOT|>", "")

        return prompt

    def reconfigure(self, **kwargs):
        if kwargs.get("model"):
            self.model_name = kwargs["model"]
            self.set_client(kwargs.get("max_token_length"))

    def on_config_saved(self, event):
        config = event.data
        self.config = config
        self.set_client(max_token_length=self.max_token_length)

    def tune_prompt_parameters(self, parameters: dict, kind: str):
        super().tune_prompt_parameters(parameters, kind)
        keys = list(parameters.keys())
        valid_keys = ["temperature", "max_tokens"]
        for key in keys:
            if key not in valid_keys:
                del parameters[key]

    async def status(self):
        self.emit_status()
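
Remote APIs cannot be forced to begin their output with given tokens, so `prompt_template` above rewrites the `<|BOT|>` coercion marker into a plain-language instruction instead. A short demonstration of what that does to a prompt (the prompt text is hypothetical):

```python
prompt = "Narrate the scene.\n<|BOT|>She opens the door"

if "<|BOT|>" in prompt:
    _, right = prompt.split("<|BOT|>", 1)
    if right:
        # Text follows the marker: turn the coercion into an instruction.
        prompt = prompt.replace("<|BOT|>", "\nStart your response with: ")
    else:
        # Nothing follows the marker: just drop it.
        prompt = prompt.replace("<|BOT|>", "")

print(prompt)
# Narrate the scene.
#
# Start your response with: She opens the door
```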
@@ -5,12 +5,16 @@ import httpx
import structlog
from openai import AsyncOpenAI

from talemate.client.base import STOPPING_STRINGS, ClientBase, ExtraField
from talemate.client.base import STOPPING_STRINGS, ClientBase, Defaults, ExtraField
from talemate.client.registry import register

log = structlog.get_logger("talemate.client.textgenwebui")


class TextGeneratorWebuiClientDefaults(Defaults):
    api_key: str = ""


@register()
class TextGeneratorWebuiClient(ClientBase):
    auto_determine_prompt_template: bool = True
@@ -24,6 +28,20 @@ class TextGeneratorWebuiClient(ClientBase):
    class Meta(ClientBase.Meta):
        name_prefix: str = "TextGenWebUI"
        title: str = "Text-Generation-WebUI (ooba)"
        enable_api_auth: bool = True
        defaults: TextGeneratorWebuiClientDefaults = TextGeneratorWebuiClientDefaults()

    @property
    def request_headers(self):
        headers = {}
        headers["Content-Type"] = "application/json"
        if self.api_key:
            headers["Authorization"] = f"Bearer {self.api_key}"
        return headers

    def __init__(self, **kwargs):
        self.api_key = kwargs.pop("api_key", "")
        super().__init__(**kwargs)

    def tune_prompt_parameters(self, parameters: dict, kind: str):
        super().tune_prompt_parameters(parameters, kind)
@@ -33,8 +51,42 @@ class TextGeneratorWebuiClient(ClientBase):
        # is this needed?
        parameters["max_new_tokens"] = parameters["max_tokens"]
        parameters["stop"] = parameters["stopping_strings"]

        # textgenwebui does not error on unsupported parameters
        # but we should still drop them so they don't get passed to the API
        # and show up in our prompt debugging tool.

        # note that this is not the full list of their supported parameters
        # but only those we send.

        allowed_params = [
            "temperature",
            "top_p",
            "top_k",
            "max_tokens",
            "repetition_penalty",
            "repetition_penalty_range",
            "max_tokens",
            "stopping_strings",
            "skip_special_tokens",
            "stream",
            # is this needed?
            "max_new_tokens",
            "stop",
            # talemate internal
            # These will be removed before sending to the API
            # but we keep them here since they are used during the prompt finalization
            "extra_stopping_strings",
        ]

        # drop unsupported params
        for param in list(parameters.keys()):
            if param not in allowed_params:
                del parameters[param]

    def set_client(self, **kwargs):
        self.api_key = kwargs.get("api_key", self.api_key)
        self.client = AsyncOpenAI(base_url=self.api_url + "/v1", api_key="sk-1111")

    def finalize_llama3(self, parameters: dict, prompt: str) -> tuple[str, bool]:
@@ -72,9 +124,12 @@ class TextGeneratorWebuiClient(ClientBase):
        return prompt, True

    async def get_model_name(self):

        async with httpx.AsyncClient() as client:
            response = await client.get(
                f"{self.api_url}/v1/internal/model/info", timeout=2
                f"{self.api_url}/v1/internal/model/info",
                timeout=2,
                headers=self.request_headers,
            )
        if response.status_code == 404:
            raise Exception("Could not find model info (wrong api version?)")
@@ -91,9 +146,6 @@ class TextGeneratorWebuiClient(ClientBase):
        Generates text from the given prompt and parameters.
        """

        headers = {}
        headers["Content-Type"] = "application/json"

        parameters["prompt"] = prompt.strip(" ")

        async with httpx.AsyncClient() as client:
@@ -101,7 +153,7 @@ class TextGeneratorWebuiClient(ClientBase):
                f"{self.api_url}/v1/completions",
                json=parameters,
                timeout=None,
                headers=headers,
                headers=self.request_headers,
            )
            response_data = response.json()
            return response_data["choices"][0]["text"]
@@ -121,3 +173,9 @@ class TextGeneratorWebuiClient(ClientBase):
        prompt_config["repetition_penalty"] = random.uniform(
            rep_pen + min_offset * 0.3, rep_pen + offset * 0.3
        )

    def reconfigure(self, **kwargs):
        if "api_key" in kwargs:
            self.api_key = kwargs.pop("api_key")

        super().reconfigure(**kwargs)
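
The point of the new `request_headers` property is that every request to the text-generation-webui API, including the model-info probe, now carries a `Bearer` token when an API key is configured. A minimal sketch of the authenticated call, assuming a local URL and key; the `model_name` response field is also an assumption based on the code above:

```python
import asyncio
import httpx

API_URL = "http://localhost:5000"  # assumed local textgenwebui endpoint
API_KEY = "secret"                 # assumed key; empty string disables auth

async def get_model_name() -> str:
    headers = {"Content-Type": "application/json"}
    if API_KEY:
        headers["Authorization"] = f"Bearer {API_KEY}"
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"{API_URL}/v1/internal/model/info", timeout=2, headers=headers
        )
    response.raise_for_status()
    return response.json()["model_name"]

# print(asyncio.run(get_model_name()))
```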
@@ -7,6 +7,8 @@ from __future__ import annotations
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING

import pydantic

from talemate.emit import Emitter, emit

if TYPE_CHECKING:
@@ -21,17 +23,23 @@ class TalemateCommand(Emitter, ABC):
    manager: CommandManager = None
    label: str = None
    sets_scene_unsaved: bool = True
    argument_cls: pydantic.BaseModel | None = None

    def __init__(
        self,
        manager,
        *args,
        **kwargs,
    ):
        self.scene = manager.scene
        self.manager = manager
        self.args = args
        self.setup_emitter(self.scene)

        if self.argument_cls is not None:
            self.args = self.argument_cls(**kwargs)
        else:
            self.args = args

    @classmethod
    def is_command(cls, name):
        return name == cls.name or name in cls.aliases
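
With `argument_cls` set, a command's keyword arguments are validated into a pydantic model instead of being kept as a positional tuple. A hedged sketch with hypothetical names (`GreetArgs`, `CmdGreet` stand in for a real `TalemateCommand` subclass):

```python
import pydantic

class GreetArgs(pydantic.BaseModel):
    name: str
    excited: bool = False

class CmdGreet:  # stand-in for TalemateCommand
    argument_cls = GreetArgs

    def __init__(self, *args, **kwargs):
        if self.argument_cls is not None:
            self.args = self.argument_cls(**kwargs)  # validated pydantic model
        else:
            self.args = args  # legacy positional tuple

cmd = CmdGreet(name="Elara", excited=True)
print(cmd.args.name, cmd.args.excited)  # Elara True
```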
@@ -1,10 +1,9 @@
from talemate.agents.creator.assistant import ContentGenerationContext
from talemate.commands.base import TalemateCommand
from talemate.commands.manager import register
from talemate.emit import emit

__all__ = [
    "CmdAutocompleteDialogue",
]
__all__ = ["CmdAutocompleteDialogue", "CmdAutocomplete"]


@register
@@ -20,7 +19,63 @@ class CmdAutocompleteDialogue(TalemateCommand):
    async def run(self):

        input = self.args[0]
        if len(self.args) > 1:
            character_name = self.args[1]
            character = self.scene.get_character(character_name)
        else:
            character = self.scene.get_player_character()

        creator = self.scene.get_helper("creator").agent
        character = self.scene.get_player_character()

        await creator.autocomplete_dialogue(input, character, emit_signal=True)


@register
class CmdAutocomplete(TalemateCommand):
    """
    Command class for the 'autocomplete' command
    """

    name = "autocomplete"
    description = "Generate information for an AI selected actor"
    aliases = ["ac"]
    argument_cls = ContentGenerationContext

    async def run(self):

        try:
            creator = self.scene.get_helper("creator").agent
            context_type, context_name = self.args.computed_context

            if context_type == "dialogue":

                if not self.args.character:
                    character = self.scene.get_player_character()
                else:
                    character = self.scene.get_character(self.args.character)

                self.scene.log.info(
                    "Running autocomplete dialogue",
                    partial=self.args.partial,
                    character=character,
                )
                await creator.autocomplete_dialogue(
                    self.args.partial, character, emit_signal=True
                )
                return

            # force length to 35
            self.args.length = 35
            self.scene.log.info("Running autocomplete context", args=self.args)
            completion = await creator.contextual_generate(self.args)
            self.scene.log.info("Autocomplete context complete", completion=completion)
            completion = (
                completion.replace(f"{context_name}: {self.args.partial}", "")
                .lstrip(".")
                .strip()
            )

            emit("autocomplete_suggestion", completion)
        except Exception as e:
            self.scene.log.error("Error running autocomplete", error=str(e))
            emit("autocomplete_suggestion", "")
@@ -39,7 +39,7 @@ class CmdFixContinuityErrors(TalemateCommand):
            character = None

        fixed_message = await editor.check_continuity_errors(
            str(message), character, force=True
            str(message), character, force=True, message_id=message_id
        )

        self.scene.edit_message(message_id, fixed_message)
@@ -2,6 +2,7 @@ import asyncio

from talemate.commands.base import TalemateCommand
from talemate.commands.manager import register
from talemate.emit import emit


@register
@@ -32,11 +33,19 @@ class CmdRebuildArchive(TalemateCommand):
            else "PT0S"
        )

        entries = 0
        total_entries = summarizer.agent.estimated_entry_count
        while True:
            emit(
                "status",
                message=f"Rebuilding historical archive... {entries}/{total_entries}",
                status="busy",
            )
            more = await summarizer.agent.build_archive(self.scene)

            entries += 1
            if not more:
                break

        self.scene.sync_time()
        await self.scene.commit_to_memory()
        emit("status", message="Historical archive rebuilt", status="success")
@@ -1,3 +1,5 @@
import json

import structlog

from talemate.emit import AbortCommand, Emitter
@@ -38,20 +40,28 @@ class Manager(Emitter):
        # commands start with ! and are followed by a command name
        cmd = cmd.strip()
        cmd_args = ""
        cmd_kwargs = {}
        if not self.is_command(cmd):
            return False

        if ":" in cmd:
            # split command name and args which are separated by a colon
            cmd_name, cmd_args = cmd[1:].split(":", 1)
            cmd_args_unsplit = cmd_args
            cmd_args = cmd_args.split(":")

        else:
            cmd_name = cmd[1:]
            cmd_args = []

        for command_cls in self.command_classes:
            if command_cls.is_command(cmd_name):
                command = command_cls(self, *cmd_args)

                if command_cls.argument_cls:
                    cmd_kwargs = json.loads(cmd_args_unsplit)
                    cmd_args = []

                command = command_cls(self, *cmd_args, **cmd_kwargs)
                try:
                    self.processing_command = True
                    command.command_start()
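
For commands that declare an `argument_cls`, the manager now treats everything after the first colon as a single JSON object and parses it into keyword arguments, while legacy commands keep the colon-separated positional style. A sketch of the parsing step on a hypothetical command string:

```python
import json

cmd = '!autocomplete:{"partial": "She said", "character": "Elara"}'

# split command name and args, which are separated by the first colon
cmd_name, cmd_args_unsplit = cmd[1:].split(":", 1)
cmd_kwargs = json.loads(cmd_args_unsplit)

print(cmd_name)    # autocomplete
print(cmd_kwargs)  # {'partial': 'She said', 'character': 'Elara'}
```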
@@ -162,6 +162,11 @@ class CoquiConfig(BaseModel):
    api_key: Union[str, None] = None


class GoogleConfig(BaseModel):
    gcloud_credentials_path: Union[str, None] = None
    gcloud_location: Union[str, None] = None


class TTSVoiceSamples(BaseModel):
    label: str
    value: str
@@ -337,6 +342,8 @@ class Config(BaseModel):

    runpod: RunPodConfig = RunPodConfig()

    google: GoogleConfig = GoogleConfig()

    chromadb: ChromaDB = ChromaDB()

    elevenlabs: ElevenLabsConfig = ElevenLabsConfig()
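
Because the config is built from nested pydantic models, the new Google credentials simply land under a `google` key in the saved config. A hedged sketch (the credential path and location are placeholder values):

```python
from typing import Union
from pydantic import BaseModel

class GoogleConfig(BaseModel):
    gcloud_credentials_path: Union[str, None] = None
    gcloud_location: Union[str, None] = None

class Config(BaseModel):
    google: GoogleConfig = GoogleConfig()

# pydantic coerces the nested dict into a GoogleConfig instance
config = Config(
    google={
        "gcloud_credentials_path": "/path/to/key.json",  # placeholder
        "gcloud_location": "us-central1",                # placeholder
    }
)
print(config.google.gcloud_location)  # us-central1
```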
@@ -187,3 +187,5 @@ async def agent_ready_checks():
    for agent in AGENTS.values():
        if agent and agent.enabled:
            await agent.ready_check()
        elif agent and not agent.enabled:
            await agent.setup_check()
@@ -33,6 +33,7 @@ from talemate.util import (
    fix_faulty_json,
    remove_extra_linebreaks,
)
from talemate.util.prompt import condensed

__all__ = [
    "Prompt",
@@ -96,14 +97,6 @@ def validate_line(line):
    )


def condensed(s):
    """Replace all line breaks in a string with spaces."""
    r = s.replace("\n", " ").replace("\r", "")

    # also replace multiple spaces with a single space
    return re.sub(r"\s+", " ", r)


def clean_response(response):
    # remove invalid lines
    cleaned = "\n".join(
@@ -379,6 +372,7 @@ class Prompt:
        env.globals["len"] = lambda x: len(x)
        env.globals["max"] = lambda x, y: max(x, y)
        env.globals["min"] = lambda x, y: min(x, y)
        env.globals["join"] = lambda x, y: y.join(x)
        env.globals["make_list"] = lambda: JoinableList()
        env.globals["make_dict"] = lambda: {}
        env.globals["count_tokens"] = lambda x: count_tokens(
@@ -389,6 +383,9 @@ class Prompt:
        env.globals["emit_system"] = lambda status, message: emit(
            "system", status=status, message=message
        )
        env.globals["llm_can_be_coerced"] = lambda: (
            self.client.can_be_coerced if self.client else False
        )
        env.globals["emit_narrator"] = lambda message: emit("system", message=message)
        env.filters["condensed"] = condensed
        ctx.update(self.vars)
@@ -56,7 +56,9 @@ Emotions and actions should be written in italics. For example:

STAY IN THE SCENE. YOU MUST NOT BREAK CHARACTER. YOU MUST NOT BREAK THE FOURTH WALL.

YOU MUST DELIMIT YOUR CONTRIBUTION WITH "-- endofline --" AT THE END OF YOUR CONTRIBUTION.
YOU MUST MARK YOUR CONTRIBUTION WITH "-- endofline --" AT THE END OF YOUR CONTRIBUTION.

YOU MUST ONLY WRITE NEW DIALOGUE FOR {{ talking_character.name.upper() }}.

{% if scene.count_messages() >= 5 and not talking_character.dialogue_instructions %}Use an informal and colloquial register with a conversational tone. Overall, {{ talking_character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy.
{% endif -%}
@@ -109,7 +111,7 @@ YOU MUST DELIMIT YOUR CONTRIBUTION WITH "-- endofline --" AT THE END OF YOUR CON
{%- endif -%}
{% endif -%}
{% for scene_line in scene_context -%}
{{ scene_line }}
{{ scene_line }}-- endofline --

{% endfor %}
{% endblock -%}
@@ -13,9 +13,9 @@
<|SECTION:TASK|>
Continue {{ character.name }}'s unfinished line in this screenplay.

Your response MUST only be the new parts of the dialogue, not the entire line.
Your response MUST only be the new parts of {{ character.name }}'s dialogue, not the entire line.

Partial line: {{ character.name }}: {{ input }}
Continue this text: {{ character.name }}: {{ input }}
{% if not can_coerce -%}
Continuation:
<|CLOSE_SECTION|>
@@ -0,0 +1,25 @@
{% block rendered_context -%}
<|SECTION:CONTEXT|>
{%- with memory_query=scene.snapshot() -%}
{% include "extra-context.jinja2" %}
{% endwith %}
<|CLOSE_SECTION|>
{% endblock -%}
<|SECTION:SCENE|>
{% for scene_context in scene.context_history(budget=min(2048, max_tokens-300-count_tokens(self.rendered_context())), min_dialogue=20, sections=False) -%}
{{ scene_context }}
{% endfor %}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Continue the unfinished section of the next narrative.

Your response MUST only be the new parts of the narrative completion, not the entire line. Never add dialogue.

Continue this text: {{ input }}
{% if not can_coerce -%}
Continuation:
<|CLOSE_SECTION|>
{%- else -%}
<|CLOSE_SECTION|>
{{ bot_token }}{{ input }}
{%- endif -%}
@@ -44,16 +44,18 @@ Use a simple, easy to read writing format.
{% elif context_typ == "character dialogue" %}
Generate a new line of example dialogue for {{ character.name }}.

{%- if character.example_dialogue -%}
Existing Dialogue Examples:
{% for line in character.example_dialogue %}
{{ line }}
{% endfor %}
{%- endif %}

You must only respond with the generated dialogue example.
Always contain actions in asterisks. For example, *{{ character.name}} smiles*.
Always contain dialogue in quotation marks. For example, {{ character.name}}: "Hello!"

{%- if character.dialogue_instructions -%}
{% if character.dialogue_instructions %}
Dialogue instructions for {{ character.name }}: {{ character.dialogue_instructions }}
{% endif -%}
{#- GENERAL CONTEXT -#}
@@ -67,6 +69,22 @@ Use a simple, easy to read writing format.
{% endif %}
{% if generation_context.instructions %}Additional instructions: {{ generation_context.instructions }}{% endif %}
<|CLOSE_SECTION|>
{% if can_coerce -%}
{{ bot_token }}
{%- if context_typ == 'character attribute' -%}
{{ character.name }}'s {{ context_name }}:{{ generation_context.partial }}
{%- elif context_typ == 'character dialogue' -%}
{{ character.name }}:{{ generation_context.partial }}
{%- else -%}
{{ context_name }}:{{ generation_context.partial }}
{%- endif -%}
{%- elif generation_context.partial -%}
Continue the partially generated text for "{{ context_name }}".

Your response MUST only be the new parts of the text, not the entire text.

Continue this text: {{ generation_context.partial }}
{%- else -%}
{{ bot_token }}
{%- if context_typ == 'character attribute' -%}
{{ character.name }}'s {{ context_name }}:
@@ -74,4 +92,5 @@ Use a simple, easy to read writing format.
{{ character.name }}:
{%- else -%}
{{ context_name }}:
{%- endif -%}
{%- endif -%}
@@ -3,13 +3,20 @@
{{ character.description }}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Your task is to determine fitting dialogue instructions for this character.
Your task is to determine fitting dialogue instructions for {{ character.name }}.

By default all actors are given the following instructions for their character(s):

Dialogue instructions: "Use an informal and colloquial register with a conversational tone. Overall, {{ character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy."

However you can override this default instruction by providing your own instructions below.

{{ character.name }} is a character in {{ scene.context }}. The goal is always for {{ character.name }} to feel like a believable character in the context of the scene.

The character MUST feel relatable to the audience.

You must use simple language to describe the character's dialogue instructions.

Keep the format similar and stick to one paragraph.
<|CLOSE_SECTION|>
{{ bot_token }}Dialogue instructions:
@@ -1,56 +1,37 @@
{% if character -%}
{% set content_block_identifier = character.name + "'s next dialogue" %}
{% set content_block_identifier = character.name + "'s next scene" %}
{% else -%}
{% set content_block_identifier = "next narrative" %}
{% endif -%}
{% block rendered_context -%}
<|SECTION:CONTEXT|>
{%- with memory_query=scene.snapshot() -%}
{% include "extra-context.jinja2" %}
{% endwith %}
{% if character %}
{{ character.name }}'s description: {{ character.description|condensed }}
{% endif %}

{{ text }}
<|CLOSE_SECTION|>
{% endblock -%}
<|SECTION:SCENE|>
{% set scene_history=scene.context_history(budget=max_tokens-512-count_tokens(self.rendered_context())) -%}
{% set final_line_number=len(scene_history) -%}
{% for scene_context in scene_history -%}
{{ loop.index }}. {{ scene_context }}
{% endfor -%}
{% if not scene.history -%}
No dialogue so far
{% endif -%}
<|SECTION:STORY DEVELOPMENT|>
{% set scene_history=scene.context_history(budget=max_tokens-512-count_tokens(self.rendered_context()), message_id=message_id, include_reinfocements=False) -%}
{{ agent_action("summarizer", "summarize", text=join(scene_history, '\n\n'), method="facts") }}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
What are continuity errors?

Continuity errors are mistakes in a story that occur when something changes from one scene to the next. This could be a character's appearance, state of clothing, the time of day, or even the weather. These errors can be distracting for the reader and can take them out of the story. It's important to catch these errors and fix them before the story is published.
{% if character -%}
CAREFULLY Analyze {{ character.name }}'s next line in the scene for continuity errors.
CAREFULLY Analyze {{ character.name }}'s next scene for logical continuity errors in the context of the story developments so far.
{% else -%}
CAREFULLY Analyze the next line in the scene for continuity errors.
{% endif -%}

YOU MUST DO THIS LINE BY LINE PROVIDING ANALYSIS FOR EACH LINE SEPARATELY.
CAREFULLY Analyze the next scene for continuity errors.
{% endif %}

```{{ content_block_identifier }}
{{ content }}
{{ instruct_text("Create a highly accurate one line summary for the scene above. Important is anything that causes a state change in the scene, characters or objects. Use simple, clear language, and note details. Use exact words. YOUR RESPONSE MUST ONLY BE THE SUMMARY.", content)}}
```

YOU MUST NOT PROVIDE REPLACEMENT SUGGESTIONS WHEN YOU FIND CONTINUITY ERRORS.
You are looking for clear mistakes in objects or characters' state.

THINK CAREFULLY, consider state of the scene, the characters, clothing, items present or not longer present. If you find any continuity errors, list them in the response.
For example:

It is possible for the text to have multiple continuity errors. You must identify all of them.
- Characters interacting with objects in a way that contradicts the object's state as per the story developments.
- Characters forgetting something they said / agreed to earlier.

Always analyze the full dialogue, don't stop if you find one error.
THINK CAREFULLY, consider the chronological order of the story, and list any logical continuity mistakes found specifically in {{ content_block_identifier }}.

You response must be in the following format:
Your response must be in the following format:

ERROR 1: explanation of error
ERROR 2: explanation of error
ERROR 3: explanation of error
ERROR: [Description of the logical contradiction] - one per line
{% if llm_can_be_coerced() -%}
{{ bot_token }}I carefully analyzed the story developments and compared against the next proposed scene, and I found that there are
{% endif -%}
|
||||
{% if character -%}
|
||||
{% set content_block_identifier = character.name + "'s next dialogue" %}
|
||||
{% set content_block_identifier = character.name + "'s next scene (ID 11)" %}
|
||||
{% set content_fix_identifier = character.name + "'s adjusted dialogue" %}
|
||||
{% else -%}
|
||||
{% set content_block_identifier = "next narrative" %}
|
||||
{% set content_block_identifier = "next narrative (ID 11)" %}
|
||||
{% set content_fix_identifier = "adjusted narrative" %}
|
||||
{% endif -%}
|
||||
{% set _ = set_state("content_fix_identifier", content_fix_identifier) %}
|
||||
@@ -33,19 +33,19 @@ No dialogue so far
|
||||
```{{ content_block_identifier }}
|
||||
{{ content }}
|
||||
```
|
||||
|
||||
The following continuity errors have been identified in "{{ content_block_identifier }}":
|
||||
<|CLOSE_SECTION|>
|
||||
<|SECTION:TASK|>
|
||||
Write a revised draft of "{{ content_block_identifier }}" and fix the continuity errors identified in "{{ content_block_identifier }}":
|
||||
|
||||
{% for error in errors -%}
|
||||
{{ error }}
|
||||
|
||||
{% endfor %}
|
||||
<|CLOSE_SECTION|>
|
||||
<|SECTION:TASK|>
|
||||
Write a revised draft of "{{ content_block_identifier }}" and fix the continuity errors identified.
|
||||
|
||||
YOU MUST NOT CHANGE THE MEANING, PLOT DIRECTION OR TONE OF THE TEXT.
|
||||
YOU MUST ONLY FIX CONTINUITY ERRORS, KEEP THE TONE, STYLE, AND MEANING THE SAME.
|
||||
|
||||
YOU MUST ONLY FIX CONTINUITY ERRORS.
|
||||
Your revision must be framed between "```{{ content_fix_identifier }}" and "```". Your revision must only be {{ character.name }}'s dialogue and must not include any other character's dialogue.
|
||||
<|CLOSE_SECTION|>
|
||||
{{ bot_token }}```{{ content_fix_identifier }}<|TRAILING_NEW_LINE|>
|
||||
{% if llm_can_be_coerced() -%}
|
||||
{{ bot_token }}```{{ content_fix_identifier }}<|TRAILING_NEW_LINE|>
|
||||
{% endif -%}
|
||||
@@ -24,9 +24,9 @@ Use an informal and colloquial register with a conversational tone. Overall, the
|
||||
|
||||
Narration style should be that of a 90s point and click adventure game. You are omniscient and can describe the scene in detail.
|
||||
|
||||
[$REPETITION|Narration is getting repetitive. Try to choose different words to break up the repetitive text.]
|
||||
Only generate new narration. {{ extra_instructions }}
|
||||
{% include "rerun-context.jinja2" -%}
|
||||
[$REPETITION|Narration is getting repetitive. Try to choose different words to break up the repetitive text.]
|
||||
|
||||
<|CLOSE_SECTION|>
|
||||
{{ bot_token }}New Narration:
|
||||
@@ -1,12 +1,19 @@
|
||||
{% if summarization_method == "facts" -%}
|
||||
{% set output_type = "factual list" -%}
|
||||
{% else -%}
|
||||
{% set output_type = "narrative description" -%}
|
||||
{% endif -%}
|
||||
{% if extra_context -%}
|
||||
<|SECTION:PREVIOUS CONTEXT|>
|
||||
{{ extra_context }}
|
||||
<|CLOSE_SECTION|>
|
||||
{% endif -%}
|
||||
<|SECTION:TASK|>
|
||||
Question: What happens explicitly within the dialogue section alpha below? Summarize into narrative description.
|
||||
Question: What happens explicitly within the dialogue section alpha below? Summarize into a {{output_type}}.
|
||||
Content Context: This is a specific scene from {{ scene.context }}
|
||||
{% if output_type == "narrative description" %}
|
||||
Use an informal and colloquial register with a conversational tone. Overall, the narrative is informal, conversational, natural, and spontaneous, with a sense of immediacy.
|
||||
{% endif %}
|
||||
|
||||
{% if summarization_method == "long" -%}
|
||||
This should be a detailed summary of the dialogue, including all the juicy details.
|
||||
@@ -16,7 +23,11 @@ This should be a short and specific summary of the dialogue, including the most
|
||||
|
||||
YOU MUST ONLY SUMMARIZE THE CONTENT IN DIALOGUE SECTION ALPHA.
|
||||
|
||||
Expected Answer: A summarized narrative description of the dialogue section alpha, that can be inserted into the ongoing story in place of the dialogue.
|
||||
{% if output_type == "narrative description" %}
|
||||
Expected Answer: A summarized {{output_type}} of the dialogue section alpha, that can be inserted into the ongoing story in place of the dialogue.
|
||||
{% elif output_type == "factual list" %}
|
||||
Expected Answer: A highly accurate numerical chronological list of the events and state changes that occur in the dialogue section alpha. Important is anything that causes a state change in the scene, characters or objects. Use simple, clear language, and note details. Use exact words. Note all the state changes. Leave nothing out.
|
||||
{% endif %}
|
||||
{% if extra_instructions -%}
|
||||
{{ extra_instructions }}
|
||||
{% endif -%}
|
||||
|
||||
@@ -2,6 +2,7 @@ import pydantic
|
||||
import structlog
|
||||
|
||||
from talemate.agents.creator.assistant import ContentGenerationContext
|
||||
from talemate.emit import emit
|
||||
from talemate.instance import get_agent
|
||||
|
||||
log = structlog.get_logger("talemate.server.assistant")
|
||||
@@ -41,3 +42,46 @@ class AssistantPlugin:
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
async def handle_autocomplete(self, data: dict):
|
||||
data = ContentGenerationContext(**data)
|
||||
try:
|
||||
creator = self.scene.get_helper("creator").agent
|
||||
context_type, context_name = data.computed_context
|
||||
|
||||
if context_type == "dialogue":
|
||||
|
||||
if not data.character:
|
||||
character = self.scene.get_player_character()
|
||||
else:
|
||||
character = self.scene.get_character(data.character)
|
||||
|
||||
log.info(
|
||||
"Running autocomplete dialogue",
|
||||
partial=data.partial,
|
||||
character=character,
|
||||
)
|
||||
await creator.autocomplete_dialogue(
|
||||
data.partial, character, emit_signal=True
|
||||
)
|
||||
return
|
||||
elif context_type == "narrative":
|
||||
log.info("Running autocomplete narrative", partial=data.partial)
|
||||
await creator.autocomplete_narrative(data.partial, emit_signal=True)
|
||||
return
|
||||
|
||||
# force length to 35
|
||||
data.length = 35
|
||||
log.info("Running autocomplete context", args=data)
|
||||
completion = await creator.contextual_generate(data)
|
||||
log.info("Autocomplete context complete", completion=completion)
|
||||
completion = (
|
||||
completion.replace(f"{context_name}: {data.partial}", "")
|
||||
.lstrip(".")
|
||||
.strip()
|
||||
)
|
||||
|
||||
emit("autocomplete_suggestion", completion)
|
||||
except Exception as e:
|
||||
log.error("Error running autocomplete", error=str(e))
|
||||
emit("autocomplete_suggestion", "")
|
||||
|
||||
@@ -11,6 +11,20 @@ class TestPromptPayload(pydantic.BaseModel):
|
||||
kind: str
|
||||
|
||||
|
||||
def ensure_number(v):
|
||||
"""
|
||||
if v is a str but digit turn into into or float
|
||||
"""
|
||||
|
||||
if isinstance(v, str):
|
||||
if v.isdigit():
|
||||
return int(v)
|
||||
try:
|
||||
return float(v)
|
||||
except ValueError:
|
||||
return v
|
||||
return v
|
||||
|
||||
class DevToolsPlugin:
|
||||
router = "devtools"
|
||||
|
||||
@@ -30,6 +44,14 @@ class DevToolsPlugin:
|
||||
async def handle_test_prompt(self, data):
|
||||
payload = TestPromptPayload(**data)
|
||||
client = self.websocket_handler.llm_clients[payload.client_name]["client"]
|
||||
|
||||
log.info(
|
||||
"Testing prompt",
|
||||
payload={
|
||||
k: ensure_number(v) for k, v in payload.generation_parameters.items() if k != "prompt"
|
||||
},
|
||||
)
|
||||
|
||||
response = await client.generate(
|
||||
payload.prompt,
|
||||
payload.generation_parameters,
|
||||
|
||||
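
For illustration, here is how `ensure_number` behaves on values typical of generation parameters arriving over the websocket as strings. This restates the function above unchanged, so the example is self-contained:

```python
def ensure_number(v):
    if isinstance(v, str):
        if v.isdigit():
            return int(v)
        try:
            return float(v)
        except ValueError:
            return v
    return v

print(ensure_number("42"))    # 42 (int)
print(ensure_number("0.65"))  # 0.65 (float)
print(ensure_number("on"))    # 'on' (non-numeric strings pass through)
print(ensure_number(1.18))    # 1.18 (non-strings pass through)
```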
@@ -515,7 +515,7 @@ class WebsocketHandler(Receiver):
                "name": emission.id,
                "status": emission.status,
                "data": emission.data,
                "max_token_length": client.max_token_length if client else 4096,
                "max_token_length": client.max_token_length if client else 8192,
                "api_url": getattr(client, "api_url", None) if client else None,
                "api_url": getattr(client, "api_url", None) if client else None,
                "api_key": getattr(client, "api_key", None) if client else None,
@@ -757,6 +757,18 @@ class WebsocketHandler(Receiver):
        self.scene.delete_message(message_id)

    def edit_message(self, message_id, new_text):

        message = self.scene.get_message(message_id)

        editor = instance.get_agent("editor")

        if editor.enabled and message.typ == "character":
            character = self.scene.get_character(message.character_name)
            loop = asyncio.get_event_loop()
            new_text = loop.run_until_complete(
                editor.fix_exposition(new_text, character)
            )

        self.scene.edit_message(message_id, new_text)

    def apply_scene_config(self, scene_config: dict):
@@ -4,6 +4,7 @@ from typing import Any, Union
import pydantic
import structlog

from talemate.instance import get_agent
from talemate.world_state.manager import (
    StateReinforcementTemplate,
    WorldStateManager,
@@ -105,6 +106,10 @@ class DeleteWorldStateTemplatePayload(pydantic.BaseModel):
    template: StateReinforcementTemplate


class GenerateCharacterDialogueInstructionsPayload(pydantic.BaseModel):
    name: str


class WorldStateManagerPlugin:
    router = "world_state_manager"

@@ -602,3 +607,36 @@ class WorldStateManagerPlugin:

        await self.handle_get_templates({})
        await self.signal_operation_done()

    async def handle_generate_character_dialogue_instructions(self, data):
        payload = GenerateCharacterDialogueInstructionsPayload(**data)

        log.debug("Generate character dialogue instructions", name=payload.name)

        character = self.scene.get_character(payload.name)

        if not character:
            log.error("Character not found", name=payload.name)
            return

        creator = get_agent("creator")

        instructions = await creator.determine_character_dialogue_instructions(
            character
        )

        character.dialogue_instructions = instructions

        self.websocket_handler.queue_put(
            {
                "type": "world_state_manager",
                "action": "character_dialogue_instructions_generated",
                "data": {
                    "name": payload.name,
                    "instructions": instructions,
                },
            }
        )

        await self.signal_operation_done()
        self.scene.emit_status()
@@ -46,6 +46,7 @@ from talemate.scene_message import (
    TimePassageMessage,
)
from talemate.util import colored_text, count_tokens, extract_metadata, wrap_text
from talemate.util.prompt import condensed
from talemate.world_state import WorldState
from talemate.world_state.manager import WorldStateManager

@@ -1385,11 +1386,29 @@ class Scene(Emitter):
        conversation_format = self.conversation_format
        actor_direction_mode = self.get_helper("director").agent.actor_direction_mode

        history_offset = kwargs.get("history_offset", 0)
        message_id = kwargs.get("message_id")
        include_reinfocements = kwargs.get("include_reinfocements", True)

        # if message id is provided, find the message in the history
        if message_id:

            if history_offset:
                log.warning(
                    "context_history",
                    message="history_offset is ignored when message_id is provided",
                )

            message_index = self.message_index(message_id)
            history_start = message_index - 1
        else:
            history_start = len(self.history) - (1 + history_offset)

        # collect dialogue

        count = 0

        for i in range(len(self.history) - 1, -1, -1):
        for i in range(history_start, -1, -1):
            count += 1

            message = self.history[i]
@@ -1397,6 +1416,9 @@ class Scene(Emitter):
            if message.hidden:
                continue

            if isinstance(message, ReinforcementMessage) and not include_reinfocements:
                continue

            if isinstance(message, DirectorMessage):
                if not keep_director:
                    continue
@@ -1441,7 +1463,7 @@ class Scene(Emitter):
            if count_tokens(parts_context) + count_tokens(text) > budget_context:
                break

            parts_context.insert(0, text)
            parts_context.insert(0, condensed(text))

        if count_tokens(parts_context + parts_dialogue) < 1024:
            intro = self.get_intro()
@@ -2101,7 +2123,7 @@ class Scene(Emitter):

    async def add_to_recent_scenes(self):
        log.debug("add_to_recent_scenes", filename=self.filename)
        config = Config(**self.config)
        config = load_config(as_model=True)
        config.recent_scenes.push(self)
        config.save()

@@ -2202,6 +2224,7 @@ class Scene(Emitter):
        "ts": scene.ts,
        "help": scene.help,
        "experimental": scene.experimental,
        "restore_from": scene.restore_from,
    }

    @property
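
The window-selection change in `context_history` above is worth spelling out: when a `message_id` is given, iteration starts just before that message; otherwise it starts from the end of the history minus the offset. A hedged sketch with a toy history list, where `list.index` stands in for the real `message_index()` lookup:

```python
history = ["m1", "m2", "m3", "m4", "m5"]

def window_start(history, message_id=None, history_offset=0):
    if message_id is not None:
        message_index = history.index(message_id)  # stand-in for message_index()
        return message_index - 1
    return len(history) - (1 + history_offset)

print(window_start(history))                    # 4 (start at the last message)
print(window_start(history, history_offset=2))  # 2
print(window_start(history, message_id="m4"))   # 2 (everything before m4)
```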
@@ -5,7 +5,7 @@ import json
import re
import textwrap
from typing import List, Union

import struct
import isodate
import structlog
from colorama import Back, Fore, Style, init
@@ -179,6 +179,29 @@ def color_emotes(text: str, color: str = "blue") -> str:
def extract_metadata(img_path, img_format):
    return chara_read(img_path)

def read_metadata_from_png_text(image_path: str) -> dict:

    """
    Reads the character metadata from the tEXt chunk of a PNG image.
    """

    # Read the image
    with open(image_path, 'rb') as f:
        png_data = f.read()

    # Split the PNG data into chunks
    offset = 8  # Skip the PNG signature
    while offset < len(png_data):
        length = struct.unpack('!I', png_data[offset:offset+4])[0]
        chunk_type = png_data[offset+4:offset+8]
        chunk_data = png_data[offset+8:offset+8+length]
        if chunk_type == b'tEXt':
            keyword, text_data = chunk_data.split(b'\x00', 1)
            if keyword == b'chara':
                return json.loads(base64.b64decode(text_data).decode('utf-8'))
        offset += 12 + length

    raise ValueError('No character metadata found.')

def chara_read(img_url, input_format=None):
    if input_format is None:
@@ -194,7 +217,6 @@ def chara_read(img_url, input_format=None):
    image = Image.open(io.BytesIO(image_data))

    exif_data = image.getexif()

    if format == "webp":
        try:
            if 37510 in exif_data:
@@ -235,7 +257,15 @@ def chara_read(img_url, input_format=None):
            return base64_decoded_data
        else:
            log.warn("chara_load", msg="No chara data found in PNG image.")
            return False
            log.warn("chara_load", msg="Trying to read from PNG text.")

            try:
                return read_metadata_from_png_text(img_url)
            except ValueError:
                return False
            except Exception as exc:
                log.error("chara_load", msg="Error reading metadata from PNG text.", exc_info=exc)
                return False
    else:
        return None
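
A note on the chunk walk above: PNG chunks are laid out as a 4-byte length, 4-byte type, the payload, and a 4-byte CRC, which is why each iteration advances the offset by 12 plus the payload length. A usage sketch for the new fallback; the import path and file name are assumptions for illustration:

```python
from talemate.util import read_metadata_from_png_text  # assumed import path

try:
    meta = read_metadata_from_png_text("character_card.png")  # hypothetical file
    print(meta.get("name"))
except ValueError:
    print("No 'chara' tEXt chunk in this PNG")
```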
@@ -949,6 +979,7 @@ def ensure_dialog_line_format(line: str, default_wrap: str = None) -> str:

    line = line.replace('*, "', '* "')
    line = line.replace('*. "', '* "')
    line = line.replace("*.", ".*")

    # if the line ends with a whitespace followed by a classifier, strip both from the end
    # as this indicates the remnants of a partial segment that was removed.
@@ -1,4 +1,6 @@
__all__ = ["replace_special_tokens"]
import re

__all__ = ["condensed", "replace_special_tokens"]


def replace_special_tokens(prompt: str):
@@ -12,3 +14,11 @@ def replace_special_tokens(prompt: str):
    return prompt.replace("<|TRAILING_NEW_LINE|>", "\n").replace(
        "<|TRAILING_SPACE|>", " "
    )


def condensed(s):
    """Replace all line breaks in a string with spaces."""
    r = s.replace("\n", " ").replace("\r", "")

    # also replace multiple spaces with a single space
    return re.sub(r"\s+", " ", r)
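
`condensed` now lives in `talemate.util.prompt` so both the prompt renderer and `Scene.context_history` can flatten multi-line text into a single line. A quick usage example (the sample string is made up):

```python
from talemate.util.prompt import condensed

print(condensed("A tall elf.\n\nCarries a\r\nsilver bow."))
# A tall elf. Carries a silver bow.
```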
3 talemate_frontend/example.env.development.local Normal file
@@ -0,0 +1,3 @@
ALLOWED_HOSTS=example.com
# wss if behind ssl, ws if not
VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL=wss://example.com:5050
492 talemate_frontend/package-lock.json generated
@@ -1,18 +1,22 @@
{
  "name": "talemate_frontend",
  "version": "0.24.0",
  "version": "0.25.5",
  "lockfileVersion": 2,
  "requires": true,
  "packages": {
    "": {
      "name": "talemate_frontend",
      "version": "0.24.0",
      "version": "0.25.5",
      "dependencies": {
        "@codemirror/lang-markdown": "^6.2.5",
        "@codemirror/theme-one-dark": "^6.1.2",
        "@mdi/font": "7.4.47",
        "codemirror": "^6.0.1",
        "core-js": "^3.8.3",
        "dot-prop": "^8.0.2",
        "roboto-fontface": "*",
        "vue": "^3.2.13",
        "vue-codemirror": "^6.1.1",
        "vuetify": "^3.5.0",
        "webfontloader": "^1.0.0"
      },
@@ -1823,6 +1827,149 @@
        "node": ">=6.9.0"
      }
    },
    "node_modules/@codemirror/autocomplete": {
      "version": "6.16.0",
      "resolved": "https://registry.npmjs.org/@codemirror/autocomplete/-/autocomplete-6.16.0.tgz",
      "integrity": "sha512-P/LeCTtZHRTCU4xQsa89vSKWecYv1ZqwzOd5topheGRf+qtacFgBeIMQi3eL8Kt/BUNvxUWkx+5qP2jlGoARrg==",
      "dependencies": {
        "@codemirror/language": "^6.0.0",
        "@codemirror/state": "^6.0.0",
        "@codemirror/view": "^6.17.0",
        "@lezer/common": "^1.0.0"
      },
      "peerDependencies": {
        "@codemirror/language": "^6.0.0",
        "@codemirror/state": "^6.0.0",
        "@codemirror/view": "^6.0.0",
        "@lezer/common": "^1.0.0"
      }
    },
    "node_modules/@codemirror/commands": {
      "version": "6.5.0",
      "resolved": "https://registry.npmjs.org/@codemirror/commands/-/commands-6.5.0.tgz",
      "integrity": "sha512-rK+sj4fCAN/QfcY9BEzYMgp4wwL/q5aj/VfNSoH1RWPF9XS/dUwBkvlL3hpWgEjOqlpdN1uLC9UkjJ4tmyjJYg==",
      "dependencies": {
        "@codemirror/language": "^6.0.0",
        "@codemirror/state": "^6.4.0",
        "@codemirror/view": "^6.0.0",
        "@lezer/common": "^1.1.0"
      }
    },
    "node_modules/@codemirror/lang-css": {
      "version": "6.2.1",
      "resolved": "https://registry.npmjs.org/@codemirror/lang-css/-/lang-css-6.2.1.tgz",
      "integrity": "sha512-/UNWDNV5Viwi/1lpr/dIXJNWiwDxpw13I4pTUAsNxZdg6E0mI2kTQb0P2iHczg1Tu+H4EBgJR+hYhKiHKko7qg==",
      "dependencies": {
        "@codemirror/autocomplete": "^6.0.0",
        "@codemirror/language": "^6.0.0",
        "@codemirror/state": "^6.0.0",
        "@lezer/common": "^1.0.2",
        "@lezer/css": "^1.0.0"
      }
    },
    "node_modules/@codemirror/lang-html": {
      "version": "6.4.9",
      "resolved": "https://registry.npmjs.org/@codemirror/lang-html/-/lang-html-6.4.9.tgz",
      "integrity": "sha512-aQv37pIMSlueybId/2PVSP6NPnmurFDVmZwzc7jszd2KAF8qd4VBbvNYPXWQq90WIARjsdVkPbw29pszmHws3Q==",
      "dependencies": {
        "@codemirror/autocomplete": "^6.0.0",
        "@codemirror/lang-css": "^6.0.0",
        "@codemirror/lang-javascript": "^6.0.0",
        "@codemirror/language": "^6.4.0",
        "@codemirror/state": "^6.0.0",
        "@codemirror/view": "^6.17.0",
        "@lezer/common": "^1.0.0",
        "@lezer/css": "^1.1.0",
        "@lezer/html": "^1.3.0"
      }
    },
    "node_modules/@codemirror/lang-javascript": {
      "version": "6.2.2",
      "resolved": "https://registry.npmjs.org/@codemirror/lang-javascript/-/lang-javascript-6.2.2.tgz",
      "integrity": "sha512-VGQfY+FCc285AhWuwjYxQyUQcYurWlxdKYT4bqwr3Twnd5wP5WSeu52t4tvvuWmljT4EmgEgZCqSieokhtY8hg==",
      "dependencies": {
        "@codemirror/autocomplete": "^6.0.0",
        "@codemirror/language": "^6.6.0",
        "@codemirror/lint": "^6.0.0",
        "@codemirror/state": "^6.0.0",
        "@codemirror/view": "^6.17.0",
        "@lezer/common": "^1.0.0",
        "@lezer/javascript": "^1.0.0"
      }
    },
    "node_modules/@codemirror/lang-markdown": {
      "version": "6.2.5",
      "resolved": "https://registry.npmjs.org/@codemirror/lang-markdown/-/lang-markdown-6.2.5.tgz",
      "integrity": "sha512-Hgke565YcO4fd9pe2uLYxnMufHO5rQwRr+AAhFq8ABuhkrjyX8R5p5s+hZUTdV60O0dMRjxKhBLxz8pu/MkUVA==",
      "dependencies": {
        "@codemirror/autocomplete": "^6.7.1",
        "@codemirror/lang-html": "^6.0.0",
        "@codemirror/language": "^6.3.0",
        "@codemirror/state": "^6.0.0",
        "@codemirror/view": "^6.0.0",
        "@lezer/common": "^1.2.1",
        "@lezer/markdown": "^1.0.0"
      }
    },
    "node_modules/@codemirror/language": {
      "version": "6.10.1",
      "resolved": "https://registry.npmjs.org/@codemirror/language/-/language-6.10.1.tgz",
      "integrity": "sha512-5GrXzrhq6k+gL5fjkAwt90nYDmjlzTIJV8THnxNFtNKWotMIlzzN+CpqxqwXOECnUdOndmSeWntVrVcv5axWRQ==",
      "dependencies": {
        "@codemirror/state": "^6.0.0",
        "@codemirror/view": "^6.23.0",
        "@lezer/common": "^1.1.0",
        "@lezer/highlight": "^1.0.0",
        "@lezer/lr": "^1.0.0",
        "style-mod": "^4.0.0"
      }
    },
    "node_modules/@codemirror/lint": {
      "version": "6.7.0",
      "resolved": "https://registry.npmjs.org/@codemirror/lint/-/lint-6.7.0.tgz",
      "integrity": "sha512-LTLOL2nT41ADNSCCCCw8Q/UmdAFzB23OUYSjsHTdsVaH0XEo+orhuqbDNWzrzodm14w6FOxqxpmy4LF8Lixqjw==",
      "dependencies": {
        "@codemirror/state": "^6.0.0",
        "@codemirror/view": "^6.0.0",
        "crelt": "^1.0.5"
      }
    },
    "node_modules/@codemirror/search": {
      "version": "6.5.6",
      "resolved": "https://registry.npmjs.org/@codemirror/search/-/search-6.5.6.tgz",
      "integrity": "sha512-rpMgcsh7o0GuCDUXKPvww+muLA1pDJaFrpq/CCHtpQJYz8xopu4D1hPcKRoDD0YlF8gZaqTNIRa4VRBWyhyy7Q==",
      "dependencies": {
        "@codemirror/state": "^6.0.0",
        "@codemirror/view": "^6.0.0",
        "crelt": "^1.0.5"
      }
    },
    "node_modules/@codemirror/state": {
      "version": "6.4.1",
      "resolved": "https://registry.npmjs.org/@codemirror/state/-/state-6.4.1.tgz",
      "integrity": "sha512-QkEyUiLhsJoZkbumGZlswmAhA7CBU02Wrz7zvH4SrcifbsqwlXShVXg65f3v/ts57W3dqyamEriMhij1Z3Zz4A=="
    },
    "node_modules/@codemirror/theme-one-dark": {
      "version": "6.1.2",
      "resolved": "https://registry.npmjs.org/@codemirror/theme-one-dark/-/theme-one-dark-6.1.2.tgz",
      "integrity": "sha512-F+sH0X16j/qFLMAfbciKTxVOwkdAS336b7AXTKOZhy8BR3eH/RelsnLgLFINrpST63mmN2OuwUt0W2ndUgYwUA==",
      "dependencies": {
        "@codemirror/language": "^6.0.0",
        "@codemirror/state": "^6.0.0",
        "@codemirror/view": "^6.0.0",
        "@lezer/highlight": "^1.0.0"
      }
    },
    "node_modules/@codemirror/view": {
      "version": "6.26.3",
      "resolved": "https://registry.npmjs.org/@codemirror/view/-/view-6.26.3.tgz",
      "integrity": "sha512-gmqxkPALZjkgSxIeeweY/wGQXBfwTUaLs8h7OKtSwfbj9Ct3L11lD+u1sS7XHppxFQoMDiMDp07P9f3I2jWOHw==",
      "dependencies": {
        "@codemirror/state": "^6.4.0",
        "style-mod": "^4.1.0",
        "w3c-keyname": "^2.2.4"
      }
    },
    "node_modules/@discoveryjs/json-ext": {
      "version": "0.5.7",
      "resolved": "https://registry.npmmirror.com/@discoveryjs/json-ext/-/json-ext-0.5.7.tgz",
@@ -1986,6 +2133,66 @@
      "integrity": "sha512-Hcv+nVC0kZnQ3tD9GVu5xSMR4VVYOteQIr/hwFPVEvPdlXqgGEuRjiheChHgdM+JyqdgNcmzZOX/tnl0JOiI7A==",
      "dev": true
    },
    "node_modules/@lezer/common": {
      "version": "1.2.1",
      "resolved": "https://registry.npmjs.org/@lezer/common/-/common-1.2.1.tgz",
      "integrity": "sha512-yemX0ZD2xS/73llMZIK6KplkjIjf2EvAHcinDi/TfJ9hS25G0388+ClHt6/3but0oOxinTcQHJLDXh6w1crzFQ=="
    },
    "node_modules/@lezer/css": {
      "version": "1.1.8",
      "resolved": "https://registry.npmjs.org/@lezer/css/-/css-1.1.8.tgz",
      "integrity": "sha512-7JhxupKuMBaWQKjQoLtzhGj83DdnZY9MckEOG5+/iLKNK2ZJqKc6hf6uc0HjwCX7Qlok44jBNqZhHKDhEhZYLA==",
      "dependencies": {
        "@lezer/common": "^1.2.0",
        "@lezer/highlight": "^1.0.0",
        "@lezer/lr": "^1.0.0"
      }
    },
    "node_modules/@lezer/highlight": {
      "version": "1.2.0",
      "resolved": "https://registry.npmjs.org/@lezer/highlight/-/highlight-1.2.0.tgz",
      "integrity": "sha512-WrS5Mw51sGrpqjlh3d4/fOwpEV2Hd3YOkp9DBt4k8XZQcoTHZFB7sx030A6OcahF4J1nDQAa3jXlTVVYH50IFA==",
      "dependencies": {
        "@lezer/common": "^1.0.0"
      }
    },
    "node_modules/@lezer/html": {
      "version": "1.3.9",
      "resolved": "https://registry.npmjs.org/@lezer/html/-/html-1.3.9.tgz",
      "integrity": "sha512-MXxeCMPyrcemSLGaTQEZx0dBUH0i+RPl8RN5GwMAzo53nTsd/Unc/t5ZxACeQoyPUM5/GkPLRUs2WliOImzkRA==",
      "dependencies": {
        "@lezer/common": "^1.2.0",
        "@lezer/highlight": "^1.0.0",
        "@lezer/lr": "^1.0.0"
      }
    },
    "node_modules/@lezer/javascript": {
      "version": "1.4.15",
      "resolved": "https://registry.npmjs.org/@lezer/javascript/-/javascript-1.4.15.tgz",
      "integrity": "sha512-B082ZdjI0vo2AgLqD834GlRTE9gwRX8NzHzKq5uDwEnQ9Dq+A/CEhd3nf68tiNA2f9O+8jS1NeSTUYT9IAqcTw==",
      "dependencies": {
        "@lezer/common": "^1.2.0",
        "@lezer/highlight": "^1.1.3",
        "@lezer/lr": "^1.3.0"
      }
    },
    "node_modules/@lezer/lr": {
      "version": "1.4.0",
      "resolved": "https://registry.npmjs.org/@lezer/lr/-/lr-1.4.0.tgz",
      "integrity": "sha512-Wst46p51km8gH0ZUmeNrtpRYmdlRHUpN1DQd3GFAyKANi8WVz8c2jHYTf1CVScFaCjQw1iO3ZZdqGDxQPRErTg==",
      "dependencies": {
        "@lezer/common": "^1.0.0"
      }
    },
    "node_modules/@lezer/markdown": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/@lezer/markdown/-/markdown-1.3.0.tgz",
      "integrity": "sha512-ErbEQ15eowmJUyT095e9NJc3BI9yZ894fjSDtHftD0InkfUBGgnKSU6dvan9jqsZuNHg2+ag/1oyDRxNsENupQ==",
      "dependencies": {
        "@lezer/common": "^1.0.0",
        "@lezer/highlight": "^1.0.0"
      }
    },
    "node_modules/@mdi/font": {
      "version": "7.4.47",
      "resolved": "https://registry.npmjs.org/@mdi/font/-/font-7.4.47.tgz",
@@ -4097,6 +4304,20 @@
        "node": ">=6"
      }
    },
    "node_modules/codemirror": {
      "version": "6.0.1",
      "resolved": "https://registry.npmjs.org/codemirror/-/codemirror-6.0.1.tgz",
      "integrity": "sha512-J8j+nZ+CdWmIeFIGXEFbFPtpiYacFMDR8GlHK3IyHQJMCaVRfGx9NT+Hxivv1ckLWPvNdZqndbr/7lVhrf/Svg==",
      "dependencies": {
        "@codemirror/autocomplete": "^6.0.0",
        "@codemirror/commands": "^6.0.0",
        "@codemirror/language": "^6.0.0",
        "@codemirror/lint": "^6.0.0",
        "@codemirror/search": "^6.0.0",
        "@codemirror/state": "^6.0.0",
        "@codemirror/view": "^6.0.0"
      }
    },
    "node_modules/color-convert": {
      "version": "1.9.3",
      "resolved": "https://registry.npmmirror.com/color-convert/-/color-convert-1.9.3.tgz",
@@ -4331,6 +4552,11 @@
        "node": ">=10"
      }
    },
    "node_modules/crelt": {
      "version": "1.0.6",
      "resolved": "https://registry.npmjs.org/crelt/-/crelt-1.0.6.tgz",
      "integrity": "sha512-VQ2MBenTq1fWZUH9DJNGti7kKv6EeAuYr3cLwxUWhIu1baTaXh4Ib5W2CqHVqib4/MqbYGJqiL3Zb8GJZr3l4g=="
    },
    "node_modules/cross-spawn": {
      "version": "6.0.5",
      "resolved": "https://registry.npmmirror.com/cross-spawn/-/cross-spawn-6.0.5.tgz",
@@ -9870,6 +10096,11 @@
        "node": ">=8"
      }
    },
    "node_modules/style-mod": {
      "version": "4.1.2",
      "resolved": "https://registry.npmjs.org/style-mod/-/style-mod-4.1.2.tgz",
      "integrity": "sha512-wnD1HyVqpJUI2+eKZ+eo1UwghftP6yuFheBqqe+bWCotBjC2K1YnteJILRMs3SM4V/0dLEW1SC27MWP5y+mwmw=="
    },
    "node_modules/stylehacks": {
      "version": "5.1.1",
      "resolved": "https://registry.npmmirror.com/stylehacks/-/stylehacks-5.1.1.tgz",
@@ -10401,6 +10632,21 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"node_modules/vue-codemirror": {
|
||||
"version": "6.1.1",
|
||||
"resolved": "https://registry.npmjs.org/vue-codemirror/-/vue-codemirror-6.1.1.tgz",
|
||||
"integrity": "sha512-rTAYo44owd282yVxKtJtnOi7ERAcXTeviwoPXjIc6K/IQYUsoDkzPvw/JDFtSP6T7Cz/2g3EHaEyeyaQCKoDMg==",
|
||||
"dependencies": {
|
||||
"@codemirror/commands": "6.x",
|
||||
"@codemirror/language": "6.x",
|
||||
"@codemirror/state": "6.x",
|
||||
"@codemirror/view": "6.x"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"codemirror": "6.x",
|
||||
"vue": "3.x"
|
||||
}
|
||||
},
|
||||
"node_modules/vue-eslint-parser": {
|
||||
"version": "8.3.0",
|
||||
"resolved": "https://registry.npmmirror.com/vue-eslint-parser/-/vue-eslint-parser-8.3.0.tgz",
|
||||
@@ -10614,6 +10860,11 @@
|
||||
}
|
||||
}
|
||||
},
|
||||
"node_modules/w3c-keyname": {
|
||||
"version": "2.2.8",
|
||||
"resolved": "https://registry.npmjs.org/w3c-keyname/-/w3c-keyname-2.2.8.tgz",
|
||||
"integrity": "sha512-dpojBhNsCNN7T82Tm7k26A6G9ML3NkhDsnw9n/eoxSRlVBB4CEtIQ/KTCLI2Fwf3ataSXRhYFkQi3SlnFwPvPQ=="
|
||||
},
|
||||
"node_modules/watchpack": {
|
||||
"version": "2.4.0",
|
||||
"resolved": "https://registry.npmmirror.com/watchpack/-/watchpack-2.4.0.tgz",
|
||||
@@ -12590,6 +12841,143 @@
|
||||
"to-fast-properties": "^2.0.0"
|
||||
}
|
||||
},
|
||||
"@codemirror/autocomplete": {
|
||||
"version": "6.16.0",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/autocomplete/-/autocomplete-6.16.0.tgz",
|
||||
"integrity": "sha512-P/LeCTtZHRTCU4xQsa89vSKWecYv1ZqwzOd5topheGRf+qtacFgBeIMQi3eL8Kt/BUNvxUWkx+5qP2jlGoARrg==",
|
||||
"requires": {
|
||||
"@codemirror/language": "^6.0.0",
|
||||
"@codemirror/state": "^6.0.0",
|
||||
"@codemirror/view": "^6.17.0",
|
||||
"@lezer/common": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"@codemirror/commands": {
|
||||
"version": "6.5.0",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/commands/-/commands-6.5.0.tgz",
|
||||
"integrity": "sha512-rK+sj4fCAN/QfcY9BEzYMgp4wwL/q5aj/VfNSoH1RWPF9XS/dUwBkvlL3hpWgEjOqlpdN1uLC9UkjJ4tmyjJYg==",
|
||||
"requires": {
|
||||
"@codemirror/language": "^6.0.0",
|
||||
"@codemirror/state": "^6.4.0",
|
||||
"@codemirror/view": "^6.0.0",
|
||||
"@lezer/common": "^1.1.0"
|
||||
}
|
||||
},
|
||||
"@codemirror/lang-css": {
|
||||
"version": "6.2.1",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/lang-css/-/lang-css-6.2.1.tgz",
|
||||
"integrity": "sha512-/UNWDNV5Viwi/1lpr/dIXJNWiwDxpw13I4pTUAsNxZdg6E0mI2kTQb0P2iHczg1Tu+H4EBgJR+hYhKiHKko7qg==",
|
||||
"requires": {
|
||||
"@codemirror/autocomplete": "^6.0.0",
|
||||
"@codemirror/language": "^6.0.0",
|
||||
"@codemirror/state": "^6.0.0",
|
||||
"@lezer/common": "^1.0.2",
|
||||
"@lezer/css": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"@codemirror/lang-html": {
|
||||
"version": "6.4.9",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/lang-html/-/lang-html-6.4.9.tgz",
|
||||
"integrity": "sha512-aQv37pIMSlueybId/2PVSP6NPnmurFDVmZwzc7jszd2KAF8qd4VBbvNYPXWQq90WIARjsdVkPbw29pszmHws3Q==",
|
||||
"requires": {
|
||||
"@codemirror/autocomplete": "^6.0.0",
|
||||
"@codemirror/lang-css": "^6.0.0",
|
||||
"@codemirror/lang-javascript": "^6.0.0",
|
||||
"@codemirror/language": "^6.4.0",
|
||||
"@codemirror/state": "^6.0.0",
|
||||
"@codemirror/view": "^6.17.0",
|
||||
"@lezer/common": "^1.0.0",
|
||||
"@lezer/css": "^1.1.0",
|
||||
"@lezer/html": "^1.3.0"
|
||||
}
|
||||
},
|
||||
"@codemirror/lang-javascript": {
|
||||
"version": "6.2.2",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/lang-javascript/-/lang-javascript-6.2.2.tgz",
|
||||
"integrity": "sha512-VGQfY+FCc285AhWuwjYxQyUQcYurWlxdKYT4bqwr3Twnd5wP5WSeu52t4tvvuWmljT4EmgEgZCqSieokhtY8hg==",
|
||||
"requires": {
|
||||
"@codemirror/autocomplete": "^6.0.0",
|
||||
"@codemirror/language": "^6.6.0",
|
||||
"@codemirror/lint": "^6.0.0",
|
||||
"@codemirror/state": "^6.0.0",
|
||||
"@codemirror/view": "^6.17.0",
|
||||
"@lezer/common": "^1.0.0",
|
||||
"@lezer/javascript": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"@codemirror/lang-markdown": {
|
||||
"version": "6.2.5",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/lang-markdown/-/lang-markdown-6.2.5.tgz",
|
||||
"integrity": "sha512-Hgke565YcO4fd9pe2uLYxnMufHO5rQwRr+AAhFq8ABuhkrjyX8R5p5s+hZUTdV60O0dMRjxKhBLxz8pu/MkUVA==",
|
||||
"requires": {
|
||||
"@codemirror/autocomplete": "^6.7.1",
|
||||
"@codemirror/lang-html": "^6.0.0",
|
||||
"@codemirror/language": "^6.3.0",
|
||||
"@codemirror/state": "^6.0.0",
|
||||
"@codemirror/view": "^6.0.0",
|
||||
"@lezer/common": "^1.2.1",
|
||||
"@lezer/markdown": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"@codemirror/language": {
|
||||
"version": "6.10.1",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/language/-/language-6.10.1.tgz",
|
||||
"integrity": "sha512-5GrXzrhq6k+gL5fjkAwt90nYDmjlzTIJV8THnxNFtNKWotMIlzzN+CpqxqwXOECnUdOndmSeWntVrVcv5axWRQ==",
|
||||
"requires": {
|
||||
"@codemirror/state": "^6.0.0",
|
||||
"@codemirror/view": "^6.23.0",
|
||||
"@lezer/common": "^1.1.0",
|
||||
"@lezer/highlight": "^1.0.0",
|
||||
"@lezer/lr": "^1.0.0",
|
||||
"style-mod": "^4.0.0"
|
||||
}
|
||||
},
|
||||
"@codemirror/lint": {
|
||||
"version": "6.7.0",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/lint/-/lint-6.7.0.tgz",
|
||||
"integrity": "sha512-LTLOL2nT41ADNSCCCCw8Q/UmdAFzB23OUYSjsHTdsVaH0XEo+orhuqbDNWzrzodm14w6FOxqxpmy4LF8Lixqjw==",
|
||||
"requires": {
|
||||
"@codemirror/state": "^6.0.0",
|
||||
"@codemirror/view": "^6.0.0",
|
||||
"crelt": "^1.0.5"
|
||||
}
|
||||
},
|
||||
"@codemirror/search": {
|
||||
"version": "6.5.6",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/search/-/search-6.5.6.tgz",
|
||||
"integrity": "sha512-rpMgcsh7o0GuCDUXKPvww+muLA1pDJaFrpq/CCHtpQJYz8xopu4D1hPcKRoDD0YlF8gZaqTNIRa4VRBWyhyy7Q==",
|
||||
"requires": {
|
||||
"@codemirror/state": "^6.0.0",
|
||||
"@codemirror/view": "^6.0.0",
|
||||
"crelt": "^1.0.5"
|
||||
}
|
||||
},
|
||||
"@codemirror/state": {
|
||||
"version": "6.4.1",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/state/-/state-6.4.1.tgz",
|
||||
"integrity": "sha512-QkEyUiLhsJoZkbumGZlswmAhA7CBU02Wrz7zvH4SrcifbsqwlXShVXg65f3v/ts57W3dqyamEriMhij1Z3Zz4A=="
|
||||
},
|
||||
"@codemirror/theme-one-dark": {
|
||||
"version": "6.1.2",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/theme-one-dark/-/theme-one-dark-6.1.2.tgz",
|
||||
"integrity": "sha512-F+sH0X16j/qFLMAfbciKTxVOwkdAS336b7AXTKOZhy8BR3eH/RelsnLgLFINrpST63mmN2OuwUt0W2ndUgYwUA==",
|
||||
"requires": {
|
||||
"@codemirror/language": "^6.0.0",
|
||||
"@codemirror/state": "^6.0.0",
|
||||
"@codemirror/view": "^6.0.0",
|
||||
"@lezer/highlight": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"@codemirror/view": {
|
||||
"version": "6.26.3",
|
||||
"resolved": "https://registry.npmjs.org/@codemirror/view/-/view-6.26.3.tgz",
|
||||
"integrity": "sha512-gmqxkPALZjkgSxIeeweY/wGQXBfwTUaLs8h7OKtSwfbj9Ct3L11lD+u1sS7XHppxFQoMDiMDp07P9f3I2jWOHw==",
|
||||
"requires": {
|
||||
"@codemirror/state": "^6.4.0",
|
||||
"style-mod": "^4.1.0",
|
||||
"w3c-keyname": "^2.2.4"
|
||||
}
|
||||
},
|
||||
"@discoveryjs/json-ext": {
|
||||
"version": "0.5.7",
|
||||
"resolved": "https://registry.npmmirror.com/@discoveryjs/json-ext/-/json-ext-0.5.7.tgz",
|
||||
@@ -12730,6 +13118,66 @@
|
||||
"integrity": "sha512-Hcv+nVC0kZnQ3tD9GVu5xSMR4VVYOteQIr/hwFPVEvPdlXqgGEuRjiheChHgdM+JyqdgNcmzZOX/tnl0JOiI7A==",
|
||||
"dev": true
|
||||
},
|
||||
"@lezer/common": {
|
||||
"version": "1.2.1",
|
||||
"resolved": "https://registry.npmjs.org/@lezer/common/-/common-1.2.1.tgz",
|
||||
"integrity": "sha512-yemX0ZD2xS/73llMZIK6KplkjIjf2EvAHcinDi/TfJ9hS25G0388+ClHt6/3but0oOxinTcQHJLDXh6w1crzFQ=="
|
||||
},
|
||||
"@lezer/css": {
|
||||
"version": "1.1.8",
|
||||
"resolved": "https://registry.npmjs.org/@lezer/css/-/css-1.1.8.tgz",
|
||||
"integrity": "sha512-7JhxupKuMBaWQKjQoLtzhGj83DdnZY9MckEOG5+/iLKNK2ZJqKc6hf6uc0HjwCX7Qlok44jBNqZhHKDhEhZYLA==",
|
||||
"requires": {
|
||||
"@lezer/common": "^1.2.0",
|
||||
"@lezer/highlight": "^1.0.0",
|
||||
"@lezer/lr": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"@lezer/highlight": {
|
||||
"version": "1.2.0",
|
||||
"resolved": "https://registry.npmjs.org/@lezer/highlight/-/highlight-1.2.0.tgz",
|
||||
"integrity": "sha512-WrS5Mw51sGrpqjlh3d4/fOwpEV2Hd3YOkp9DBt4k8XZQcoTHZFB7sx030A6OcahF4J1nDQAa3jXlTVVYH50IFA==",
|
||||
"requires": {
|
||||
"@lezer/common": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"@lezer/html": {
|
||||
"version": "1.3.9",
|
||||
"resolved": "https://registry.npmjs.org/@lezer/html/-/html-1.3.9.tgz",
|
||||
"integrity": "sha512-MXxeCMPyrcemSLGaTQEZx0dBUH0i+RPl8RN5GwMAzo53nTsd/Unc/t5ZxACeQoyPUM5/GkPLRUs2WliOImzkRA==",
|
||||
"requires": {
|
||||
"@lezer/common": "^1.2.0",
|
||||
"@lezer/highlight": "^1.0.0",
|
||||
"@lezer/lr": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"@lezer/javascript": {
|
||||
"version": "1.4.15",
|
||||
"resolved": "https://registry.npmjs.org/@lezer/javascript/-/javascript-1.4.15.tgz",
|
||||
"integrity": "sha512-B082ZdjI0vo2AgLqD834GlRTE9gwRX8NzHzKq5uDwEnQ9Dq+A/CEhd3nf68tiNA2f9O+8jS1NeSTUYT9IAqcTw==",
|
||||
"requires": {
|
||||
"@lezer/common": "^1.2.0",
|
||||
"@lezer/highlight": "^1.1.3",
|
||||
"@lezer/lr": "^1.3.0"
|
||||
}
|
||||
},
|
||||
"@lezer/lr": {
|
||||
"version": "1.4.0",
|
||||
"resolved": "https://registry.npmjs.org/@lezer/lr/-/lr-1.4.0.tgz",
|
||||
"integrity": "sha512-Wst46p51km8gH0ZUmeNrtpRYmdlRHUpN1DQd3GFAyKANi8WVz8c2jHYTf1CVScFaCjQw1iO3ZZdqGDxQPRErTg==",
|
||||
"requires": {
|
||||
"@lezer/common": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"@lezer/markdown": {
|
||||
"version": "1.3.0",
|
||||
"resolved": "https://registry.npmjs.org/@lezer/markdown/-/markdown-1.3.0.tgz",
|
||||
"integrity": "sha512-ErbEQ15eowmJUyT095e9NJc3BI9yZ894fjSDtHftD0InkfUBGgnKSU6dvan9jqsZuNHg2+ag/1oyDRxNsENupQ==",
|
||||
"requires": {
|
||||
"@lezer/common": "^1.0.0",
|
||||
"@lezer/highlight": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"@mdi/font": {
|
||||
"version": "7.4.47",
|
||||
"resolved": "https://registry.npmjs.org/@mdi/font/-/font-7.4.47.tgz",
|
||||
@@ -14490,6 +14938,20 @@
|
||||
"shallow-clone": "^3.0.0"
|
||||
}
|
||||
},
|
||||
"codemirror": {
|
||||
"version": "6.0.1",
|
||||
"resolved": "https://registry.npmjs.org/codemirror/-/codemirror-6.0.1.tgz",
|
||||
"integrity": "sha512-J8j+nZ+CdWmIeFIGXEFbFPtpiYacFMDR8GlHK3IyHQJMCaVRfGx9NT+Hxivv1ckLWPvNdZqndbr/7lVhrf/Svg==",
|
||||
"requires": {
|
||||
"@codemirror/autocomplete": "^6.0.0",
|
||||
"@codemirror/commands": "^6.0.0",
|
||||
"@codemirror/language": "^6.0.0",
|
||||
"@codemirror/lint": "^6.0.0",
|
||||
"@codemirror/search": "^6.0.0",
|
||||
"@codemirror/state": "^6.0.0",
|
||||
"@codemirror/view": "^6.0.0"
|
||||
}
|
||||
},
|
||||
"color-convert": {
|
||||
"version": "1.9.3",
|
||||
"resolved": "https://registry.npmmirror.com/color-convert/-/color-convert-1.9.3.tgz",
|
||||
@@ -14690,6 +15152,11 @@
|
||||
"yaml": "^1.10.0"
|
||||
}
|
||||
},
|
||||
"crelt": {
|
||||
"version": "1.0.6",
|
||||
"resolved": "https://registry.npmjs.org/crelt/-/crelt-1.0.6.tgz",
|
||||
"integrity": "sha512-VQ2MBenTq1fWZUH9DJNGti7kKv6EeAuYr3cLwxUWhIu1baTaXh4Ib5W2CqHVqib4/MqbYGJqiL3Zb8GJZr3l4g=="
|
||||
},
|
||||
"cross-spawn": {
|
||||
"version": "6.0.5",
|
||||
"resolved": "https://registry.npmmirror.com/cross-spawn/-/cross-spawn-6.0.5.tgz",
|
||||
@@ -18998,6 +19465,11 @@
|
||||
"integrity": "sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==",
|
||||
"dev": true
|
||||
},
|
||||
"style-mod": {
|
||||
"version": "4.1.2",
|
||||
"resolved": "https://registry.npmjs.org/style-mod/-/style-mod-4.1.2.tgz",
|
||||
"integrity": "sha512-wnD1HyVqpJUI2+eKZ+eo1UwghftP6yuFheBqqe+bWCotBjC2K1YnteJILRMs3SM4V/0dLEW1SC27MWP5y+mwmw=="
|
||||
},
|
||||
"stylehacks": {
|
||||
"version": "5.1.1",
|
||||
"resolved": "https://registry.npmmirror.com/stylehacks/-/stylehacks-5.1.1.tgz",
|
||||
@@ -19402,6 +19874,17 @@
|
||||
"shelljs": "^0.8.3"
|
||||
}
|
||||
},
|
||||
"vue-codemirror": {
|
||||
"version": "6.1.1",
|
||||
"resolved": "https://registry.npmjs.org/vue-codemirror/-/vue-codemirror-6.1.1.tgz",
|
||||
"integrity": "sha512-rTAYo44owd282yVxKtJtnOi7ERAcXTeviwoPXjIc6K/IQYUsoDkzPvw/JDFtSP6T7Cz/2g3EHaEyeyaQCKoDMg==",
|
||||
"requires": {
|
||||
"@codemirror/commands": "6.x",
|
||||
"@codemirror/language": "6.x",
|
||||
"@codemirror/state": "6.x",
|
||||
"@codemirror/view": "6.x"
|
||||
}
|
||||
},
|
||||
"vue-eslint-parser": {
|
||||
"version": "8.3.0",
|
||||
"resolved": "https://registry.npmmirror.com/vue-eslint-parser/-/vue-eslint-parser-8.3.0.tgz",
|
||||
@@ -19550,6 +20033,11 @@
|
||||
"integrity": "sha512-zpZFZoJE9c8QlHc8s9zowKzMUTjytdzz2PQpZPezVENm0Jp+KBi+KooZGxvj7l+YfeFdKOcSjht7nEptSSMPMg==",
|
||||
"requires": {}
|
||||
},
|
||||
"w3c-keyname": {
|
||||
"version": "2.2.8",
|
||||
"resolved": "https://registry.npmjs.org/w3c-keyname/-/w3c-keyname-2.2.8.tgz",
|
||||
"integrity": "sha512-dpojBhNsCNN7T82Tm7k26A6G9ML3NkhDsnw9n/eoxSRlVBB4CEtIQ/KTCLI2Fwf3ataSXRhYFkQi3SlnFwPvPQ=="
|
||||
},
|
||||
"watchpack": {
|
||||
"version": "2.4.0",
|
||||
"resolved": "https://registry.npmmirror.com/watchpack/-/watchpack-2.4.0.tgz",
|
||||
|
||||
@@ -1,6 +1,6 @@
 {
   "name": "talemate_frontend",
-  "version": "0.24.0",
+  "version": "0.25.5",
   "private": true,
   "scripts": {
     "serve": "vue-cli-service serve",
@@ -8,11 +8,15 @@
     "lint": "vue-cli-service lint"
   },
   "dependencies": {
+    "@codemirror/lang-markdown": "^6.2.5",
+    "@codemirror/theme-one-dark": "^6.1.2",
     "@mdi/font": "7.4.47",
+    "codemirror": "^6.0.1",
     "core-js": "^3.8.3",
     "dot-prop": "^8.0.2",
     "roboto-fontface": "*",
     "vue": "^3.2.13",
+    "vue-codemirror": "^6.1.1",
     "vuetify": "^3.5.0",
     "webfontloader": "^1.0.0"
   },
@@ -99,7 +99,7 @@ export default {
        type: '',
        api_url: '',
        model_name: '',
-        max_token_length: 4096,
+        max_token_length: 8192,
        double_coercion: null,
        data: {
            has_prompt_template: false,
@@ -156,7 +156,7 @@ export default {
        type: 'textgenwebui',
        api_url: 'http://localhost:5000',
        model_name: '',
-        max_token_length: 4096,
+        max_token_length: 8192,
        data: {
            has_prompt_template: false,
        }
@@ -244,6 +244,13 @@ export default {
                client.api_key = data.api_key;
                client.double_coercion = data.data.double_coercion;
                client.data = data.data;
+                for (let key in client.data.meta.extra_fields) {
+                    if (client.data[key] === null || client.data[key] === undefined) {
+                        client.data[key] = client.data.meta.defaults[key];
+                    }
+                    client[key] = client.data[key];
+                }

            } else if(!client) {
                console.log("Adding new client", data);

@@ -259,6 +266,16 @@ export default {
                    double_coercion: data.data.double_coercion,
                    data: data.data,
                });

+                // apply extra field defaults
+                let client = this.state.clients[this.state.clients.length - 1];
+                for (let key in client.data.meta.extra_fields) {
+                    if (client.data[key] === null || client.data[key] === undefined) {
+                        client.data[key] = client.data.meta.defaults[key];
+                    }
+                    client[key] = client.data[key];
+                }

                // sort the clients by name
                this.state.clients.sort((a, b) => (a.name > b.name) ? 1 : -1);
            }
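For orientation, the defaults loop above assumes the backend ships client metadata in roughly this shape. A minimal sketch; the field name "api_handle" and its values are hypothetical, only the `meta.extra_fields` / `meta.defaults` structure and the `name`/`type`/`label` keys are taken from the code in this change:

    // Hypothetical client payload carrying one extra field.
    const data = {
        name: "openai_compat",
        data: {
            meta: {
                extra_fields: { api_handle: { name: "api_handle", type: "text", label: "API Handle" } },
                defaults: { api_handle: "default" },
            },
            api_handle: null, // null/undefined values fall back to meta.defaults[key]
        },
    };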
@@ -191,6 +191,35 @@
            </v-row>
        </div>

+        <!-- GOOGLE API
+        This adds fields for
+        gcloud_credentials_path
+        gcloud_project_id
+        gcloud_location
+        -->

+        <div v-if="applicationPageSelected === 'google_api'">
+            <v-alert color="white" variant="text" icon="mdi-google-cloud" density="compact">
+                <v-alert-title>Google Cloud</v-alert-title>
+                <div class="text-grey">
+                    In order to use Google Cloud services such as the Vertex AI API for Gemini inference, you will need to set up a Google Cloud project and credentials.

+                    Please follow the instructions <a href="https://cloud.google.com/vertex-ai/docs/start/client-libraries">here</a> and then fill in the fields below.
+                </div>
+            </v-alert>
+            <v-divider class="mb-2"></v-divider>
+            <v-row>
+                <v-col cols="12">
+                    <v-text-field v-model="app_config.google.gcloud_credentials_path"
+                        label="Google Cloud Credentials Path" messages="Path to the credentials JSON file downloaded during the setup above. The path must be accessible to the machine running the Talemate backend; if Talemate runs on a server, upload the file there and use its path on that server."></v-text-field>
+                </v-col>
+                <v-col cols="6">
+                    <v-combobox v-model="app_config.google.gcloud_location"
+                        label="Google Cloud Location" :items="googleCloudLocations" messages="Pick something close to you" :return-object="false"></v-combobox>
+                </v-col>
+            </v-row>
+        </div>

        <!-- ELEVENLABS API -->
        <div v-if="applicationPageSelected === 'elevenlabs_api'">
            <v-alert color="white" variant="text" icon="mdi-api" density="compact">
@@ -315,6 +344,7 @@ export default {
            {title: 'Anthropic', icon: 'mdi-api', value: 'anthropic_api'},
            {title: 'Cohere', icon: 'mdi-api', value: 'cohere_api'},
            {title: 'groq', icon: 'mdi-api', value: 'groq_api'},
+            {title: 'Google Cloud', icon: 'mdi-google-cloud', value: 'google_api'},
            {title: 'ElevenLabs', icon: 'mdi-api', value: 'elevenlabs_api'},
            {title: 'RunPod', icon: 'mdi-api', value: 'runpod_api'},
        ],
@@ -325,6 +355,36 @@ export default {
        gamePageSelected: 'general',
        applicationPageSelected: 'openai_api',
        creatorPageSelected: 'content_context',
+        googleCloudLocations: [
+            {"value": 'us-central1', "title": 'US Central - Iowa'},
+            {"value": 'us-west4', "title": 'US West 4 - Las Vegas'},
+            {"value": 'us-east1', "title": 'US East 1 - South Carolina'},
+            {"value": 'us-east4', "title": 'US East 4 - Northern Virginia'},
+            {"value": 'us-west1', "title": 'US West 1 - Oregon'},
+            {"value": 'northamerica-northeast1', "title": 'North America Northeast 1 - Montreal'},
+            {"value": 'southamerica-east1', "title": 'South America East 1 - Sao Paulo'},
+            {"value": 'europe-west1', "title": 'Europe West 1 - Belgium'},
+            {"value": 'europe-north1', "title": 'Europe North 1 - Finland'},
+            {"value": 'europe-west3', "title": 'Europe West 3 - Frankfurt'},
+            {"value": 'europe-west2', "title": 'Europe West 2 - London'},
+            {"value": 'europe-southwest1', "title": 'Europe Southwest 1 - Madrid'},
+            {"value": 'europe-west8', "title": 'Europe West 8 - Milan'},
+            {"value": 'europe-west4', "title": 'Europe West 4 - Netherlands'},
+            {"value": 'europe-west9', "title": 'Europe West 9 - Paris'},
+            {"value": 'europe-central2', "title": 'Europe Central 2 - Warsaw'},
+            {"value": 'europe-west6', "title": 'Europe West 6 - Zurich'},
+            {"value": 'asia-east1', "title": 'Asia East 1 - Taiwan'},
+            {"value": 'asia-east2', "title": 'Asia East 2 - Hong Kong'},
+            {"value": 'asia-south1', "title": 'Asia South 1 - Mumbai'},
+            {"value": 'asia-northeast1', "title": 'Asia Northeast 1 - Tokyo'},
+            {"value": 'asia-northeast2', "title": 'Asia Northeast 2 - Osaka'},
+            {"value": 'asia-northeast3', "title": 'Asia Northeast 3 - Seoul'},
+            {"value": 'asia-southeast1', "title": 'Asia Southeast 1 - Singapore'},
+            {"value": 'asia-southeast2', "title": 'Asia Southeast 2 - Jakarta'},
+            {"value": 'australia-southeast1', "title": 'Australia Southeast 1 - Sydney'},
+            {"value": 'australia-southeast2', "title": 'Australia Southeast 2 - Melbourne'},
+            {"value": 'me-west1', "title": 'Middle East West 1 - Tel Aviv'}
+        ].sort((a, b) => a.title.localeCompare(b.title))
        }
    },
    inject: ['getWebsocket', 'registerMessageHandler', 'setWaitingForInput', 'requestSceneAssets', 'requestAppConfig'],
@@ -12,7 +12,21 @@
        <div class="character-avatar">
            <!-- Placeholder for character avatar -->
        </div>
-        <v-textarea ref="textarea" v-if="editing" v-model="editing_text" @keydown.enter.prevent="submitEdit()" @blur="cancelEdit()" @keydown.escape.prevent="cancelEdit()">
+        <v-textarea
+            ref="textarea"
+            v-if="editing"
+            v-model="editing_text"

+            auto-grow

+            :hint="autocompleteInfoMessage(autocompleting) + ', Shift+Enter for newline'"
+            :loading="autocompleting"
+            :disabled="autocompleting"

+            @keydown.enter.prevent="handleEnter"
+            @blur="autocompleting ? null : cancelEdit()"
+            @keydown.escape.prevent="cancelEdit()"
+        >
        </v-textarea>
        <div v-else class="character-text" @dblclick="startEdit()">
            <span v-for="(part, index) in parts" :key="index" :class="{ highlight: part.isNarrative }">
@@ -45,7 +59,7 @@
<script>
export default {
    props: ['character', 'text', 'color', 'message_id'],
-    inject: ['requestDeleteMessage', 'getWebsocket', 'createPin', 'fixMessageContinuityErrors'],
+    inject: ['requestDeleteMessage', 'getWebsocket', 'createPin', 'fixMessageContinuityErrors', 'autocompleteRequest', 'autocompleteInfoMessage'],
    computed: {
        parts() {
            const parts = [];
@@ -68,11 +82,42 @@ export default {
    data() {
        return {
            editing: false,
+            autocompleting: false,
            editing_text: "",
            hovered: false,
        }
    },
    methods: {

+        handleEnter(event) {
+            // if ctrl -> autocomplete
+            // else -> submit
+            // shift -> newline

+            if (event.ctrlKey) {
+                this.autocompleteEdit();
+            } else if (event.shiftKey) {
+                this.editing_text += "\n";
+            } else {
+                this.submitEdit();
+            }
+        },

+        autocompleteEdit() {
+            this.autocompleting = true;
+            this.autocompleteRequest(
+                {
+                    partial: this.editing_text,
+                    context: "dialogue:npc",
+                    character: this.character,
+                },
+                (completion) => {
+                    this.editing_text += completion;
+                    this.autocompleting = false;
+                },
+                this.$refs.textarea
+            )
+        },
        cancelEdit() {
            console.log('cancelEdit', this.message_id);
            this.editing = false;
|
||||
</v-row>
|
||||
<v-row v-for="field in clientMeta().extra_fields" :key="field.name">
|
||||
<v-col cols="12">
|
||||
<v-text-field v-model="client.data[field.name]" v-if="field.type === 'text'" :label="field.label"
|
||||
<v-text-field v-model="client[field.name]" v-if="field.type === 'text'" :label="field.label"
|
||||
:rules="[rules.required]" :hint="field.description"></v-text-field>
|
||||
<v-checkbox v-else-if="field.type === 'bool'" v-model="client.data[field.name]"
|
||||
<v-checkbox v-else-if="field.type === 'bool'" v-model="client[field.name]"
|
||||
:label="field.label" :hint="field.description" density="compact"></v-checkbox>
|
||||
</v-col>
|
||||
</v-row>
|
||||
@@ -200,7 +200,7 @@ export default {
|
||||
if (defaults) {
|
||||
this.client.model = defaults.model || '';
|
||||
this.client.api_url = defaults.api_url || '';
|
||||
this.client.max_token_length = defaults.max_token_length || 4096;
|
||||
this.client.max_token_length = defaults.max_token_length || 8192;
|
||||
this.client.double_coercion = defaults.double_coercion || null;
|
||||
// loop and build name from prefix, checking against current clients
|
||||
let name = this.clientTypes[this.client.type].name_prefix;
|
||||
|
||||
@@ -99,6 +99,10 @@ export default {
|
||||
time: parseInt(data.data.time),
|
||||
num: this.total++,
|
||||
generation_parameters: data.data.generation_parameters,
|
||||
// immutable copy of original generation parameters
|
||||
original_generation_parameters: JSON.parse(JSON.stringify(data.data.generation_parameters)),
|
||||
original_prompt: data.data.prompt,
|
||||
original_response: data.data.response,
|
||||
})
|
||||
|
||||
while(this.prompts.length > this.max_prompts) {
|
||||
|
||||
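The `JSON.parse(JSON.stringify(...))` round-trip above is a cheap deep copy; it is what keeps the `original_*` snapshot immune to the in-place edits that the parameter fields make later. A tiny illustration (values made up):

    const params = { temperature: 0.7 };
    const snapshot = JSON.parse(JSON.stringify(params)); // deep copy, no shared references
    params.temperature = 1.2;
    console.log(snapshot.temperature); // still 0.7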
@@ -27,31 +27,42 @@
            </v-list-item>
            <v-list-subheader>
                <v-icon>mdi-details</v-icon>Parameters
+                <v-btn size="x-small" variant="text" v-if="promptHasDirtyParams" color="orange" @click.stop="resetParams" prepend-icon="mdi-restore">Reset</v-btn>
            </v-list-subheader>
-            <v-list-item v-for="(value, name) in filteredParameters" :key="name">
-                <v-list-item-subtitle color="grey-lighten-1">{{ name }}</v-list-item-subtitle>
-                <p class="text-caption text-grey">
-                    {{ value }}
-                </p>
+            <v-list-item>
+                <v-text-field class="mt-1" v-for="(value, name) in filteredParameters" :key="name" v-model="prompt.generation_parameters[name]" :label="name" density="compact" variant="plain" placeholder="Value" :type="parameterType(name)">
+                    <template v-slot:prepend-inner>
+                        <v-icon class="mt-1" size="x-small">mdi-pencil</v-icon>
+                    </template>

+                </v-text-field>

            </v-list-item>
        </v-list>
    </v-col>
-    <v-col :cols="details ? 5 : 6">
+    <v-col :cols="details ? 6 : 7">
        <v-card flat>
-            <v-card-title>Prompt</v-card-title>
+            <v-card-title>Prompt
+                <v-btn size="x-small" variant="text" v-if="promptHasDirtyPrompt" color="orange" @click.stop="resetPrompt" prepend-icon="mdi-restore">Reset</v-btn>
+            </v-card-title>
            <v-card-text>
                <!--
                <div class="prompt-view">{{ prompt.prompt }}</div>
-                -->
                <v-textarea :disabled="busy" density="compact" v-model="prompt.prompt" rows="10" auto-grow max-rows="22"></v-textarea>
+                -->
+                <Codemirror
+                    v-model="prompt.prompt"
+                    :extensions="extensions"
+                    :style="promptEditorStyle"
+                ></Codemirror>
            </v-card-text>
        </v-card>
    </v-col>
-    <v-col :cols="details ? 5 : 6">
+    <v-col :cols="details ? 4 : 5">
        <v-card elevation="10" color="grey-darken-3">
            <v-card-title>Response
                <v-progress-circular class="ml-1 mr-3" size="20" v-if="busy" indeterminate="disable-shrink"
                    color="primary"></v-progress-circular>
+                <v-btn size="x-small" variant="text" v-else-if="promptHasDirtyResponse" color="orange" @click.stop="resetResponse" prepend-icon="mdi-restore">Reset</v-btn>
            </v-card-title>
            <v-card-text style="max-height:600px; overflow-y:auto;" :class="busy ? 'text-grey' : 'text-white'">
                <div class="prompt-view">{{ prompt.response }}</div>
@@ -75,8 +86,16 @@
</template>

<script>
+import { Codemirror } from 'vue-codemirror'
+import { markdown } from '@codemirror/lang-markdown'
+import { oneDark } from '@codemirror/theme-one-dark'
+import { EditorView } from '@codemirror/view'

export default {
    name: 'DebugToolPromptView',
+    components: {
+        Codemirror,
+    },
    data() {
        return {
            prompt: null,
@@ -102,6 +121,17 @@ export default {

        return filtered;
    },
+    promptHasDirtyParams() {
+        // compare prompt.generation_parameters with prompt.original_generation_parameters
+        // use json string comparison
+        return JSON.stringify(this.prompt.generation_parameters) !== JSON.stringify(this.prompt.original_generation_parameters);
+    },
+    promptHasDirtyPrompt() {
+        return this.prompt.prompt !== this.prompt.original_prompt;
+    },
+    promptHasDirtyResponse() {
+        return this.prompt.response !== this.prompt.original_response;
+    },
    },
    inject: [
        "getWebsocket",
@@ -109,6 +139,30 @@ export default {
    ],
    methods: {

+        parameterType(name) {
+            // map the original parameter's runtime type to a vuetify text-field type
+            const typ = typeof this.prompt.original_generation_parameters[name];
+            if(typ === 'number') {
+                return 'number';
+            } else if(typ === 'boolean') {
+                return 'boolean';
+            } else {
+                return 'text';
+            }
+        },

+        resetParams() {
+            this.prompt.generation_parameters = JSON.parse(JSON.stringify(this.prompt.original_generation_parameters));
+        },

+        resetPrompt() {
+            this.prompt.prompt = this.prompt.original_prompt;
+        },

+        resetResponse() {
+            this.prompt.response = this.prompt.original_response;
+        },

        toggleDetailsLabel() {
            return this.details ? 'Hide Details' : 'Show Details';
        },
@@ -185,6 +239,23 @@ export default {
    created() {
        this.registerMessageHandler(this.handleMessage);
    },
+    setup() {

+        const extensions = [
+            markdown(),
+            oneDark,
+            EditorView.lineWrapping
+        ];

+        const promptEditorStyle = {
+            maxHeight: "600px"
+        }

+        return {
+            extensions,
+            promptEditorStyle,
+        }
+    }
}
</script>
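The Codemirror integration above reduces to a small, self-contained pattern. A minimal standalone component using the same pieces (this sketch mirrors the imports and setup shown; only the component name "PromptEditor" is invented):

    // PromptEditor.vue (script portion): CodeMirror 6 wired through vue-codemirror
    import { Codemirror } from 'vue-codemirror'
    import { markdown } from '@codemirror/lang-markdown'
    import { oneDark } from '@codemirror/theme-one-dark'
    import { EditorView } from '@codemirror/view'

    export default {
        name: 'PromptEditor',
        components: { Codemirror },
        setup() {
            // markdown syntax highlighting + dark theme + soft line wrapping
            const extensions = [markdown(), oneDark, EditorView.lineWrapping];
            return { extensions };
        },
    };

    // template usage: <Codemirror v-model="text" :extensions="extensions" />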
@@ -19,10 +19,10 @@
            <div class="tile" v-for="(scene, index) in recentScenes()" :key="index">
                <v-card density="compact" elevation="7" @click="loadScene(scene)" color="primary" variant="outlined">
                    <v-card-title>
-                        {{ scene.name }}
+                        {{ filenameToTitle(scene.filename) }}
                    </v-card-title>
                    <v-card-subtitle>
-                        {{ scene.filename }}
+                        {{ scene.name }}
                    </v-card-subtitle>
                    <v-card-text>
                        <div class="cover-image-placeholder">
@@ -60,6 +60,14 @@ export default {
    },
    methods: {

+        filenameToTitle(filename) {
+            // remove .json extension, replace _ with space, and capitalize first letter of each word

+            filename = filename.replace('.json', '');

+            return filename.replace(/_/g, ' ').replace(/\b\w/g, l => l.toUpperCase());
+        },

        hasRecentScenes() {
            return this.config != null && this.config.recent_scenes != null && this.config.recent_scenes.scenes != null && this.config.recent_scenes.scenes.length > 0;
        },
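Worked example of the transform above (a usage sketch; the filename is hypothetical):

    filenameToTitle("infinity_quest.json");
    // "infinity_quest" -> "infinity quest" -> "Infinity Quest"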
@@ -6,7 +6,20 @@
        </v-btn>
    </template>
    <div class="narrator-message">
-        <v-textarea ref="textarea" v-if="editing" v-model="editing_text" @keydown.enter.prevent="submitEdit()" @blur="cancelEdit()" @keydown.escape.prevent="cancelEdit()">
+        <v-textarea
+            ref="textarea"
+            v-if="editing"
+            v-model="editing_text"

+            auto-grow

+            :hint="autocompleteInfoMessage(autocompleting) + ', Shift+Enter for newline'"
+            :loading="autocompleting"
+            :disabled="autocompleting"

+            @keydown.enter.prevent="handleEnter"
+            @blur="autocompleting ? null : cancelEdit()"
+            @keydown.escape.prevent="cancelEdit()">
        </v-textarea>
        <div v-else class="narrator-text" @dblclick="startEdit()">
            <span v-for="(part, index) in parts" :key="index" :class="{ highlight: part.isNarrative }">
@@ -36,7 +49,7 @@
<script>
export default {
    props: ['text', 'message_id'],
-    inject: ['requestDeleteMessage', 'getWebsocket', 'createPin'],
+    inject: ['requestDeleteMessage', 'getWebsocket', 'createPin', 'fixMessageContinuityErrors', 'autocompleteRequest', 'autocompleteInfoMessage'],
    computed: {
        parts() {
            const parts = [];
@@ -60,11 +73,41 @@ export default {
    data() {
        return {
            editing: false,
+            autocompleting: false,
            editing_text: "",
            hovered: false,
        }
    },
    methods: {
+        handleEnter(event) {
+            // if ctrl -> autocomplete
+            // else -> submit
+            // shift -> newline

+            if (event.ctrlKey) {
+                this.autocompleteEdit();
+            } else if (event.shiftKey) {
+                this.editing_text += "\n";
+            } else {
+                this.submitEdit();
+            }
+        },

+        autocompleteEdit() {
+            this.autocompleting = true;
+            this.autocompleteRequest(
+                {
+                    partial: this.editing_text,
+                    context: "narrative:continue",
+                },
+                (completion) => {
+                    this.editing_text += completion;
+                    this.autocompleting = false;
+                },
+                this.$refs.textarea
+            )
+        },

        cancelEdit() {
            console.log('cancelEdit', this.message_id);
            this.editing = false;
@@ -6,7 +6,10 @@
        </v-card-title>
        <v-card-text style="max-height:600px; overflow-y:scroll;">
            <v-list-item v-for="(entry, index) in history" :key="index" class="text-body-2">
-                {{ entry.ts }} {{ entry.text }}
+                <v-list-item-subtitle>{{ entry.ts }}</v-list-item-subtitle>
+                <div class="history-entry">
+                    {{ entry.text }}
+                </div>
+                <v-divider class="mt-1"></v-divider>
            </v-list-item>
        </v-card-text>
@@ -63,4 +66,8 @@ export default {
}
</script>

-<style scoped></style>
+<style scoped>
+.history-entry {
+    white-space: pre-wrap;
+}
+</style>
@@ -140,13 +140,17 @@
        <CharacterSheet ref="characterSheet" />
        <SceneHistory ref="sceneHistory" />

-        <v-text-field
+        <v-textarea
            v-model="messageInput"
            :label="inputHint"
+            rows="1"
+            auto-grow
            outlined
            ref="messageInput"
-            @keyup.enter="sendMessage"
+            @keydown.enter.prevent="sendMessage"
+            hint="Ctrl+Enter to autocomplete, Shift+Enter for newline"
            :disabled="isInputDisabled()"
+            :loading="autocompleting"
            :prepend-inner-icon="messageInputIcon()"
            :color="messageInputColor()">
            <template v-slot:append>
@@ -155,7 +159,7 @@
                    <v-icon v-else>mdi-skip-next</v-icon>
                </v-btn>
            </template>
-        </v-text-field>
+        </v-textarea>
    </div>

    <IntroView v-else
@@ -244,6 +248,10 @@ export default {
            messageHandlers: [],
            scene: {},
            appConfig: {},
+            autocompleting: false,
+            autocompletePartialInput: "",
+            autocompleteCallback: null,
+            autocompleteFocusElement: null,
        }
    },
    mounted() {
@@ -281,6 +289,8 @@ export default {
            getTrackedWorldState: (question) => this.$refs.worldState.trackedWorldState(question),
            getPlayerCharacterName: () => this.getPlayerCharacterName(),
            formatWorldStateTemplateString: (templateString, chracterName) => this.formatWorldStateTemplateString(templateString, chracterName),
+            autocompleteRequest: (partialInput, callback, focus_element) => this.autocompleteRequest(partialInput, callback, focus_element),
+            autocompleteInfoMessage: (active) => this.autocompleteInfoMessage(active),
        };
    },
    methods: {
@@ -293,9 +303,11 @@ export default {

        this.connecting = true;
        let currentUrl = new URL(window.location.href);
-        console.log(currentUrl);
+        let websocketUrl = process.env.VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL || `ws://${currentUrl.hostname}:5050/ws`;

-        this.websocket = new WebSocket(`ws://${currentUrl.hostname}:5050/ws`);
+        console.log("urls", { websocketUrl, currentUrl }, {env : process.env});

+        this.websocket = new WebSocket(websocketUrl);
        console.log("Websocket connecting ...")
        this.websocket.onmessage = this.handleMessage;
        this.websocket.onopen = () => {
@@ -383,6 +395,9 @@ export default {

        if (data.type === 'autocomplete_suggestion') {

+            if(!this.autocompleteCallback)
+                return;

            const completion = data.message;

            // append completion to messageInput, add a space if
@@ -391,11 +406,23 @@ export default {

            const completionStartsWithSentenceEnd = completion.startsWith('!') || completion.startsWith('.') || completion.startsWith('?') || completion.startsWith(')') || completion.startsWith(']') || completion.startsWith('}') || completion.startsWith('"') || completion.startsWith("'") || completion.startsWith("*") || completion.startsWith(",")

-            if (this.messageInput.endsWith(' ') || completion.startsWith(' ') || completionStartsWithSentenceEnd) {
-                this.messageInput += completion;
+            if (this.autocompletePartialInput.endsWith(' ') || completion.startsWith(' ') || completionStartsWithSentenceEnd) {
+                this.autocompleteCallback(completion);
            } else {
-                this.messageInput += ' ' + completion;
+                this.autocompleteCallback(' ' + completion);
            }

+            if (this.autocompleteFocusElement) {
+                let focus_element = this.autocompleteFocusElement;
+                setTimeout(() => {
+                    focus_element.focus();
+                }, 200);
+                this.autocompleteFocusElement = null;
+            }

+            this.autocompleteCallback = null;
+            this.autocompletePartialInput = "";
            return;
        }

        if (data.type === 'request_input') {
@@ -439,7 +466,26 @@ export default {

        // if ctrl+enter is pressed, request autocomplete
        if (event.ctrlKey && event.key === 'Enter') {
-            this.websocket.send(JSON.stringify({ type: 'interact', text: `!acdlg: ${this.messageInput}` }));
+            this.autocompleting = true
+            this.inputDisabled = true;
+            this.autocompleteRequest(
+                {
+                    partial: this.messageInput,
+                    context: "dialogue:player"
+                },
+                (completion) => {
+                    this.inputDisabled = false
+                    this.autocompleting = false
+                    this.messageInput += completion;
+                },
+                this.$refs.messageInput
+            );
            return;
        }

+        // if shift+enter is pressed, add a newline
+        if (event.shiftKey && event.key === 'Enter') {
+            this.messageInput += "\n";
+            return;
+        }

@@ -450,6 +496,26 @@ export default {
            this.waitingForInput = false;
        }
    },

+    autocompleteRequest(param, callback, focus_element) {

+        this.autocompleteCallback = callback;
+        this.autocompleteFocusElement = focus_element;
+        this.autocompletePartialInput = param.partial;

+        const param_copy = JSON.parse(JSON.stringify(param));
+        param_copy.type = "assistant";
+        param_copy.action = "autocomplete";

+        this.websocket.send(JSON.stringify(param_copy));

+        //this.websocket.send(JSON.stringify({ type: 'interact', text: `!autocomplete:${JSON.stringify(param)}` }));
+    },

+    autocompleteInfoMessage(active) {
+        return active ? 'Generating ...' : "Ctrl+Enter to autocomplete";
+    },

    requestAppConfig() {
        this.websocket.send(JSON.stringify({ type: 'request_app_config' }));
    },
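Putting the pieces together: autocompleteRequest() stashes the callback, tags the request, and sends it over the websocket; the 'autocomplete_suggestion' handler above then routes the completion back and restores focus. The JSON that crosses the wire looks like this (reconstructed from the param_copy code above; the partial and suggestion text are made-up examples):

    // frontend -> backend
    { "type": "assistant", "action": "autocomplete", "partial": "She opens the door and", "context": "dialogue:player" }

    // backend -> frontend
    { "type": "autocomplete_suggestion", "message": "peers into the dark corridor." }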
@@ -107,11 +107,16 @@
                    @generate="content => setAndUpdateCharacterDescription(content)"
                />

-                <v-textarea rows="5" auto-grow v-model="characterDetails.description"
+                <v-textarea ref="characterDescription" rows="5" auto-grow v-model="characterDetails.description"
                    :color="characterDescriptionDirty ? 'info' : ''"

+                    :disabled="characterDescriptionBusy"
+                    :loading="characterDescriptionBusy"
+                    @keyup.ctrl.enter.stop="autocompleteRequestCharacterDescription"

                    @update:model-value="queueUpdateCharacterDescription"
                    label="Description"
-                    hint="A short description of the character."></v-textarea>
+                    :hint="'A short description of the character. '+autocompleteInfoMessage(characterDescriptionBusy)"></v-textarea>
            </div>

            <!-- CHARACTER ATTRIBUTES -->
@@ -166,11 +171,14 @@
                    :label="selectedCharacterAttribute"
                    :color="characterAttributeDirty ? 'info' : ''"

+                    :disabled="characterAttributeBusy"
+                    :loading="characterAttributeBusy"
+                    :hint="autocompleteInfoMessage(characterAttributeBusy)"
+                    @keyup.ctrl.enter.stop="autocompleteRequestCharacterAttribute"

                    @update:modelValue="queueUpdateCharacterAttribute(selectedCharacterAttribute)"

                    v-model="characterDetails.base_attributes[selectedCharacterAttribute]">

                </v-textarea>

            </div>
@@ -253,6 +261,13 @@
                <v-textarea rows="5" max-rows="10" auto-grow
+                    ref="characterDetail"
                    :color="characterDetailDirty ? 'info' : ''"

+                    :disabled="characterDetailBusy"
+                    :loading="characterDetailBusy"
+                    :hint="autocompleteInfoMessage(characterDetailBusy)"

+                    @keyup.ctrl.enter.stop="autocompleteRequestCharacterDetail"

                    @update:modelValue="queueUpdateCharacterDetail(selectedCharacterDetail)"
                    :label="selectedCharacterDetail"
                    v-model="characterDetails.details[selectedCharacterDetail]">
@@ -888,6 +903,10 @@ export default {
            characterDescriptionDirty: false,
            characterStateReinforcerDirty: false,

+            characterAttributeBusy: false,
+            characterDetailBusy: false,
+            characterDescriptionBusy: false,

            characterAttributeUpdateTimeout: null,
            characterDetailUpdateTimeout: null,
            characterDescriptionUpdateTimeout: null,
@@ -1003,6 +1022,8 @@ export default {
        'openCharacterSheet',
        'characterSheet',
        'isInputDisabled',
+        'autocompleteRequest',
+        'autocompleteInfoMessage',
    ],
    methods: {
        show(tab, sub1, sub2, sub3) {
@@ -1083,6 +1104,8 @@ export default {
        this.removePinConfirm = false;
        this.deferedNavigation = null;
        this.isBusy = false;
+        this.characterAttributeBusy = false;
+        this.characterDetailBusy = false;
    },
    exit() {
        this.dialog = false;
@@ -1645,7 +1668,33 @@ export default {
            this.characterList = message.data;
        }
        else if (message.action === 'character_details') {
+            // if we are currently editing an attribute, override it in the incoming data
+            // this fixes the annoying rubberbanding when editing an attribute
+            if (this.selectedCharacterAttribute) {
+                message.data.base_attributes[this.selectedCharacterAttribute] = this.characterDetails.base_attributes[this.selectedCharacterAttribute];
+            }

+            // if we are currently editing a detail, override it in the incoming data
+            // this fixes the annoying rubberbanding when editing a detail
+            if (this.selectedCharacterDetail) {
+                message.data.details[this.selectedCharacterDetail] = this.characterDetails.details[this.selectedCharacterDetail];
+            }

+            // if we are currently editing a description, override it in the incoming data
+            // this fixes the annoying rubberbanding when editing a description
+            if (this.characterDescriptionDirty) {
+                message.data.description = this.characterDetails.description;
+            }

+            // if we are currently editing a state reinforcement, override it in the incoming data
+            // this fixes the annoying rubberbanding when editing a state reinforcement
+            if (this.characterStateReinforcerDirty) {
+                message.data.reinforcements[this.selectedCharacterStateReinforcer] = this.characterDetails.reinforcements[this.selectedCharacterStateReinforcer];
+            }

            this.characterDetails = message.data;


            // select first attribute
            if (!this.selectedCharacterAttribute)
                this.selectedCharacterAttribute = Object.keys(this.characterDetails.base_attributes)[0];
@@ -1712,6 +1761,46 @@ export default {
        }

    },
+    // autocomplete handlers

+    autocompleteRequestCharacterAttribute() {
+        this.characterAttributeBusy = true;
+        this.autocompleteRequest({
+            partial: this.characterDetails.base_attributes[this.selectedCharacterAttribute],
+            context: `character attribute:${this.selectedCharacterAttribute}`,
+            character: this.characterDetails.name
+        }, (completion) => {
+            this.characterDetails.base_attributes[this.selectedCharacterAttribute] += completion;
+            this.characterAttributeBusy = false;
+        }, this.$refs.characterAttribute);

+    },

+    autocompleteRequestCharacterDetail() {
+        this.characterDetailBusy = true;
+        this.autocompleteRequest({
+            partial: this.characterDetails.details[this.selectedCharacterDetail],
+            context: `character detail:${this.selectedCharacterDetail}`,
+            character: this.characterDetails.name
+        }, (completion) => {
+            this.characterDetails.details[this.selectedCharacterDetail] += completion;
+            this.characterDetailBusy = false;
+        }, this.$refs.characterDetail);

+    },

+    autocompleteRequestCharacterDescription() {
+        this.characterDescriptionBusy = true;
+        this.autocompleteRequest({
+            partial: this.characterDetails.description,
+            context: `character detail:description`,
+            character: this.characterDetails.name
+        }, (completion) => {
+            this.characterDetails.description += completion;
+            this.characterDescriptionBusy = false;
+        }, this.$refs.characterDescription);

+    },
    },
    created() {
        this.registerMessageHandler(this.handleMessage);
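Across the components in this change, the autocomplete `context` string encodes what kind of text the backend should continue. Collected here for reference (all values taken from this diff):

    // context tags sent with autocomplete requests in this change
    "dialogue:player"                       // main input bar, player speech
    "dialogue:npc"                          // editing an NPC message (sent with `character`)
    "narrative:continue"                    // editing a narrator message
    `character attribute:${attributeName}`  // world state manager, attribute editor
    `character detail:${detailName}`        // world state manager, detail editor
    "character detail:description"          // world state manager, description editor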
@@ -14,7 +14,17 @@
        <v-col cols="9">

            <div v-if="tab == 'instructions'">
+                <v-sheet class="text-right">
+                    <v-spacer></v-spacer>
+                    <v-tooltip class="pre-wrap" :text="tooltipText" max-width="250" >
+                        <template v-slot:activator="{ props }">
+                            <v-btn v-bind="props" color="primary" @click.stop="generateCharacterDialogueInstructions" variant="text" size="x-small" prepend-icon="mdi-auto-fix">Generate</v-btn>
+                        </template>
+                    </v-tooltip>
+                </v-sheet>
                <v-textarea
+                    :loading="dialogueInstructionsBusy"
+                    :disabled="dialogueInstructionsBusy"
                    placeholder="speak less formally, use more contractions, and be more casual."
                    v-model="dialogueInstructions" label="Acting Instructions"
                    :color="dialogueInstructionsDirty ? 'primary' : null"
@@ -78,6 +88,7 @@ export default {
            dialogueExample: "",
            dialogueInstructions: null,
            dialogueInstructionsDirty: false,
+            dialogueInstructionsBusy: false,
            updateCharacterActorTimeout: null,
        }
    },
@@ -86,6 +97,9 @@ export default {
        return this.dialogueExamples.map((example) => {
            return example.replace(this.character.name + ': ', '');
        });
    },
+    tooltipText() {
+        return `Automatically generate dialogue instructions for ${this.character.name}, based on their attributes and description`;
+    }
    },
    props: {
@@ -111,11 +125,24 @@ export default {
            dialogue_examples: this.dialogueExamples,
        }));
    },

+    generateCharacterDialogueInstructions() {
+        this.dialogueInstructionsBusy = true;
+        this.getWebsocket().send(JSON.stringify({
+            type: "world_state_manager",
+            action: "generate_character_dialogue_instructions",
+            name: this.character.name,
+        }));
+    },

    handleMessage(data) {
        if(data.type === 'world_state_manager') {
            console.log("WORLD STATE MANAGER", data);
            if(data.action === 'character_actor_updated') {
                this.dialogueInstructionsDirty = false;
+            } else if (data.action === 'character_dialogue_instructions_generated') {
+                this.dialogueInstructions = data.data.instructions;
+                this.dialogueInstructionsBusy = false;
            }
        }
    },
@@ -1,4 +1,12 @@
const { defineConfig } = require('@vue/cli-service')

+const ALLOWED_HOSTS = ((process.env.ALLOWED_HOSTS || "all") !== "all" ? process.env.ALLOWED_HOSTS.split(",") : "all")
+const VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL = process.env.VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL || null

+console.log("NODE_ENV", process.env.NODE_ENV)
+console.log("ALLOWED_HOSTS", ALLOWED_HOSTS)
+console.log("VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL", VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL)

module.exports = defineConfig({
  transpileDependencies: true,

@@ -9,6 +17,7 @@ module.exports = defineConfig({
  },

  devServer: {
+    allowedHosts: ALLOWED_HOSTS,
    client: {
      overlay: {
        warnings: false,
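Together with the websocket change in App.vue, these two environment variables are what make hosted setups work. A sketch of how they would be set for a LAN-hosted dev server (hostnames and ports here are illustrative examples, not prescribed by the code):

    // values read at build/serve time, e.g. from a .env file or the shell:
    //
    //   ALLOWED_HOSTS="localhost,192.168.1.20"
    //     -> parsed by vue.config.js into ["localhost", "192.168.1.20"]
    //   ALLOWED_HOSTS unset (or "all")
    //     -> allowedHosts stays "all"
    //   VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL="ws://192.168.1.20:5050/ws"
    //     -> overrides the frontend's ws://<page hostname>:5050/ws default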
templates/llm-prompt/std/Mistral.jinja2 (new file)
@@ -0,0 +1 @@
+<s>[INST] {{ system_message }} {{ user_message }} [/INST] {{ coercion_message }}
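For context, this template wraps Talemate's prompt parts in Mistral's instruction format. A sketch of a rendered prompt (the filler strings are made-up examples; only the three template variables come from the file above):

    // Rendered Mistral.jinja2, illustrated with placeholder values:
    //   system_message   = "You are the narrator."
    //   user_message     = "Continue the scene."
    //   coercion_message = "Certainly:"
    //
    // <s>[INST] You are the narrator. Continue the scene. [/INST] Certainly: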