* flip title and name in recent scenes

* fix issue where a message could not be regenerated after applying continuity error fixes

* prompt tweaks

* allow json parameters for commands

* autocomplete improvements

* dialogue cleanup fixes

* fix issue with narrate after dialogue and llama3 (and other models that don't have a line break after the user prompt in their prompt template)

* expose ability to auto-generate dialogue instructions to the world state manager (wsm) character ux

* use b64_json response type

* move tag checks up so they match first

* fix typo

* prompt tweak

* api key support

* prompt tweaks

* editable parameters in prompt debugger / tester

* allow resetting of prompt params

* codemirror for prompt editor

* prompt tweaks

* more prompt debug tool tweaks

* some extra control for `context_history`

* new analytical preset (testing)

* add `join` and `llm_can_be_coerced` to jinja env

* support factual list summaries

* prompt tweaks to continuity check and fix

* new summarization method `facts` exposed to ux

* clamp mistral ai temperature according to their new requirements

* prompt tweaks

* better parsing of fixed dialogue response

* prompt tweaks

* fix intermittent empty meta issue

* history regen status progression and small ux tweaks

* summary entries should always be condensed

* google gemini support

* relock to install google-cloud-aiplatform for vertex ai inference

* fix instruction link

* better error handling of google safety validation and allow disabling of safety validation

* docs

* clarify credentials path requirements

* tweak error line identification

* handle quota limit error

* autocomplete ux wired to assistant plugin instead of command

* autocomplete narrative editing and fixes to autocomplete during dialog edit

* main input autocomplete tweaks

* allow new lines in main input

* 0.25.0 and relock

* fix issue with autocomplete elsewhere locking out main input

* better way to determine remote service

* prompt tweak

* fix rubberbanding issue when editing character attributes

* add open-mixtral-8x22b

* fix continuity error check summary inclusion of target entry

* docs

* default context length to 8192

* linting
veguAI committed 2024-05-05 22:16:03 +03:00 (committed by GitHub)
parent f0b627b900, commit 39bd02722d
60 changed files with 2639 additions and 594 deletions


@@ -13,6 +13,7 @@ Supported APIs:
- [mistral.ai](https://mistral.ai/)
- [Cohere](https://www.cohere.com/)
- [Groq](https://www.groq.com/)
- [Google Gemini](https://console.cloud.google.com/)
Supported self-hosted APIs:
- [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (local or with runpod support)
@@ -50,6 +51,8 @@ Please read the documents in the `docs` folder for more advanced configuration a
- [Specifying the correct prompt template](#specifying-the-correct-prompt-template)
- [Recommended Models](#recommended-models)
- [DeepInfra via OpenAI Compatible client](#deepinfra-via-openai-compatible-client)
- [Google Gemini](#google-gemini)
- [Google Cloud Setup](#google-cloud-setup)
- [Ready to go](#ready-to-go)
- [Load the introductory scenario "Infinity Quest"](#load-the-introductory-scenario-infinity-quest)
- [Loading character cards](#loading-character-cards)
@@ -195,6 +198,36 @@ Models on DeepInfra that work well with Talemate:
- [cognitivecomputations/dolphin-2.6-mixtral-8x7b](https://deepinfra.com/cognitivecomputations/dolphin-2.6-mixtral-8x7b) (max context 32k, 8k recommended)
- [lizpreciatior/lzlv_70b_fp16_hf](https://deepinfra.com/lizpreciatior/lzlv_70b_fp16_hf) (max context 4k)
## Google Gemini
### Google Cloud Setup
Unlike the other clients, the setup for Google Gemini is a bit more involved, as you will need to set up a Google Cloud project and credentials for it.
Please follow their [instructions for setup](https://cloud.google.com/vertex-ai/docs/start/client-libraries), which include setting up a project, enabling the Vertex AI API, creating a service account, and downloading the credentials.
Once you have downloaded the credentials, copy the JSON file into the talemate directory. You can rename it to something that's easier to remember, like `my-credentials.json`.
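
Under the hood, Talemate exports this path via the standard `GOOGLE_APPLICATION_CREDENTIALS` environment variable before initializing Vertex AI. A minimal sketch of that flow (the project id and location values here are illustrative, not defaults):

```python
# Sketch of how the credentials file is consumed; values are illustrative.
import os

import vertexai

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "./my-credentials.json"
vertexai.init(project="my-project-id", location="us-central1")
```
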
### Add the client
![Google Gemini](docs/img/0.25.0/google-add-client.png)
The `Disable Safety Settings` option will turn off Google's response validation for content they consider harmful. Use at your own risk.
### Complete the Google Cloud setup in Talemate
![Google Gemini](docs/img/0.25.0/google-setup-incomplete.png)
Click the `SETUP GOOGLE API CREDENTIALS` button that will appear on the client.
The Google Cloud setup modal will appear; fill in the path to the credentials file and select a location close to you.
![Google Gemini](docs/img/0.25.0/google-cloud-setup.png)
Click save, and after a moment the client should have a green dot next to it, indicating that it is ready to go.
![Google Gemini](docs/img/0.25.0/google-ready.png)
## Ready to go
You will know you are good to go when the client and all the agents have a green dot next to them.

Four new binary image files (not shown): the `docs/img/0.25.0` screenshots referenced above (25 KiB, 59 KiB, 5.3 KiB, 7.5 KiB).

poetry.lock (generated, 1174 lines changed): diff suppressed because it is too large.


@@ -4,7 +4,7 @@ build-backend = "poetry.masonry.api"
[tool.poetry]
name = "talemate"
version = "0.24.0"
version = "0.25.0"
description = "AI-backed roleplay and narrative tools"
authors = ["FinalWombat"]
license = "GNU Affero General Public License v3.0"
@@ -37,6 +37,7 @@ python-dotenv = "^1.0.0"
websockets = "^11.0.3"
structlog = "^23.1.0"
runpod = "^1.2.0"
google-cloud-aiplatform = ">=1.50.0"
nest_asyncio = "^1.5.7"
isodate = ">=0.6.1"
thefuzz = ">=0.20.0"


@@ -2,4 +2,4 @@ from .agents import Agent
from .client import TextGeneratorWebuiClient
from .tale_mate import *
VERSION = "0.24.0"
VERSION = "0.25.0"


@@ -18,10 +18,11 @@ class ContentGenerationContext(pydantic.BaseModel):
"""
context: str
instructions: str
length: int
instructions: str = ""
length: int = 100
character: Union[str, None] = None
original: Union[str, None] = None
partial: str = ""
@property
def computed_context(self) -> Tuple[str, str]:
@@ -37,10 +38,11 @@ class AssistantMixin:
async def contextual_generate_from_args(
self,
context: str,
instructions: str,
instructions: str = "",
length: int = 100,
character: Union[str, None] = None,
original: Union[str, None] = None,
partial: str = "",
):
"""
Request content from the assistant.
@@ -52,6 +54,7 @@ class AssistantMixin:
length=length,
character=character,
original=original,
partial=partial,
)
return await self.contextual_generate(generation_context)
@@ -86,6 +89,7 @@ class AssistantMixin:
"generation_context": generation_context,
"context_typ": context_typ,
"context_name": context_name,
"can_coerce": self.client.can_be_coerced,
"character": (
self.scene.get_character(generation_context.character)
if generation_context.character
@@ -94,7 +98,8 @@ class AssistantMixin:
},
)
content = util.strip_partial_sentences(content)
if not generation_context.partial:
content = util.strip_partial_sentences(content)
return content.strip()
@@ -139,3 +144,39 @@ class AssistantMixin:
emit("autocomplete_suggestion", response)
return response
@set_processing
async def autocomplete_narrative(
self,
input: str,
emit_signal: bool = True,
) -> str:
"""
Autocomplete narrative.
"""
response = await Prompt.request(
f"creator.autocomplete-narrative",
self.client,
"create_short",
vars={
"scene": self.scene,
"max_tokens": self.client.max_token_length,
"input": input.strip(),
"can_coerce": self.client.can_be_coerced,
},
pad_prepended_response=False,
dedupe_enabled=False,
)
if response.startswith(input):
response = response[len(input) :]
self.scene.log.debug(
"autocomplete_suggestion", suggestion=response, input=input
)
if emit_signal:
emit("autocomplete_suggestion", response)
return response


@@ -204,6 +204,8 @@ class CharacterCreatorMixin:
"create_concise",
vars={
"character": character,
"scene": self.scene,
"max_tokens": self.client.max_token_length,
},
)


@@ -201,7 +201,12 @@ class EditorAgent(Agent):
@set_processing
async def check_continuity_errors(
self, content: str, character: Character, force: bool = False, fix: bool = True
self,
content: str,
character: Character,
force: bool = False,
fix: bool = True,
message_id: int = None,
) -> str:
"""
Edits a text to ensure that it is consistent with the scene
@@ -223,15 +228,25 @@ class EditorAgent(Agent):
)
return content
log.debug(
"check_continuity_errors START",
content=content,
character=character,
force=force,
fix=fix,
message_id=message_id,
)
response = await Prompt.request(
"editor.check-continuity-errors",
self.client,
"basic_deterministic_medium2",
"basic_analytical_medium2",
vars={
"content": content,
"character": character,
"scene": self.scene,
"max_tokens": self.client.max_token_length,
"message_id": message_id,
},
)
@@ -241,7 +256,7 @@ class EditorAgent(Agent):
errors = []
for line in response.split("\n"):
if not line.startswith("ERROR"):
if "ERROR" not in line:
continue
errors.append(line)
@@ -274,8 +289,14 @@ class EditorAgent(Agent):
content_fix_identifer = state.get("content_fix_identifier")
try:
content = response.split("```")[0].strip()
content = response.strip().strip("```").split("```")[0].strip()
content = content.replace(content_fix_identifer, "").strip()
content = content.strip(":")
# if content doesnt start with {character_name}: then add it
if not content.startswith(f"{character.name}:"):
content = f"{character.name}: {content}"
except Exception as e:
log.error(
"check_continuity_errors FAILED",


@@ -720,6 +720,11 @@ class ChromaDBMemoryAgent(MemoryAgent):
doc = _results["documents"][0][i]
meta = _results["metadatas"][0][i]
if not meta:
log.warning("chromadb agent get", error="no meta", doc=doc)
continue
ts = meta.get("ts")
# skip pin_only entries


@@ -61,6 +61,7 @@ class SummarizeAgent(Agent):
{"label": "Short & Concise", "value": "short"},
{"label": "Balanced", "value": "balanced"},
{"label": "Lengthy & Detailed", "value": "long"},
{"label": "Factual List", "value": "facts"},
],
),
"include_previous": AgentActionConfig(
@@ -77,6 +78,15 @@ class SummarizeAgent(Agent):
)
}
@property
def threshold(self):
return self.actions["archive"].config["threshold"].value
@property
def estimated_entry_count(self):
all_tokens = sum([util.count_tokens(entry) for entry in self.scene.history])
return all_tokens // self.threshold
def connect(self, scene):
super().connect(scene)
talemate.emit.async_signals.get("game_loop").connect(self.on_game_loop)


@@ -109,29 +109,10 @@ class OpenAIImageMixin:
size=f"{resolution.width}x{resolution.height}",
quality=self.openai_quality,
n=1,
response_format="b64_json",
)
download_url = response.data[0].url
# decode url because httpx will encode it again
download_url = unquote(download_url)
parsed = urlparse(download_url)
query = parse_qs(parsed.query)
log.debug("openai_image_generate", download_url=download_url, query=query)
async with httpx.AsyncClient() as client:
response = await client.get(download_url, params=query, timeout=90)
log.debug("openai_image_generate", status_code=response.status_code)
if response.status_code >= 400:
log.error(
f"Error downloading image",
content=response.content,
status=response.status_code,
)
# bytes to base64encoded
image = base64.b64encode(response.content).decode("utf-8")
await self.emit_image(image)
await self.emit_image(response.data[0].b64_json)
async def openai_image_ready(self) -> bool:
"""


@@ -3,6 +3,7 @@ import os
import talemate.client.runpod
from talemate.client.anthropic import AnthropicClient
from talemate.client.cohere import CohereClient
from talemate.client.google import GoogleClient
from talemate.client.groq import GroqClient
from talemate.client.lmstudio import LMStudioClient
from talemate.client.mistral import MistralAIClient


@@ -2,6 +2,7 @@
A unified client base, based on the openai API
"""
import ipaddress
import logging
import random
import time
@@ -9,6 +10,7 @@ from typing import Callable, Union
import pydantic
import structlog
import urllib3
from openai import AsyncOpenAI, PermissionDeniedError
import talemate.client.presets as presets
@@ -25,11 +27,6 @@ logging.getLogger("httpx").setLevel(logging.WARNING)
log = structlog.get_logger("client.base")
REMOTE_SERVICES = [
# TODO: runpod.py should add this to the list
".runpod.net"
]
STOPPING_STRINGS = ["<|im_end|>", "</s>"]
@@ -55,7 +52,7 @@ class ErrorAction(pydantic.BaseModel):
class Defaults(pydantic.BaseModel):
api_url: str = "http://localhost:5000"
max_token_length: int = 4096
max_token_length: int = 8192
double_coercion: str = None
@@ -74,7 +71,7 @@ class ClientBase:
name: str = None
enabled: bool = True
current_status: str = None
max_token_length: int = 4096
max_token_length: int = 8192
processing: bool = False
connected: bool = False
conversation_retries: int = 0
@@ -106,7 +103,7 @@ class ClientBase:
self.double_coercion = kwargs.get("double_coercion", None)
if "max_token_length" in kwargs:
self.max_token_length = (
int(kwargs["max_token_length"]) if kwargs["max_token_length"] else 4096
int(kwargs["max_token_length"]) if kwargs["max_token_length"] else 8192
)
self.set_client(max_token_length=self.max_token_length)
@@ -179,21 +176,46 @@ class ClientBase:
if "double_coercion" in kwargs:
self.double_coercion = kwargs["double_coercion"]
def host_is_remote(self, url: str) -> bool:
"""
Returns whether or not the host is a remote service.
It checks common local hostnames / ip prefixes.
- localhost
"""
host = urllib3.util.parse_url(url).host
if host.lower() == "localhost":
return False
# use ipaddress module to check for local ip prefixes
try:
ip = ipaddress.ip_address(host)
except ValueError:
return True
if ip.is_loopback or ip.is_private:
return False
return True
def toggle_disabled_if_remote(self):
"""
If the client is targeting a remote recognized service, this
will disable the client.
"""
for service in REMOTE_SERVICES:
if service in self.api_url:
if self.enabled:
self.log.warn(
"remote service unreachable, disabling client", client=self.name
)
self.enabled = False
if not self.api_url:
return False
return True
if self.host_is_remote(self.api_url) and self.enabled:
self.log.warn(
"remote service unreachable, disabling client", client=self.name
)
self.enabled = False
return True
return False
@@ -344,6 +366,14 @@ class ClientBase:
if status_change:
instance.emit_agent_status_by_client(self)
def populate_extra_fields(self, data: dict):
"""
Updates data with the extra fields from the client's Meta
"""
for field_name in getattr(self.Meta(), "extra_fields", {}).keys():
data[field_name] = getattr(self, field_name, None)
def determine_prompt_template(self):
if not self.model_name:
return
@@ -387,7 +417,6 @@ class ClientBase:
self.connected = True
if not self.model_name or self.model_name == "None":
self.log.warning("client model not loaded", client=self)
self.emit_status()
return
@@ -512,9 +541,8 @@ class ClientBase:
max_token_length=self.max_token_length,
parameters=prompt_param,
)
response = await self.generate(
self.repetition_adjustment(finalized_prompt), prompt_param, kind
)
prompt_sent = self.repetition_adjustment(finalized_prompt)
response = await self.generate(prompt_sent, prompt_param, kind)
response, finalized_prompt = await self.auto_break_repetition(
finalized_prompt, prompt_param, response, kind, retries
@@ -536,7 +564,7 @@ class ClientBase:
"prompt_sent",
data=PromptData(
kind=kind,
prompt=finalized_prompt,
prompt=prompt_sent,
response=response,
prompt_tokens=self._returned_prompt_tokens or token_length,
response_tokens=self._returned_response_tokens
@@ -714,7 +742,6 @@ class ClientBase:
lines = prompt.split("\n")
new_lines = []
for line in lines:
if line.startswith("[$REPETITION|"):
if is_repetitive:


@@ -0,0 +1,312 @@
import json
import os
import pydantic
import structlog
import vertexai
from google.api_core.exceptions import ResourceExhausted
from vertexai.generative_models import (
ChatSession,
GenerativeModel,
ResponseValidationError,
SafetySetting,
)
from talemate.client.base import ClientBase, ErrorAction, ExtraField
from talemate.client.registry import register
from talemate.client.remote import RemoteServiceMixin
from talemate.config import Client as BaseClientConfig
from talemate.config import load_config
from talemate.emit import emit
from talemate.emit.signals import handlers
from talemate.util import count_tokens
__all__ = [
"GoogleClient",
]
log = structlog.get_logger("talemate")
# Edit this to add new models / remove old models
SUPPORTED_MODELS = [
"gemini-1.0-pro",
"gemini-1.5-pro-preview-0409",
]
class Defaults(pydantic.BaseModel):
max_token_length: int = 16384
model: str = "gemini-1.0-pro"
disable_safety_settings: bool = False
class ClientConfig(BaseClientConfig):
disable_safety_settings: bool = False
@register()
class GoogleClient(RemoteServiceMixin, ClientBase):
"""
Google client for generating text.
"""
client_type = "google"
conversation_retries = 0
auto_break_repetition_enabled = False
decensor_enabled = True
config_cls = ClientConfig
class Meta(ClientBase.Meta):
name_prefix: str = "Google"
title: str = "Google"
manual_model: bool = True
manual_model_choices: list[str] = SUPPORTED_MODELS
requires_prompt_template: bool = False
defaults: Defaults = Defaults()
extra_fields: dict[str, ExtraField] = {
"disable_safety_settings": ExtraField(
name="disable_safety_settings",
type="bool",
label="Disable Safety Settings",
required=False,
description="Disable Google's safety settings for responses generated by the model.",
),
}
def __init__(self, model="gemini-1.0-pro", **kwargs):
self.model_name = model
self.setup_status = None
self.model_instance = None
self.disable_safety_settings = kwargs.get("disable_safety_settings", False)
self.google_credentials_read = False
self.google_project_id = None
self.config = load_config()
super().__init__(**kwargs)
handlers["config_saved"].connect(self.on_config_saved)
@property
def google_credentials(self):
path = self.google_credentials_path
if not path:
return None
with open(path) as f:
return json.load(f)
@property
def google_credentials_path(self):
return self.config.get("google").get("gcloud_credentials_path")
@property
def google_location(self):
return self.config.get("google").get("gcloud_location")
@property
def ready(self):
# all google settings must be set
return all(
[
self.google_credentials_path,
self.google_location,
]
)
@property
def safety_settings(self):
if not self.disable_safety_settings:
return None
safety_settings = [
SafetySetting(
category=SafetySetting.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
threshold=SafetySetting.HarmBlockThreshold.BLOCK_NONE,
),
SafetySetting(
category=SafetySetting.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
threshold=SafetySetting.HarmBlockThreshold.BLOCK_NONE,
),
SafetySetting(
category=SafetySetting.HarmCategory.HARM_CATEGORY_HARASSMENT,
threshold=SafetySetting.HarmBlockThreshold.BLOCK_NONE,
),
SafetySetting(
category=SafetySetting.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
threshold=SafetySetting.HarmBlockThreshold.BLOCK_NONE,
),
SafetySetting(
category=SafetySetting.HarmCategory.HARM_CATEGORY_UNSPECIFIED,
threshold=SafetySetting.HarmBlockThreshold.BLOCK_NONE,
),
]
return safety_settings
def emit_status(self, processing: bool = None):
error_action = None
if processing is not None:
self.processing = processing
if self.ready:
status = "busy" if self.processing else "idle"
model_name = self.model_name
else:
status = "error"
model_name = "Setup incomplete"
error_action = ErrorAction(
title="Setup Google API credentials",
action_name="openAppConfig",
icon="mdi-key-variant",
arguments=[
"application",
"google_api",
],
)
if not self.model_name:
status = "error"
model_name = "No model loaded"
self.current_status = status
data = {
"error_action": error_action.model_dump() if error_action else None,
"meta": self.Meta().model_dump(),
}
self.populate_extra_fields(data)
emit(
"client_status",
message=self.client_type,
id=self.name,
details=model_name,
status=status,
data=data,
)
def set_client(self, max_token_length: int = None, **kwargs):
if not self.ready:
log.error("Google cloud setup incomplete")
if self.setup_status:
self.setup_status = False
emit("request_client_status")
emit("request_agent_status")
return
if not self.model_name:
self.model_name = "gemini-1.0-pro"
if max_token_length and not isinstance(max_token_length, int):
max_token_length = int(max_token_length)
if self.google_credentials_path:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = self.google_credentials_path
model = self.model_name
self.max_token_length = max_token_length or 16384
if not self.setup_status:
if self.setup_status is False:
project_id = self.google_credentials.get("project_id")
self.google_project_id = project_id
if self.google_credentials_path:
vertexai.init(project=project_id, location=self.google_location)
emit("request_client_status")
emit("request_agent_status")
self.setup_status = True
self.model_instance = GenerativeModel(model_name=model)
log.info(
"google set client",
max_token_length=self.max_token_length,
provided_max_token_length=max_token_length,
model=model,
)
def response_tokens(self, response: str):
return count_tokens(response.text)
def prompt_tokens(self, prompt: str):
return count_tokens(prompt)
def reconfigure(self, **kwargs):
if kwargs.get("model"):
self.model_name = kwargs["model"]
self.set_client(kwargs.get("max_token_length"))
if "disable_safety_settings" in kwargs:
self.disable_safety_settings = kwargs["disable_safety_settings"]
async def generate(self, prompt: str, parameters: dict, kind: str):
"""
Generates text from the given prompt and parameters.
"""
if not self.ready:
raise Exception("Google cloud setup incomplete")
right = None
expected_response = None
try:
_, right = prompt.split("\nStart your response with: ")
expected_response = right.strip()
except (IndexError, ValueError):
pass
human_message = prompt.strip()
system_message = self.get_system_message(kind)
self.log.debug(
"generate",
prompt=prompt[:128] + " ...",
parameters=parameters,
system_message=system_message,
disable_safety_settings=self.disable_safety_settings,
safety_settings=self.safety_settings,
)
try:
chat = self.model_instance.start_chat()
response = await chat.send_message_async(
human_message,
safety_settings=self.safety_settings,
)
self._returned_prompt_tokens = self.prompt_tokens(prompt)
self._returned_response_tokens = self.response_tokens(response)
response = response.text
log.debug("generated response", response=response)
if expected_response and expected_response.startswith("{"):
if response.startswith("```json") and response.endswith("```"):
response = response[7:-3].strip()
if right and response.startswith(right):
response = response[len(right) :].strip()
return response
# except PermissionDeniedError as e:
# self.log.error("generate error", e=e)
# emit("status", message="google API: Permission Denied", status="error")
# return ""
except ResourceExhausted as e:
self.log.error("generate error", e=e)
emit("status", message="google API: Quota Limit reached", status="error")
return ""
except ResponseValidationError as e:
self.log.error("generate error", e=e)
emit(
"status",
message="google API: Response Validation Error",
status="error",
)
if not self.disable_safety_settings:
return "Failed to generate response. Probably due to safety settings, you can turn them off in the client settings."
return "Failed to generate response. Please check logs."
except Exception as e:
raise


@@ -7,7 +7,7 @@ from talemate.client.registry import register
class Defaults(pydantic.BaseModel):
api_url: str = "http://localhost:1234"
max_token_length: int = 4096
max_token_length: int = 8192
@register()


@@ -19,6 +19,7 @@ log = structlog.get_logger("talemate")
SUPPORTED_MODELS = [
"open-mistral-7b",
"open-mixtral-8x7b",
"open-mixtral-8x22b",
"mistral-small-latest",
"mistral-medium-latest",
"mistral-large-latest",
@@ -174,6 +175,12 @@ class MistralAIClient(ClientBase):
if key not in valid_keys:
del parameters[key]
# clamp temperature to 0.1 and 1.0
# Unhandled Error: Status: 422. Message: {"object":"error","message":{"detail":[{"type":"less_than_equal","loc":["body","temperature"],"msg":"Input should be less than or equal to 1","input":1.31,"ctx":{"le":1.0},"url":"https://errors.pydantic.dev/2.6/v/less_than_equal"}]},"type":"invalid_request_error","param":null,"code":null}
if "temperature" in parameters:
parameters["temperature"] = min(1.0, max(0.1, parameters["temperature"]))
async def generate(self, prompt: str, parameters: dict, kind: str):
"""
Generates text from the given prompt and parameters.


@@ -17,7 +17,7 @@ EXPERIMENTAL_DESCRIPTION = """Use this client if you want to connect to a servic
class Defaults(pydantic.BaseModel):
api_url: str = "http://localhost:5000"
api_key: str = ""
max_token_length: int = 4096
max_token_length: int = 8192
model: str = ""
api_handles_prompt_template: bool = False
@@ -145,10 +145,10 @@ class OpenAICompatibleClient(ClientBase):
self.api_url = kwargs["api_url"]
if "max_token_length" in kwargs:
self.max_token_length = (
int(kwargs["max_token_length"]) if kwargs["max_token_length"] else 4096
int(kwargs["max_token_length"]) if kwargs["max_token_length"] else 8192
)
if "api_key" in kwargs:
self.api_auth = kwargs["api_key"]
self.api_key = kwargs["api_key"]
if "api_handles_prompt_template" in kwargs:
self.api_handles_prompt_template = kwargs["api_handles_prompt_template"]


@@ -35,8 +35,8 @@ PRESET_LLAMA_PRECISE = {
}
PRESET_DETERMINISTIC = {
"temperature": 0.01,
"top_p": 0.01,
"temperature": 0.1,
"top_p": 1,
"top_k": 0,
"repetition_penalty": 1.0,
}
@@ -56,6 +56,12 @@ PRESET_SIMPLE_1 = {
"repetition_penalty": 1.15,
}
PRESET_ANALYTICAL = {
"temperature": 0.1,
"top_p": 0.9,
"top_k": 20,
}
def configure(config: dict, kind: str, total_budget: int):
"""
@@ -82,7 +88,17 @@ def set_preset(config: dict, kind: str):
def preset_for_kind(kind: str):
if kind == "conversation":
# tag based
if "deterministic" in kind:
return PRESET_DETERMINISTIC
elif "creative" in kind:
return PRESET_DIVINE_INTELLECT
elif "simple" in kind:
return PRESET_SIMPLE_1
elif "analytical" in kind:
return PRESET_ANALYTICAL
elif kind == "conversation":
return PRESET_TALEMATE_CONVERSATION
elif kind == "conversation_old":
return PRESET_TALEMATE_CONVERSATION # Assuming old conversation uses the same preset
@@ -133,11 +149,6 @@ def preset_for_kind(kind: str):
elif kind == "visualize":
return PRESET_SIMPLE_1
# tag based
elif "deterministic" in kind:
return PRESET_DETERMINISTIC
elif "creative" in kind:
return PRESET_DIVINE_INTELLECT
else:
return PRESET_SIMPLE_1 # Default preset if none of the kinds match
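
A condensed sketch of the reordered lookup: the tag checks now run before the exact-kind chain, so a composite kind such as `basic_analytical_medium2` resolves by its tag (preset values copied from the diff; the exact-kind branches are elided):

```python
PRESET_DETERMINISTIC = {"temperature": 0.1, "top_p": 1, "top_k": 0, "repetition_penalty": 1.0}
PRESET_ANALYTICAL = {"temperature": 0.1, "top_p": 0.9, "top_k": 20}
PRESET_SIMPLE_1 = {"repetition_penalty": 1.15}  # partial; see presets.py for the rest

def preset_for_kind(kind: str) -> dict:
    # tag-based checks first, so they take precedence over exact kind matches
    if "deterministic" in kind:
        return PRESET_DETERMINISTIC
    if "analytical" in kind:
        return PRESET_ANALYTICAL
    # ... exact-kind matches (e.g. kind == "conversation") follow here ...
    return PRESET_SIMPLE_1  # default

assert preset_for_kind("basic_analytical_medium2") == PRESET_ANALYTICAL
```
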


@@ -0,0 +1,35 @@
__all__ = ["RemoteServiceMixin"]
class RemoteServiceMixin:
def prompt_template(self, system_message: str, prompt: str):
if "<|BOT|>" in prompt:
_, right = prompt.split("<|BOT|>", 1)
if right:
prompt = prompt.replace("<|BOT|>", "\nStart your response with: ")
else:
prompt = prompt.replace("<|BOT|>", "")
return prompt
def reconfigure(self, **kwargs):
if kwargs.get("model"):
self.model_name = kwargs["model"]
self.set_client(kwargs.get("max_token_length"))
def on_config_saved(self, event):
config = event.data
self.config = config
self.set_client(max_token_length=self.max_token_length)
def tune_prompt_parameters(self, parameters: dict, kind: str):
super().tune_prompt_parameters(parameters, kind)
keys = list(parameters.keys())
valid_keys = ["temperature", "max_tokens"]
for key in keys:
if key not in valid_keys:
del parameters[key]
async def status(self):
self.emit_status()


@@ -5,12 +5,16 @@ import httpx
import structlog
from openai import AsyncOpenAI
from talemate.client.base import STOPPING_STRINGS, ClientBase, ExtraField
from talemate.client.base import STOPPING_STRINGS, ClientBase, Defaults, ExtraField
from talemate.client.registry import register
log = structlog.get_logger("talemate.client.textgenwebui")
class TextGeneratorWebuiClientDefaults(Defaults):
api_key: str = ""
@register()
class TextGeneratorWebuiClient(ClientBase):
auto_determine_prompt_template: bool = True
@@ -24,6 +28,20 @@ class TextGeneratorWebuiClient(ClientBase):
class Meta(ClientBase.Meta):
name_prefix: str = "TextGenWebUI"
title: str = "Text-Generation-WebUI (ooba)"
enable_api_auth: bool = True
defaults: TextGeneratorWebuiClientDefaults = TextGeneratorWebuiClientDefaults()
@property
def request_headers(self):
headers = {}
headers["Content-Type"] = "application/json"
if self.api_key:
headers["Authorization"] = f"Bearer {self.api_key}"
return headers
def __init__(self, **kwargs):
self.api_key = kwargs.pop("api_key", "")
super().__init__(**kwargs)
def tune_prompt_parameters(self, parameters: dict, kind: str):
super().tune_prompt_parameters(parameters, kind)
@@ -35,6 +53,7 @@ class TextGeneratorWebuiClient(ClientBase):
parameters["stop"] = parameters["stopping_strings"]
def set_client(self, **kwargs):
self.api_key = kwargs.get("api_key", self.api_key)
self.client = AsyncOpenAI(base_url=self.api_url + "/v1", api_key="sk-1111")
def finalize_llama3(self, parameters: dict, prompt: str) -> tuple[str, bool]:
@@ -72,9 +91,12 @@ class TextGeneratorWebuiClient(ClientBase):
return prompt, True
async def get_model_name(self):
async with httpx.AsyncClient() as client:
response = await client.get(
f"{self.api_url}/v1/internal/model/info", timeout=2
f"{self.api_url}/v1/internal/model/info",
timeout=2,
headers=self.request_headers,
)
if response.status_code == 404:
raise Exception("Could not find model info (wrong api version?)")
@@ -91,9 +113,6 @@ class TextGeneratorWebuiClient(ClientBase):
Generates text from the given prompt and parameters.
"""
headers = {}
headers["Content-Type"] = "application/json"
parameters["prompt"] = prompt.strip(" ")
async with httpx.AsyncClient() as client:
@@ -101,7 +120,7 @@ class TextGeneratorWebuiClient(ClientBase):
f"{self.api_url}/v1/completions",
json=parameters,
timeout=None,
headers=headers,
headers=self.request_headers,
)
response_data = response.json()
return response_data["choices"][0]["text"]
@@ -121,3 +140,9 @@ class TextGeneratorWebuiClient(ClientBase):
prompt_config["repetition_penalty"] = random.uniform(
rep_pen + min_offset * 0.3, rep_pen + offset * 0.3
)
def reconfigure(self, **kwargs):
if "api_key" in kwargs:
self.api_key = kwargs.pop("api_key")
super().reconfigure(**kwargs)


@@ -7,6 +7,8 @@ from __future__ import annotations
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING
import pydantic
from talemate.emit import Emitter, emit
if TYPE_CHECKING:
@@ -21,17 +23,23 @@ class TalemateCommand(Emitter, ABC):
manager: CommandManager = None
label: str = None
sets_scene_unsaved: bool = True
argument_cls: pydantic.BaseModel | None = None
def __init__(
self,
manager,
*args,
**kwargs,
):
self.scene = manager.scene
self.manager = manager
self.args = args
self.setup_emitter(self.scene)
if self.argument_cls is not None:
self.args = self.argument_cls(**kwargs)
else:
self.args = args
@classmethod
def is_command(cls, name):
return name == cls.name or name in cls.aliases


@@ -1,10 +1,9 @@
from talemate.agents.creator.assistant import ContentGenerationContext
from talemate.commands.base import TalemateCommand
from talemate.commands.manager import register
from talemate.emit import emit
__all__ = [
"CmdAutocompleteDialogue",
]
__all__ = ["CmdAutocompleteDialogue", "CmdAutocomplete"]
@register
@@ -20,7 +19,63 @@ class CmdAutocompleteDialogue(TalemateCommand):
async def run(self):
input = self.args[0]
if len(self.args) > 1:
character_name = self.args[1]
character = self.scene.get_character(character_name)
else:
character = self.scene.get_player_character()
creator = self.scene.get_helper("creator").agent
character = self.scene.get_player_character()
await creator.autocomplete_dialogue(input, character, emit_signal=True)
@register
class CmdAutocomplete(TalemateCommand):
"""
Command class for the 'autocomplete' command
"""
name = "autocomplete"
description = "Generate information for an AI selected actor"
aliases = ["ac"]
argument_cls = ContentGenerationContext
async def run(self):
try:
creator = self.scene.get_helper("creator").agent
context_type, context_name = self.args.computed_context
if context_type == "dialogue":
if not self.args.character:
character = self.scene.get_player_character()
else:
character = self.scene.get_character(self.args.character)
self.scene.log.info(
"Running autocomplete dialogue",
partial=self.args.partial,
character=character,
)
await creator.autocomplete_dialogue(
self.args.partial, character, emit_signal=True
)
return
# force length to 35
self.args.length = 35
self.scene.log.info("Running autocomplete context", args=self.args)
completion = await creator.contextual_generate(self.args)
self.scene.log.info("Autocomplete context complete", completion=completion)
completion = (
completion.replace(f"{context_name}: {self.args.partial}", "")
.lstrip(".")
.strip()
)
emit("autocomplete_suggestion", completion)
except Exception as e:
self.scene.log.error("Error running autocomplete", error=str(e))
emit("autocomplete_suggestion", "")


@@ -39,7 +39,7 @@ class CmdFixContinuityErrors(TalemateCommand):
character = None
fixed_message = await editor.check_continuity_errors(
str(message), character, force=True
str(message), character, force=True, message_id=message_id
)
self.scene.edit_message(message_id, fixed_message)


@@ -2,6 +2,7 @@ import asyncio
from talemate.commands.base import TalemateCommand
from talemate.commands.manager import register
from talemate.emit import emit
@register
@@ -32,11 +33,19 @@ class CmdRebuildArchive(TalemateCommand):
else "PT0S"
)
entries = 0
total_entries = summarizer.agent.estimated_entry_count
while True:
emit(
"status",
message=f"Rebuilding historical archive... {entries}/{total_entries}",
status="busy",
)
more = await summarizer.agent.build_archive(self.scene)
entries += 1
if not more:
break
self.scene.sync_time()
await self.scene.commit_to_memory()
emit("status", message="Historical archive rebuilt", status="success")


@@ -1,3 +1,5 @@
import json
import structlog
from talemate.emit import AbortCommand, Emitter
@@ -38,20 +40,28 @@ class Manager(Emitter):
# commands start with ! and are followed by a command name
cmd = cmd.strip()
cmd_args = ""
cmd_kwargs = {}
if not self.is_command(cmd):
return False
if ":" in cmd:
# split command name and args which are separated by a colon
cmd_name, cmd_args = cmd[1:].split(":", 1)
cmd_args_unsplit = cmd_args
cmd_args = cmd_args.split(":")
else:
cmd_name = cmd[1:]
cmd_args = []
for command_cls in self.command_classes:
if command_cls.is_command(cmd_name):
command = command_cls(self, *cmd_args)
if command_cls.argument_cls:
cmd_kwargs = json.loads(cmd_args_unsplit)
cmd_args = []
command = command_cls(self, *cmd_args, **cmd_kwargs)
try:
self.processing_command = True
command.command_start()
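
An illustrative invocation of the new JSON-argument path (the field names come from `ContentGenerationContext`; the exact payload here is an assumption based on the parsing above):

```python
import json

# For "!command:{...}", everything after the first ":" is parsed as JSON
# when the matched command class defines argument_cls.
cmd = '!autocomplete:{"context": "dialogue:player", "partial": "I think we"}'
cmd_name, cmd_args_unsplit = cmd[1:].split(":", 1)
cmd_kwargs = json.loads(cmd_args_unsplit)  # passed as **kwargs to the command

print(cmd_name)    # autocomplete
print(cmd_kwargs)  # {'context': 'dialogue:player', 'partial': 'I think we'}
```
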


@@ -162,6 +162,11 @@ class CoquiConfig(BaseModel):
api_key: Union[str, None] = None
class GoogleConfig(BaseModel):
gcloud_credentials_path: Union[str, None] = None
gcloud_location: Union[str, None] = None
class TTSVoiceSamples(BaseModel):
label: str
value: str
@@ -337,6 +342,8 @@ class Config(BaseModel):
runpod: RunPodConfig = RunPodConfig()
google: GoogleConfig = GoogleConfig()
chromadb: ChromaDB = ChromaDB()
elevenlabs: ElevenLabsConfig = ElevenLabsConfig()


@@ -33,6 +33,7 @@ from talemate.util import (
fix_faulty_json,
remove_extra_linebreaks,
)
from talemate.util.prompt import condensed
__all__ = [
"Prompt",
@@ -96,14 +97,6 @@ def validate_line(line):
)
def condensed(s):
"""Replace all line breaks in a string with spaces."""
r = s.replace("\n", " ").replace("\r", "")
# also replace multiple spaces with a single space
return re.sub(r"\s+", " ", r)
def clean_response(response):
# remove invalid lines
cleaned = "\n".join(
@@ -379,6 +372,7 @@ class Prompt:
env.globals["len"] = lambda x: len(x)
env.globals["max"] = lambda x, y: max(x, y)
env.globals["min"] = lambda x, y: min(x, y)
env.globals["join"] = lambda x, y: y.join(x)
env.globals["make_list"] = lambda: JoinableList()
env.globals["make_dict"] = lambda: {}
env.globals["count_tokens"] = lambda x: count_tokens(
@@ -389,6 +383,9 @@ class Prompt:
env.globals["emit_system"] = lambda status, message: emit(
"system", status=status, message=message
)
env.globals["llm_can_be_coerced"] = lambda: (
self.client.can_be_coerced if self.client else False
)
env.globals["emit_narrator"] = lambda message: emit("system", message=message)
env.filters["condensed"] = condensed
ctx.update(self.vars)
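
A runnable sketch of the two new template globals (`llm_can_be_coerced` is stubbed here; in Talemate it reads `client.can_be_coerced`):

```python
from jinja2 import Environment

env = Environment()
env.globals["join"] = lambda x, y: y.join(x)
env.globals["llm_can_be_coerced"] = lambda: True  # stand-in for client.can_be_coerced

template = env.from_string(
    "{{ join(lines, ' | ') }}{% if llm_can_be_coerced() %} (coercible){% endif %}"
)
print(template.render(lines=["a", "b", "c"]))  # -> "a | b | c (coercible)"
```
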


@@ -56,7 +56,9 @@ Emotions and actions should be written in italics. For example:
STAY IN THE SCENE. YOU MUST NOT BREAK CHARACTER. YOU MUST NOT BREAK THE FOURTH WALL.
YOU MUST DELIMIT YOUR CONTRIBUTION WITH "-- endofline --" AT THE END OF YOUR CONTRIBUTION.
YOU MUST MARK YOUR CONTRIBUTION WITH "-- endofline --" AT THE END OF YOUR CONTRIBUTION.
YOU MUST ONLY WRITE NEW DIALOGUE FOR {{ talking_character.name.upper() }}.
{% if scene.count_messages() >= 5 and not talking_character.dialogue_instructions %}Use an informal and colloquial register with a conversational tone. Overall, {{ talking_character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy.
{% endif -%}
@@ -109,7 +111,7 @@ YOU MUST DELIMIT YOUR CONTRIBUTION WITH "-- endofline --" AT THE END OF YOUR CON
{%- endif -%}
{% endif -%}
{% for scene_line in scene_context -%}
{{ scene_line }}
{{ scene_line }}-- endofline --
{% endfor %}
{% endblock -%}


@@ -13,9 +13,9 @@
<|SECTION:TASK|>
Continue {{ character.name }}'s unfinished line in this screenplay.
Your response MUST only be the new parts of the dialogue, not the entire line.
Your response MUST only be the new parts of {{ character.name }}'s dialogue, not the entire line.
Partial line: {{ character.name }}: {{ input }}
Continue this text: {{ character.name }}: {{ input }}
{% if not can_coerce -%}
Continuation:
<|CLOSE_SECTION|>


@@ -0,0 +1,25 @@
{% block rendered_context -%}
<|SECTION:CONTEXT|>
{%- with memory_query=scene.snapshot() -%}
{% include "extra-context.jinja2" %}
{% endwith %}
<|CLOSE_SECTION|>
{% endblock -%}
<|SECTION:SCENE|>
{% for scene_context in scene.context_history(budget=min(2048, max_tokens-300-count_tokens(self.rendered_context())), min_dialogue=20, sections=False) -%}
{{ scene_context }}
{% endfor %}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Continue the unfinished section of the next narrative.
Your response MUST only be the new parts of the narrative completion, not the entire line. Never add dialogue.
Continue this text: {{ input }}
{% if not can_coerce -%}
Continuation:
<|CLOSE_SECTION|>
{%- else -%}
<|CLOSE_SECTION|>
{{ bot_token }}{{ input }}
{%- endif -%}


@@ -44,16 +44,18 @@ Use a simple, easy to read writing format.
{% elif context_typ == "character dialogue" %}
Generate a new line of example dialogue for {{ character.name }}.
{%- if character.example_dialogue -%}
Existing Dialogue Examples:
{% for line in character.example_dialogue %}
{{ line }}
{% endfor %}
{%- endif %}
You must only respond with the generated dialogue example.
Always contain actions in asterisks. For example, *{{ character.name}} smiles*.
Always contain dialogue in quotation marks. For example, {{ character.name}}: "Hello!"
{%- if character.dialogue_instructions -%}
{% if character.dialogue_instructions %}
Dialogue instructions for {{ character.name }}: {{ character.dialogue_instructions }}
{% endif -%}
{#- GENERAL CONTEXT -#}
@@ -67,6 +69,22 @@ Use a simple, easy to read writing format.
{% endif %}
{% if generation_context.instructions %}Additional instructions: {{ generation_context.instructions }}{% endif %}
<|CLOSE_SECTION|>
{% if can_coerce -%}
{{ bot_token }}
{%- if context_typ == 'character attribute' -%}
{{ character.name }}'s {{ context_name }}:{{ generation_context.partial }}
{%- elif context_typ == 'character dialogue' -%}
{{ character.name }}:{{ generation_context.partial }}
{%- else -%}
{{ context_name }}:{{ generation_context.partial }}
{%- endif -%}
{%- elif generation_context.partial -%}
Continue the partially generated text for "{{ context_name }}".
Your response MUST only be the new parts of the text, not the entire text.
Continue this text: {{ generation_context.partial }}
{%- else -%}
{{ bot_token }}
{%- if context_typ == 'character attribute' -%}
{{ character.name }}'s {{ context_name }}:
@@ -74,4 +92,5 @@ Use a simple, easy to read writing format.
{{ character.name }}:
{%- else -%}
{{ context_name }}:
{%- endif -%}
{%- endif -%}


@@ -3,13 +3,20 @@
{{ character.description }}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Your task is to determine fitting dialogue instructions for this character.
Your task is to determine fitting dialogue instructions for {{ character.name }}.
By default all actors are given the following instructions for their character(s):
Dialogue instructions: "Use an informal and colloquial register with a conversational tone. Overall, {{ character.name }}'s dialog is informal, conversational, natural, and spontaneous, with a sense of immediacy."
However you can override this default instruction by providing your own instructions below.
{{ character.name }} is a character in {{ scene.context }}. The goal is always for {{ character.name }} to feel like a believable character in the context of the scene.
The character MUST feel relatable to the audience.
You must use simple language to describe the character's dialogue instructions.
Keep the format similar and stick to one paragraph.
<|CLOSE_SECTION|>
{{ bot_token }}Dialogue instructions:


@@ -1,56 +1,37 @@
{% if character -%}
{% set content_block_identifier = character.name + "'s next dialogue" %}
{% set content_block_identifier = character.name + "'s next scene" %}
{% else -%}
{% set content_block_identifier = "next narrative" %}
{% endif -%}
{% block rendered_context -%}
<|SECTION:CONTEXT|>
{%- with memory_query=scene.snapshot() -%}
{% include "extra-context.jinja2" %}
{% endwith %}
{% if character %}
{{ character.name }}'s description: {{ character.description|condensed }}
{% endif %}
{{ text }}
<|CLOSE_SECTION|>
{% endblock -%}
<|SECTION:SCENE|>
{% set scene_history=scene.context_history(budget=max_tokens-512-count_tokens(self.rendered_context())) -%}
{% set final_line_number=len(scene_history) -%}
{% for scene_context in scene_history -%}
{{ loop.index }}. {{ scene_context }}
{% endfor -%}
{% if not scene.history -%}
No dialogue so far
{% endif -%}
<|SECTION:STORY DEVELOPMENT|>
{% set scene_history=scene.context_history(budget=max_tokens-512-count_tokens(self.rendered_context()), message_id=message_id, include_reinfocements=False) -%}
{{ agent_action("summarizer", "summarize", text=join(scene_history, '\n\n'), method="facts") }}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
What are continuity errors?
Continuity errors are mistakes in a story that occur when something changes from one scene to the next. This could be a character's appearance, state of clothing, the time of day, or even the weather. These errors can be distracting for the reader and can take them out of the story. It's important to catch these errors and fix them before the story is published.
{% if character -%}
CAREFULLY Analyze {{ character.name }}'s next line in the scene for continuity errors.
CAREFULLY Analyze {{ character.name }}'s next scene for logical continuity errors in the context of the story developments so far.
{% else -%}
CAREFULLY Analyze the next line in the scene for continuity errors.
{% endif -%}
YOU MUST DO THIS LINE BY LINE PROVIDING ANALYSIS FOR EACH LINE SEPARATELY.
CAREFULLY Analyze the next scene for continuity errors.
{% endif %}
```{{ content_block_identifier }}
{{ content }}
{{ instruct_text("Create a highly accurate one line summary for the scene above. Important is anything that causes a state change in the scene, characters or objects. Use simple, clear language, and note details. Use exact words. YOUR RESPONSE MUST ONLY BE THE SUMMARY.", content)}}
```
YOU MUST NOT PROVIDE REPLACEMENT SUGGESTIONS WHEN YOU FIND CONTINUITY ERRORS.
You are looking for clear mistakes in objects or characters' state.
THINK CAREFULLY, consider state of the scene, the characters, clothing, items present or not longer present. If you find any continuity errors, list them in the response.
For example:
It is possible for the text to have multiple continuity errors. You must identify all of them.
- Characters interacting with objects in a way that contradicts the object's state as per the story developments.
- Characters forgetting something they said / agreed to earlier.
Always analyze the full dialogue, don't stop if you find one error.
THINK CAREFULLY, consider the chronological order of the story. If you find any logical continuity mistakes specifically in {{ content_block_identifier }}, list them.
You response must be in the following format:
Your response must be in the following format:
ERROR 1: explanation of error
ERROR 2: explanation of error
ERROR 3: explanation of error
ERROR: [Description of the logical contradiction] - one per line
{% if llm_can_be_coerced() -%}
{{ bot_token }}I carefully analyzed the story developments and compared against the next proposed scene, and I found that there are
{% endif -%}


@@ -1,8 +1,8 @@
{% if character -%}
{% set content_block_identifier = character.name + "'s next dialogue" %}
{% set content_block_identifier = character.name + "'s next scene (ID 11)" %}
{% set content_fix_identifier = character.name + "'s adjusted dialogue" %}
{% else -%}
{% set content_block_identifier = "next narrative" %}
{% set content_block_identifier = "next narrative (ID 11)" %}
{% set content_fix_identifier = "adjusted narrative" %}
{% endif -%}
{% set _ = set_state("content_fix_identifier", content_fix_identifier) %}
@@ -33,19 +33,19 @@ No dialogue so far
```{{ content_block_identifier }}
{{ content }}
```
The following continuity errors have been identified in "{{ content_block_identifier }}":
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Write a revised draft of "{{ content_block_identifier }}" and fix the continuity errors identified in "{{ content_block_identifier }}":
{% for error in errors -%}
{{ error }}
{% endfor %}
<|CLOSE_SECTION|>
<|SECTION:TASK|>
Write a revised draft of "{{ content_block_identifier }}" and fix the continuity errors identified.
YOU MUST NOT CHANGE THE MEANING, PLOT DIRECTION OR TONE OF THE TEXT.
YOU MUST ONLY FIX CONTINUITY ERRORS, KEEP THE TONE, STYLE, AND MEANING THE SAME.
YOU MUST ONLY FIX CONTINUITY ERRORS.
Your revision must be framed between "```{{ content_fix_identifier }}" and "```". Your revision must only be {{ character.name }}'s dialogue and must not include any other character's dialogue.
<|CLOSE_SECTION|>
{{ bot_token }}```{{ content_fix_identifier }}<|TRAILING_NEW_LINE|>
{% if llm_can_be_coerced() -%}
{{ bot_token }}```{{ content_fix_identifier }}<|TRAILING_NEW_LINE|>
{% endif -%}


@@ -24,9 +24,9 @@ Use an informal and colloquial register with a conversational tone. Overall, the
Narration style should be that of a 90s point and click adventure game. You are omniscient and can describe the scene in detail.
[$REPETITION|Narration is getting repetitive. Try to choose different words to break up the repetitive text.]
Only generate new narration. {{ extra_instructions }}
{% include "rerun-context.jinja2" -%}
[$REPETITION|Narration is getting repetitive. Try to choose different words to break up the repetitive text.]
<|CLOSE_SECTION|>
{{ bot_token }}New Narration:


@@ -1,12 +1,19 @@
{% if summarization_method == "facts" -%}
{% set output_type = "factual list" -%}
{% else -%}
{% set output_type = "narrative description" -%}
{% endif -%}
{% if extra_context -%}
<|SECTION:PREVIOUS CONTEXT|>
{{ extra_context }}
<|CLOSE_SECTION|>
{% endif -%}
<|SECTION:TASK|>
Question: What happens explicitly within the dialogue section alpha below? Summarize into narrative description.
Question: What happens explicitly within the dialogue section alpha below? Summarize into a {{output_type}}.
Content Context: This is a specific scene from {{ scene.context }}
{% if output_type == "narrative description" %}
Use an informal and colloquial register with a conversational tone. Overall, the narrative is informal, conversational, natural, and spontaneous, with a sense of immediacy.
{% endif %}
{% if summarization_method == "long" -%}
This should be a detailed summary of the dialogue, including all the juicy details.
@@ -16,7 +23,11 @@ This should be a short and specific summary of the dialogue, including the most
YOU MUST ONLY SUMMARIZE THE CONTENT IN DIALOGUE SECTION ALPHA.
Expected Answer: A summarized narrative description of the dialogue section alpha, that can be inserted into the ongoing story in place of the dialogue.
{% if output_type == "narrative description" %}
Expected Answer: A summarized {{output_type}} of the dialogue section alpha, that can be inserted into the ongoing story in place of the dialogue.
{% elif output_type == "factual list" %}
Expected Answer: A highly accurate numerical chronological list of the events and state changes that occur in the dialogue section alpha. Important is anything that causes a state change in the scene, characters or objects. Use simple, clear language, and note details. Use exact words. Note all the state changes. Leave nothing out.
{% endif %}
{% if extra_instructions -%}
{{ extra_instructions }}
{% endif -%}


@@ -2,6 +2,7 @@ import pydantic
import structlog
from talemate.agents.creator.assistant import ContentGenerationContext
from talemate.emit import emit
from talemate.instance import get_agent
log = structlog.get_logger("talemate.server.assistant")
@@ -41,3 +42,46 @@ class AssistantPlugin:
},
}
)
async def handle_autocomplete(self, data: dict):
data = ContentGenerationContext(**data)
try:
creator = self.scene.get_helper("creator").agent
context_type, context_name = data.computed_context
if context_type == "dialogue":
if not data.character:
character = self.scene.get_player_character()
else:
character = self.scene.get_character(data.character)
log.info(
"Running autocomplete dialogue",
partial=data.partial,
character=character,
)
await creator.autocomplete_dialogue(
data.partial, character, emit_signal=True
)
return
elif context_type == "narrative":
log.info("Running autocomplete narrative", partial=data.partial)
await creator.autocomplete_narrative(data.partial, emit_signal=True)
return
# force length to 35
data.length = 35
log.info("Running autocomplete context", args=data)
completion = await creator.contextual_generate(data)
log.info("Autocomplete context complete", completion=completion)
completion = (
completion.replace(f"{context_name}: {data.partial}", "")
.lstrip(".")
.strip()
)
emit("autocomplete_suggestion", completion)
except Exception as e:
log.error("Error running autocomplete", error=str(e))
emit("autocomplete_suggestion", "")


@@ -30,6 +30,14 @@ class DevToolsPlugin:
async def handle_test_prompt(self, data):
payload = TestPromptPayload(**data)
client = self.websocket_handler.llm_clients[payload.client_name]["client"]
log.info(
"Testing prompt",
payload={
k: v for k, v in payload.generation_parameters.items() if k != "prompt"
},
)
response = await client.generate(
payload.prompt,
payload.generation_parameters,


@@ -515,7 +515,7 @@ class WebsocketHandler(Receiver):
"name": emission.id,
"status": emission.status,
"data": emission.data,
"max_token_length": client.max_token_length if client else 4096,
"max_token_length": client.max_token_length if client else 8192,
"api_url": getattr(client, "api_url", None) if client else None,
"api_url": getattr(client, "api_url", None) if client else None,
"api_key": getattr(client, "api_key", None) if client else None,
@@ -757,6 +757,18 @@ class WebsocketHandler(Receiver):
self.scene.delete_message(message_id)
def edit_message(self, message_id, new_text):
message = self.scene.get_message(message_id)
editor = instance.get_agent("editor")
if editor.enabled and message.typ == "character":
character = self.scene.get_character(message.character_name)
loop = asyncio.get_event_loop()
new_text = loop.run_until_complete(
editor.fix_exposition(new_text, character)
)
self.scene.edit_message(message_id, new_text)
def apply_scene_config(self, scene_config: dict):


@@ -4,6 +4,7 @@ from typing import Any, Union
import pydantic
import structlog
from talemate.instance import get_agent
from talemate.world_state.manager import (
StateReinforcementTemplate,
WorldStateManager,
@@ -105,6 +106,10 @@ class DeleteWorldStateTemplatePayload(pydantic.BaseModel):
template: StateReinforcementTemplate
class GenerateCharacterDialogueInstructionsPayload(pydantic.BaseModel):
name: str
class WorldStateManagerPlugin:
router = "world_state_manager"
@@ -602,3 +607,36 @@ class WorldStateManagerPlugin:
await self.handle_get_templates({})
await self.signal_operation_done()
async def handle_generate_character_dialogue_instructions(self, data):
payload = GenerateCharacterDialogueInstructionsPayload(**data)
log.debug("Generate character dialogue instructions", name=payload.name)
character = self.scene.get_character(payload.name)
if not character:
log.error("Character not found", name=payload.name)
return
creator = get_agent("creator")
instructions = await creator.determine_character_dialogue_instructions(
character
)
character.dialogue_instructions = instructions
self.websocket_handler.queue_put(
{
"type": "world_state_manager",
"action": "character_dialogue_instructions_generated",
"data": {
"name": payload.name,
"instructions": instructions,
},
}
)
await self.signal_operation_done()
self.scene.emit_status()


@@ -46,6 +46,7 @@ from talemate.scene_message import (
TimePassageMessage,
)
from talemate.util import colored_text, count_tokens, extract_metadata, wrap_text
from talemate.util.prompt import condensed
from talemate.world_state import WorldState
from talemate.world_state.manager import WorldStateManager
@@ -1385,11 +1386,29 @@ class Scene(Emitter):
conversation_format = self.conversation_format
actor_direction_mode = self.get_helper("director").agent.actor_direction_mode
history_offset = kwargs.get("history_offset", 0)
message_id = kwargs.get("message_id")
include_reinfocements = kwargs.get("include_reinfocements", True)
# if message id is provided, find the message in the history
if message_id:
if history_offset:
log.warning(
"context_history",
message="history_offset is ignored when message_id is provided",
)
message_index = self.message_index(message_id)
history_start = message_index - 1
else:
history_start = len(self.history) - (1 + history_offset)
# collect dialogue
count = 0
for i in range(len(self.history) - 1, -1, -1):
for i in range(history_start, -1, -1):
count += 1
message = self.history[i]
@@ -1397,6 +1416,9 @@ class Scene(Emitter):
if message.hidden:
continue
if isinstance(message, ReinforcementMessage) and not include_reinfocements:
continue
if isinstance(message, DirectorMessage):
if not keep_director:
continue
@@ -1441,7 +1463,7 @@ class Scene(Emitter):
if count_tokens(parts_context) + count_tokens(text) > budget_context:
break
parts_context.insert(0, text)
parts_context.insert(0, condensed(text))
if count_tokens(parts_context + parts_dialogue) < 1024:
intro = self.get_intro()
@@ -2202,6 +2224,7 @@ class Scene(Emitter):
"ts": scene.ts,
"help": scene.help,
"experimental": scene.experimental,
"restore_from": scene.restore_from,
}
@property


@@ -949,6 +949,7 @@ def ensure_dialog_line_format(line: str, default_wrap: str = None) -> str:
line = line.replace('*, "', '* "')
line = line.replace('*. "', '* "')
line = line.replace("*.", ".*")
# if the line ends with a whitespace followed by a classifier, strip both from the end
# as this indicates the remnants of a partial segment that was removed.


@@ -1,4 +1,6 @@
__all__ = ["replace_special_tokens"]
import re
__all__ = ["condensed", "replace_special_tokens"]
def replace_special_tokens(prompt: str):
@@ -12,3 +14,11 @@ def replace_special_tokens(prompt: str):
return prompt.replace("<|TRAILING_NEW_LINE|>", "\n").replace(
"<|TRAILING_SPACE|>", " "
)
def condensed(s):
"""Replace all line breaks in a string with spaces."""
r = s.replace("\n", " ").replace("\r", "")
# also replace multiple spaces with a single space
return re.sub(r"\s+", " ", r)


@@ -1,18 +1,22 @@
{
"name": "talemate_frontend",
"version": "0.24.0",
"version": "0.25.0",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "talemate_frontend",
"version": "0.24.0",
"version": "0.25.0",
"dependencies": {
"@codemirror/lang-markdown": "^6.2.5",
"@codemirror/theme-one-dark": "^6.1.2",
"@mdi/font": "7.4.47",
"codemirror": "^6.0.1",
"core-js": "^3.8.3",
"dot-prop": "^8.0.2",
"roboto-fontface": "*",
"vue": "^3.2.13",
"vue-codemirror": "^6.1.1",
"vuetify": "^3.5.0",
"webfontloader": "^1.0.0"
},
@@ -1823,6 +1827,149 @@
"node": ">=6.9.0"
}
},
"node_modules/@codemirror/autocomplete": {
"version": "6.16.0",
"resolved": "https://registry.npmjs.org/@codemirror/autocomplete/-/autocomplete-6.16.0.tgz",
"integrity": "sha512-P/LeCTtZHRTCU4xQsa89vSKWecYv1ZqwzOd5topheGRf+qtacFgBeIMQi3eL8Kt/BUNvxUWkx+5qP2jlGoARrg==",
"dependencies": {
"@codemirror/language": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.17.0",
"@lezer/common": "^1.0.0"
},
"peerDependencies": {
"@codemirror/language": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0",
"@lezer/common": "^1.0.0"
}
},
"node_modules/@codemirror/commands": {
"version": "6.5.0",
"resolved": "https://registry.npmjs.org/@codemirror/commands/-/commands-6.5.0.tgz",
"integrity": "sha512-rK+sj4fCAN/QfcY9BEzYMgp4wwL/q5aj/VfNSoH1RWPF9XS/dUwBkvlL3hpWgEjOqlpdN1uLC9UkjJ4tmyjJYg==",
"dependencies": {
"@codemirror/language": "^6.0.0",
"@codemirror/state": "^6.4.0",
"@codemirror/view": "^6.0.0",
"@lezer/common": "^1.1.0"
}
},
"node_modules/@codemirror/lang-css": {
"version": "6.2.1",
"resolved": "https://registry.npmjs.org/@codemirror/lang-css/-/lang-css-6.2.1.tgz",
"integrity": "sha512-/UNWDNV5Viwi/1lpr/dIXJNWiwDxpw13I4pTUAsNxZdg6E0mI2kTQb0P2iHczg1Tu+H4EBgJR+hYhKiHKko7qg==",
"dependencies": {
"@codemirror/autocomplete": "^6.0.0",
"@codemirror/language": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@lezer/common": "^1.0.2",
"@lezer/css": "^1.0.0"
}
},
"node_modules/@codemirror/lang-html": {
"version": "6.4.9",
"resolved": "https://registry.npmjs.org/@codemirror/lang-html/-/lang-html-6.4.9.tgz",
"integrity": "sha512-aQv37pIMSlueybId/2PVSP6NPnmurFDVmZwzc7jszd2KAF8qd4VBbvNYPXWQq90WIARjsdVkPbw29pszmHws3Q==",
"dependencies": {
"@codemirror/autocomplete": "^6.0.0",
"@codemirror/lang-css": "^6.0.0",
"@codemirror/lang-javascript": "^6.0.0",
"@codemirror/language": "^6.4.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.17.0",
"@lezer/common": "^1.0.0",
"@lezer/css": "^1.1.0",
"@lezer/html": "^1.3.0"
}
},
"node_modules/@codemirror/lang-javascript": {
"version": "6.2.2",
"resolved": "https://registry.npmjs.org/@codemirror/lang-javascript/-/lang-javascript-6.2.2.tgz",
"integrity": "sha512-VGQfY+FCc285AhWuwjYxQyUQcYurWlxdKYT4bqwr3Twnd5wP5WSeu52t4tvvuWmljT4EmgEgZCqSieokhtY8hg==",
"dependencies": {
"@codemirror/autocomplete": "^6.0.0",
"@codemirror/language": "^6.6.0",
"@codemirror/lint": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.17.0",
"@lezer/common": "^1.0.0",
"@lezer/javascript": "^1.0.0"
}
},
"node_modules/@codemirror/lang-markdown": {
"version": "6.2.5",
"resolved": "https://registry.npmjs.org/@codemirror/lang-markdown/-/lang-markdown-6.2.5.tgz",
"integrity": "sha512-Hgke565YcO4fd9pe2uLYxnMufHO5rQwRr+AAhFq8ABuhkrjyX8R5p5s+hZUTdV60O0dMRjxKhBLxz8pu/MkUVA==",
"dependencies": {
"@codemirror/autocomplete": "^6.7.1",
"@codemirror/lang-html": "^6.0.0",
"@codemirror/language": "^6.3.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0",
"@lezer/common": "^1.2.1",
"@lezer/markdown": "^1.0.0"
}
},
"node_modules/@codemirror/language": {
"version": "6.10.1",
"resolved": "https://registry.npmjs.org/@codemirror/language/-/language-6.10.1.tgz",
"integrity": "sha512-5GrXzrhq6k+gL5fjkAwt90nYDmjlzTIJV8THnxNFtNKWotMIlzzN+CpqxqwXOECnUdOndmSeWntVrVcv5axWRQ==",
"dependencies": {
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.23.0",
"@lezer/common": "^1.1.0",
"@lezer/highlight": "^1.0.0",
"@lezer/lr": "^1.0.0",
"style-mod": "^4.0.0"
}
},
"node_modules/@codemirror/lint": {
"version": "6.7.0",
"resolved": "https://registry.npmjs.org/@codemirror/lint/-/lint-6.7.0.tgz",
"integrity": "sha512-LTLOL2nT41ADNSCCCCw8Q/UmdAFzB23OUYSjsHTdsVaH0XEo+orhuqbDNWzrzodm14w6FOxqxpmy4LF8Lixqjw==",
"dependencies": {
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0",
"crelt": "^1.0.5"
}
},
"node_modules/@codemirror/search": {
"version": "6.5.6",
"resolved": "https://registry.npmjs.org/@codemirror/search/-/search-6.5.6.tgz",
"integrity": "sha512-rpMgcsh7o0GuCDUXKPvww+muLA1pDJaFrpq/CCHtpQJYz8xopu4D1hPcKRoDD0YlF8gZaqTNIRa4VRBWyhyy7Q==",
"dependencies": {
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0",
"crelt": "^1.0.5"
}
},
"node_modules/@codemirror/state": {
"version": "6.4.1",
"resolved": "https://registry.npmjs.org/@codemirror/state/-/state-6.4.1.tgz",
"integrity": "sha512-QkEyUiLhsJoZkbumGZlswmAhA7CBU02Wrz7zvH4SrcifbsqwlXShVXg65f3v/ts57W3dqyamEriMhij1Z3Zz4A=="
},
"node_modules/@codemirror/theme-one-dark": {
"version": "6.1.2",
"resolved": "https://registry.npmjs.org/@codemirror/theme-one-dark/-/theme-one-dark-6.1.2.tgz",
"integrity": "sha512-F+sH0X16j/qFLMAfbciKTxVOwkdAS336b7AXTKOZhy8BR3eH/RelsnLgLFINrpST63mmN2OuwUt0W2ndUgYwUA==",
"dependencies": {
"@codemirror/language": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0",
"@lezer/highlight": "^1.0.0"
}
},
"node_modules/@codemirror/view": {
"version": "6.26.3",
"resolved": "https://registry.npmjs.org/@codemirror/view/-/view-6.26.3.tgz",
"integrity": "sha512-gmqxkPALZjkgSxIeeweY/wGQXBfwTUaLs8h7OKtSwfbj9Ct3L11lD+u1sS7XHppxFQoMDiMDp07P9f3I2jWOHw==",
"dependencies": {
"@codemirror/state": "^6.4.0",
"style-mod": "^4.1.0",
"w3c-keyname": "^2.2.4"
}
},
"node_modules/@discoveryjs/json-ext": {
"version": "0.5.7",
"resolved": "https://registry.npmmirror.com/@discoveryjs/json-ext/-/json-ext-0.5.7.tgz",
@@ -1986,6 +2133,66 @@
"integrity": "sha512-Hcv+nVC0kZnQ3tD9GVu5xSMR4VVYOteQIr/hwFPVEvPdlXqgGEuRjiheChHgdM+JyqdgNcmzZOX/tnl0JOiI7A==",
"dev": true
},
"node_modules/@lezer/common": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/@lezer/common/-/common-1.2.1.tgz",
"integrity": "sha512-yemX0ZD2xS/73llMZIK6KplkjIjf2EvAHcinDi/TfJ9hS25G0388+ClHt6/3but0oOxinTcQHJLDXh6w1crzFQ=="
},
"node_modules/@lezer/css": {
"version": "1.1.8",
"resolved": "https://registry.npmjs.org/@lezer/css/-/css-1.1.8.tgz",
"integrity": "sha512-7JhxupKuMBaWQKjQoLtzhGj83DdnZY9MckEOG5+/iLKNK2ZJqKc6hf6uc0HjwCX7Qlok44jBNqZhHKDhEhZYLA==",
"dependencies": {
"@lezer/common": "^1.2.0",
"@lezer/highlight": "^1.0.0",
"@lezer/lr": "^1.0.0"
}
},
"node_modules/@lezer/highlight": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/@lezer/highlight/-/highlight-1.2.0.tgz",
"integrity": "sha512-WrS5Mw51sGrpqjlh3d4/fOwpEV2Hd3YOkp9DBt4k8XZQcoTHZFB7sx030A6OcahF4J1nDQAa3jXlTVVYH50IFA==",
"dependencies": {
"@lezer/common": "^1.0.0"
}
},
"node_modules/@lezer/html": {
"version": "1.3.9",
"resolved": "https://registry.npmjs.org/@lezer/html/-/html-1.3.9.tgz",
"integrity": "sha512-MXxeCMPyrcemSLGaTQEZx0dBUH0i+RPl8RN5GwMAzo53nTsd/Unc/t5ZxACeQoyPUM5/GkPLRUs2WliOImzkRA==",
"dependencies": {
"@lezer/common": "^1.2.0",
"@lezer/highlight": "^1.0.0",
"@lezer/lr": "^1.0.0"
}
},
"node_modules/@lezer/javascript": {
"version": "1.4.15",
"resolved": "https://registry.npmjs.org/@lezer/javascript/-/javascript-1.4.15.tgz",
"integrity": "sha512-B082ZdjI0vo2AgLqD834GlRTE9gwRX8NzHzKq5uDwEnQ9Dq+A/CEhd3nf68tiNA2f9O+8jS1NeSTUYT9IAqcTw==",
"dependencies": {
"@lezer/common": "^1.2.0",
"@lezer/highlight": "^1.1.3",
"@lezer/lr": "^1.3.0"
}
},
"node_modules/@lezer/lr": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/@lezer/lr/-/lr-1.4.0.tgz",
"integrity": "sha512-Wst46p51km8gH0ZUmeNrtpRYmdlRHUpN1DQd3GFAyKANi8WVz8c2jHYTf1CVScFaCjQw1iO3ZZdqGDxQPRErTg==",
"dependencies": {
"@lezer/common": "^1.0.0"
}
},
"node_modules/@lezer/markdown": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/@lezer/markdown/-/markdown-1.3.0.tgz",
"integrity": "sha512-ErbEQ15eowmJUyT095e9NJc3BI9yZ894fjSDtHftD0InkfUBGgnKSU6dvan9jqsZuNHg2+ag/1oyDRxNsENupQ==",
"dependencies": {
"@lezer/common": "^1.0.0",
"@lezer/highlight": "^1.0.0"
}
},
"node_modules/@mdi/font": {
"version": "7.4.47",
"resolved": "https://registry.npmjs.org/@mdi/font/-/font-7.4.47.tgz",
@@ -4097,6 +4304,20 @@
"node": ">=6"
}
},
"node_modules/codemirror": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/codemirror/-/codemirror-6.0.1.tgz",
"integrity": "sha512-J8j+nZ+CdWmIeFIGXEFbFPtpiYacFMDR8GlHK3IyHQJMCaVRfGx9NT+Hxivv1ckLWPvNdZqndbr/7lVhrf/Svg==",
"dependencies": {
"@codemirror/autocomplete": "^6.0.0",
"@codemirror/commands": "^6.0.0",
"@codemirror/language": "^6.0.0",
"@codemirror/lint": "^6.0.0",
"@codemirror/search": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0"
}
},
"node_modules/color-convert": {
"version": "1.9.3",
"resolved": "https://registry.npmmirror.com/color-convert/-/color-convert-1.9.3.tgz",
@@ -4331,6 +4552,11 @@
"node": ">=10"
}
},
"node_modules/crelt": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/crelt/-/crelt-1.0.6.tgz",
"integrity": "sha512-VQ2MBenTq1fWZUH9DJNGti7kKv6EeAuYr3cLwxUWhIu1baTaXh4Ib5W2CqHVqib4/MqbYGJqiL3Zb8GJZr3l4g=="
},
"node_modules/cross-spawn": {
"version": "6.0.5",
"resolved": "https://registry.npmmirror.com/cross-spawn/-/cross-spawn-6.0.5.tgz",
@@ -9870,6 +10096,11 @@
"node": ">=8"
}
},
"node_modules/style-mod": {
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/style-mod/-/style-mod-4.1.2.tgz",
"integrity": "sha512-wnD1HyVqpJUI2+eKZ+eo1UwghftP6yuFheBqqe+bWCotBjC2K1YnteJILRMs3SM4V/0dLEW1SC27MWP5y+mwmw=="
},
"node_modules/stylehacks": {
"version": "5.1.1",
"resolved": "https://registry.npmmirror.com/stylehacks/-/stylehacks-5.1.1.tgz",
@@ -10401,6 +10632,21 @@
}
}
},
"node_modules/vue-codemirror": {
"version": "6.1.1",
"resolved": "https://registry.npmjs.org/vue-codemirror/-/vue-codemirror-6.1.1.tgz",
"integrity": "sha512-rTAYo44owd282yVxKtJtnOi7ERAcXTeviwoPXjIc6K/IQYUsoDkzPvw/JDFtSP6T7Cz/2g3EHaEyeyaQCKoDMg==",
"dependencies": {
"@codemirror/commands": "6.x",
"@codemirror/language": "6.x",
"@codemirror/state": "6.x",
"@codemirror/view": "6.x"
},
"peerDependencies": {
"codemirror": "6.x",
"vue": "3.x"
}
},
"node_modules/vue-eslint-parser": {
"version": "8.3.0",
"resolved": "https://registry.npmmirror.com/vue-eslint-parser/-/vue-eslint-parser-8.3.0.tgz",
@@ -10614,6 +10860,11 @@
}
}
},
"node_modules/w3c-keyname": {
"version": "2.2.8",
"resolved": "https://registry.npmjs.org/w3c-keyname/-/w3c-keyname-2.2.8.tgz",
"integrity": "sha512-dpojBhNsCNN7T82Tm7k26A6G9ML3NkhDsnw9n/eoxSRlVBB4CEtIQ/KTCLI2Fwf3ataSXRhYFkQi3SlnFwPvPQ=="
},
"node_modules/watchpack": {
"version": "2.4.0",
"resolved": "https://registry.npmmirror.com/watchpack/-/watchpack-2.4.0.tgz",
@@ -12590,6 +12841,143 @@
"to-fast-properties": "^2.0.0"
}
},
"@codemirror/autocomplete": {
"version": "6.16.0",
"resolved": "https://registry.npmjs.org/@codemirror/autocomplete/-/autocomplete-6.16.0.tgz",
"integrity": "sha512-P/LeCTtZHRTCU4xQsa89vSKWecYv1ZqwzOd5topheGRf+qtacFgBeIMQi3eL8Kt/BUNvxUWkx+5qP2jlGoARrg==",
"requires": {
"@codemirror/language": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.17.0",
"@lezer/common": "^1.0.0"
}
},
"@codemirror/commands": {
"version": "6.5.0",
"resolved": "https://registry.npmjs.org/@codemirror/commands/-/commands-6.5.0.tgz",
"integrity": "sha512-rK+sj4fCAN/QfcY9BEzYMgp4wwL/q5aj/VfNSoH1RWPF9XS/dUwBkvlL3hpWgEjOqlpdN1uLC9UkjJ4tmyjJYg==",
"requires": {
"@codemirror/language": "^6.0.0",
"@codemirror/state": "^6.4.0",
"@codemirror/view": "^6.0.0",
"@lezer/common": "^1.1.0"
}
},
"@codemirror/lang-css": {
"version": "6.2.1",
"resolved": "https://registry.npmjs.org/@codemirror/lang-css/-/lang-css-6.2.1.tgz",
"integrity": "sha512-/UNWDNV5Viwi/1lpr/dIXJNWiwDxpw13I4pTUAsNxZdg6E0mI2kTQb0P2iHczg1Tu+H4EBgJR+hYhKiHKko7qg==",
"requires": {
"@codemirror/autocomplete": "^6.0.0",
"@codemirror/language": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@lezer/common": "^1.0.2",
"@lezer/css": "^1.0.0"
}
},
"@codemirror/lang-html": {
"version": "6.4.9",
"resolved": "https://registry.npmjs.org/@codemirror/lang-html/-/lang-html-6.4.9.tgz",
"integrity": "sha512-aQv37pIMSlueybId/2PVSP6NPnmurFDVmZwzc7jszd2KAF8qd4VBbvNYPXWQq90WIARjsdVkPbw29pszmHws3Q==",
"requires": {
"@codemirror/autocomplete": "^6.0.0",
"@codemirror/lang-css": "^6.0.0",
"@codemirror/lang-javascript": "^6.0.0",
"@codemirror/language": "^6.4.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.17.0",
"@lezer/common": "^1.0.0",
"@lezer/css": "^1.1.0",
"@lezer/html": "^1.3.0"
}
},
"@codemirror/lang-javascript": {
"version": "6.2.2",
"resolved": "https://registry.npmjs.org/@codemirror/lang-javascript/-/lang-javascript-6.2.2.tgz",
"integrity": "sha512-VGQfY+FCc285AhWuwjYxQyUQcYurWlxdKYT4bqwr3Twnd5wP5WSeu52t4tvvuWmljT4EmgEgZCqSieokhtY8hg==",
"requires": {
"@codemirror/autocomplete": "^6.0.0",
"@codemirror/language": "^6.6.0",
"@codemirror/lint": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.17.0",
"@lezer/common": "^1.0.0",
"@lezer/javascript": "^1.0.0"
}
},
"@codemirror/lang-markdown": {
"version": "6.2.5",
"resolved": "https://registry.npmjs.org/@codemirror/lang-markdown/-/lang-markdown-6.2.5.tgz",
"integrity": "sha512-Hgke565YcO4fd9pe2uLYxnMufHO5rQwRr+AAhFq8ABuhkrjyX8R5p5s+hZUTdV60O0dMRjxKhBLxz8pu/MkUVA==",
"requires": {
"@codemirror/autocomplete": "^6.7.1",
"@codemirror/lang-html": "^6.0.0",
"@codemirror/language": "^6.3.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0",
"@lezer/common": "^1.2.1",
"@lezer/markdown": "^1.0.0"
}
},
"@codemirror/language": {
"version": "6.10.1",
"resolved": "https://registry.npmjs.org/@codemirror/language/-/language-6.10.1.tgz",
"integrity": "sha512-5GrXzrhq6k+gL5fjkAwt90nYDmjlzTIJV8THnxNFtNKWotMIlzzN+CpqxqwXOECnUdOndmSeWntVrVcv5axWRQ==",
"requires": {
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.23.0",
"@lezer/common": "^1.1.0",
"@lezer/highlight": "^1.0.0",
"@lezer/lr": "^1.0.0",
"style-mod": "^4.0.0"
}
},
"@codemirror/lint": {
"version": "6.7.0",
"resolved": "https://registry.npmjs.org/@codemirror/lint/-/lint-6.7.0.tgz",
"integrity": "sha512-LTLOL2nT41ADNSCCCCw8Q/UmdAFzB23OUYSjsHTdsVaH0XEo+orhuqbDNWzrzodm14w6FOxqxpmy4LF8Lixqjw==",
"requires": {
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0",
"crelt": "^1.0.5"
}
},
"@codemirror/search": {
"version": "6.5.6",
"resolved": "https://registry.npmjs.org/@codemirror/search/-/search-6.5.6.tgz",
"integrity": "sha512-rpMgcsh7o0GuCDUXKPvww+muLA1pDJaFrpq/CCHtpQJYz8xopu4D1hPcKRoDD0YlF8gZaqTNIRa4VRBWyhyy7Q==",
"requires": {
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0",
"crelt": "^1.0.5"
}
},
"@codemirror/state": {
"version": "6.4.1",
"resolved": "https://registry.npmjs.org/@codemirror/state/-/state-6.4.1.tgz",
"integrity": "sha512-QkEyUiLhsJoZkbumGZlswmAhA7CBU02Wrz7zvH4SrcifbsqwlXShVXg65f3v/ts57W3dqyamEriMhij1Z3Zz4A=="
},
"@codemirror/theme-one-dark": {
"version": "6.1.2",
"resolved": "https://registry.npmjs.org/@codemirror/theme-one-dark/-/theme-one-dark-6.1.2.tgz",
"integrity": "sha512-F+sH0X16j/qFLMAfbciKTxVOwkdAS336b7AXTKOZhy8BR3eH/RelsnLgLFINrpST63mmN2OuwUt0W2ndUgYwUA==",
"requires": {
"@codemirror/language": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0",
"@lezer/highlight": "^1.0.0"
}
},
"@codemirror/view": {
"version": "6.26.3",
"resolved": "https://registry.npmjs.org/@codemirror/view/-/view-6.26.3.tgz",
"integrity": "sha512-gmqxkPALZjkgSxIeeweY/wGQXBfwTUaLs8h7OKtSwfbj9Ct3L11lD+u1sS7XHppxFQoMDiMDp07P9f3I2jWOHw==",
"requires": {
"@codemirror/state": "^6.4.0",
"style-mod": "^4.1.0",
"w3c-keyname": "^2.2.4"
}
},
"@discoveryjs/json-ext": {
"version": "0.5.7",
"resolved": "https://registry.npmmirror.com/@discoveryjs/json-ext/-/json-ext-0.5.7.tgz",
@@ -12730,6 +13118,66 @@
"integrity": "sha512-Hcv+nVC0kZnQ3tD9GVu5xSMR4VVYOteQIr/hwFPVEvPdlXqgGEuRjiheChHgdM+JyqdgNcmzZOX/tnl0JOiI7A==",
"dev": true
},
"@lezer/common": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/@lezer/common/-/common-1.2.1.tgz",
"integrity": "sha512-yemX0ZD2xS/73llMZIK6KplkjIjf2EvAHcinDi/TfJ9hS25G0388+ClHt6/3but0oOxinTcQHJLDXh6w1crzFQ=="
},
"@lezer/css": {
"version": "1.1.8",
"resolved": "https://registry.npmjs.org/@lezer/css/-/css-1.1.8.tgz",
"integrity": "sha512-7JhxupKuMBaWQKjQoLtzhGj83DdnZY9MckEOG5+/iLKNK2ZJqKc6hf6uc0HjwCX7Qlok44jBNqZhHKDhEhZYLA==",
"requires": {
"@lezer/common": "^1.2.0",
"@lezer/highlight": "^1.0.0",
"@lezer/lr": "^1.0.0"
}
},
"@lezer/highlight": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/@lezer/highlight/-/highlight-1.2.0.tgz",
"integrity": "sha512-WrS5Mw51sGrpqjlh3d4/fOwpEV2Hd3YOkp9DBt4k8XZQcoTHZFB7sx030A6OcahF4J1nDQAa3jXlTVVYH50IFA==",
"requires": {
"@lezer/common": "^1.0.0"
}
},
"@lezer/html": {
"version": "1.3.9",
"resolved": "https://registry.npmjs.org/@lezer/html/-/html-1.3.9.tgz",
"integrity": "sha512-MXxeCMPyrcemSLGaTQEZx0dBUH0i+RPl8RN5GwMAzo53nTsd/Unc/t5ZxACeQoyPUM5/GkPLRUs2WliOImzkRA==",
"requires": {
"@lezer/common": "^1.2.0",
"@lezer/highlight": "^1.0.0",
"@lezer/lr": "^1.0.0"
}
},
"@lezer/javascript": {
"version": "1.4.15",
"resolved": "https://registry.npmjs.org/@lezer/javascript/-/javascript-1.4.15.tgz",
"integrity": "sha512-B082ZdjI0vo2AgLqD834GlRTE9gwRX8NzHzKq5uDwEnQ9Dq+A/CEhd3nf68tiNA2f9O+8jS1NeSTUYT9IAqcTw==",
"requires": {
"@lezer/common": "^1.2.0",
"@lezer/highlight": "^1.1.3",
"@lezer/lr": "^1.3.0"
}
},
"@lezer/lr": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/@lezer/lr/-/lr-1.4.0.tgz",
"integrity": "sha512-Wst46p51km8gH0ZUmeNrtpRYmdlRHUpN1DQd3GFAyKANi8WVz8c2jHYTf1CVScFaCjQw1iO3ZZdqGDxQPRErTg==",
"requires": {
"@lezer/common": "^1.0.0"
}
},
"@lezer/markdown": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/@lezer/markdown/-/markdown-1.3.0.tgz",
"integrity": "sha512-ErbEQ15eowmJUyT095e9NJc3BI9yZ894fjSDtHftD0InkfUBGgnKSU6dvan9jqsZuNHg2+ag/1oyDRxNsENupQ==",
"requires": {
"@lezer/common": "^1.0.0",
"@lezer/highlight": "^1.0.0"
}
},
"@mdi/font": {
"version": "7.4.47",
"resolved": "https://registry.npmjs.org/@mdi/font/-/font-7.4.47.tgz",
@@ -14490,6 +14938,20 @@
"shallow-clone": "^3.0.0"
}
},
"codemirror": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/codemirror/-/codemirror-6.0.1.tgz",
"integrity": "sha512-J8j+nZ+CdWmIeFIGXEFbFPtpiYacFMDR8GlHK3IyHQJMCaVRfGx9NT+Hxivv1ckLWPvNdZqndbr/7lVhrf/Svg==",
"requires": {
"@codemirror/autocomplete": "^6.0.0",
"@codemirror/commands": "^6.0.0",
"@codemirror/language": "^6.0.0",
"@codemirror/lint": "^6.0.0",
"@codemirror/search": "^6.0.0",
"@codemirror/state": "^6.0.0",
"@codemirror/view": "^6.0.0"
}
},
"color-convert": {
"version": "1.9.3",
"resolved": "https://registry.npmmirror.com/color-convert/-/color-convert-1.9.3.tgz",
@@ -14690,6 +15152,11 @@
"yaml": "^1.10.0"
}
},
"crelt": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/crelt/-/crelt-1.0.6.tgz",
"integrity": "sha512-VQ2MBenTq1fWZUH9DJNGti7kKv6EeAuYr3cLwxUWhIu1baTaXh4Ib5W2CqHVqib4/MqbYGJqiL3Zb8GJZr3l4g=="
},
"cross-spawn": {
"version": "6.0.5",
"resolved": "https://registry.npmmirror.com/cross-spawn/-/cross-spawn-6.0.5.tgz",
@@ -18998,6 +19465,11 @@
"integrity": "sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==",
"dev": true
},
"style-mod": {
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/style-mod/-/style-mod-4.1.2.tgz",
"integrity": "sha512-wnD1HyVqpJUI2+eKZ+eo1UwghftP6yuFheBqqe+bWCotBjC2K1YnteJILRMs3SM4V/0dLEW1SC27MWP5y+mwmw=="
},
"stylehacks": {
"version": "5.1.1",
"resolved": "https://registry.npmmirror.com/stylehacks/-/stylehacks-5.1.1.tgz",
@@ -19402,6 +19874,17 @@
"shelljs": "^0.8.3"
}
},
"vue-codemirror": {
"version": "6.1.1",
"resolved": "https://registry.npmjs.org/vue-codemirror/-/vue-codemirror-6.1.1.tgz",
"integrity": "sha512-rTAYo44owd282yVxKtJtnOi7ERAcXTeviwoPXjIc6K/IQYUsoDkzPvw/JDFtSP6T7Cz/2g3EHaEyeyaQCKoDMg==",
"requires": {
"@codemirror/commands": "6.x",
"@codemirror/language": "6.x",
"@codemirror/state": "6.x",
"@codemirror/view": "6.x"
}
},
"vue-eslint-parser": {
"version": "8.3.0",
"resolved": "https://registry.npmmirror.com/vue-eslint-parser/-/vue-eslint-parser-8.3.0.tgz",
@@ -19550,6 +20033,11 @@
"integrity": "sha512-zpZFZoJE9c8QlHc8s9zowKzMUTjytdzz2PQpZPezVENm0Jp+KBi+KooZGxvj7l+YfeFdKOcSjht7nEptSSMPMg==",
"requires": {}
},
"w3c-keyname": {
"version": "2.2.8",
"resolved": "https://registry.npmjs.org/w3c-keyname/-/w3c-keyname-2.2.8.tgz",
"integrity": "sha512-dpojBhNsCNN7T82Tm7k26A6G9ML3NkhDsnw9n/eoxSRlVBB4CEtIQ/KTCLI2Fwf3ataSXRhYFkQi3SlnFwPvPQ=="
},
"watchpack": {
"version": "2.4.0",
"resolved": "https://registry.npmmirror.com/watchpack/-/watchpack-2.4.0.tgz",

View File

@@ -1,6 +1,6 @@
{
"name": "talemate_frontend",
"version": "0.24.0",
"version": "0.25.0",
"private": true,
"scripts": {
"serve": "vue-cli-service serve",
@@ -8,11 +8,15 @@
"lint": "vue-cli-service lint"
},
"dependencies": {
"@codemirror/lang-markdown": "^6.2.5",
"@codemirror/theme-one-dark": "^6.1.2",
"@mdi/font": "7.4.47",
"codemirror": "^6.0.1",
"core-js": "^3.8.3",
"dot-prop": "^8.0.2",
"roboto-fontface": "*",
"vue": "^3.2.13",
"vue-codemirror": "^6.1.1",
"vuetify": "^3.5.0",
"webfontloader": "^1.0.0"
},

View File

@@ -99,7 +99,7 @@ export default {
type: '',
api_url: '',
model_name: '',
max_token_length: 4096,
max_token_length: 8192,
double_coercion: null,
data: {
has_prompt_template: false,
@@ -156,7 +156,7 @@ export default {
type: 'textgenwebui',
api_url: 'http://localhost:5000',
model_name: '',
max_token_length: 4096,
max_token_length: 8192,
data: {
has_prompt_template: false,
}

View File

@@ -191,6 +191,35 @@
</v-row>
</div>
<!-- GOOGLE API
This adds fields for
gcloud_credentials_path
gcloud_project_id
gcloud_location
-->
<div v-if="applicationPageSelected === 'google_api'">
<v-alert color="white" variant="text" icon="mdi-google-cloud" density="compact">
<v-alert-title>Google Cloud</v-alert-title>
<div class="text-grey">
In order to use Google Cloud services such as the Vertex AI API for Gemini inference, you will need to set up a Google Cloud project and credentials.
Please follow the instructions <a href="https://cloud.google.com/vertex-ai/docs/start/client-libraries">here</a> and then fill in the fields below.
</div>
</v-alert>
<v-divider class="mb-2"></v-divider>
<v-row>
<v-col cols="12">
<v-text-field v-model="app_config.google.gcloud_credentials_path"
label="Google Cloud Credentials Path" messages="This should be a path to the credentials JSON file you downloaded through the setup above. This path needs to be accessible by the computer that is running the Talemate backend. If you are running Talemate on a server, you can upload the file to the server and the path should be the path to the file on the server."></v-text-field>
</v-col>
<v-col cols="6">
<v-combobox v-model="app_config.google.gcloud_location"
label="Google Cloud Location" :items="googleCloudLocations" messages="Pick something close to you" :return-object="false"></v-combobox>
</v-col>
</v-row>
</div>
<!-- ELEVENLABS API -->
<div v-if="applicationPageSelected === 'elevenlabs_api'">
<v-alert color="white" variant="text" icon="mdi-api" density="compact">
@@ -315,6 +344,7 @@ export default {
{title: 'Anthropic', icon: 'mdi-api', value: 'anthropic_api'},
{title: 'Cohere', icon: 'mdi-api', value: 'cohere_api'},
{title: 'groq', icon: 'mdi-api', value: 'groq_api'},
{title: 'Google Cloud', icon: 'mdi-google-cloud', value: 'google_api'},
{title: 'ElevenLabs', icon: 'mdi-api', value: 'elevenlabs_api'},
{title: 'RunPod', icon: 'mdi-api', value: 'runpod_api'},
],
@@ -325,6 +355,40 @@ export default {
gamePageSelected: 'general',
applicationPageSelected: 'openai_api',
creatorPageSelected: 'content_context',
googleCloudLocations: [
{"value": 'us-central1', "title": 'US Central - Iowa'},
{"value": 'us-west4', "title": 'US West 4 - Las Vegas'},
{"value": 'us-east1', "title": 'US East 1 - South Carolina'},
{"value": 'us-east4', "title": 'US East 4 - Northern Virginia'},
{"value": 'us-west1', "title": 'US West 1 - Oregon'},
{"value": 'northamerica-northeast1', "title": 'North America Northeast 1 - Montreal'},
{"value": 'southamerica-east1', "title": 'South America East 1 - Sao Paulo'},
{"value": 'europe-west1', "title": 'Europe West 1 - Belgium'},
{"value": 'europe-north1', "title": 'Europe North 1 - Finland'},
{"value": 'europe-west3', "title": 'Europe West 3 - Frankfurt'},
{"value": 'europe-west2', "title": 'Europe West 2 - London'},
{"value": 'europe-southwest1', "title": 'Europe Southwest 1 - Zurich'},
{"value": 'europe-west8', "title": 'Europe West 8 - Netherlands'},
{"value": 'europe-west4', "title": 'Europe West 4 - London'},
{"value": 'europe-west9', "title": 'Europe West 9 - Stockholm'},
{"value": 'europe-central2', "title": 'Europe Central 2 - Warsaw'},
{"value": 'europe-west6', "title": 'Europe West 6 - Zurich'},
{"value": 'asia-east1', "title": 'Asia East 1 - Taiwan'},
{"value": 'asia-east2', "title": 'Asia East 2 - Hong Kong'},
{"value": 'asia-south1', "title": 'Asia South 1 - Mumbai'},
{"value": 'asia-northeast1', "title": 'Asia Northeast 1 - Tokyo'},
{"value": 'asia-northeast3', "title": 'Asia Northeast 3 - Seoul'},
{"value": 'asia-southeast1', "title": 'Asia Southeast 1 - Singapore'},
{"value": 'asia-southeast2', "title": 'Asia Southeast 2 - Jakarta'},
{"value": 'australia-southeast1', "title": 'Australia Southeast 1 - Sydney'},
{"value": 'australia-southeast2', "title": 'Australia Southeast 2 - Melbourne'},
{"value": 'me-west1', "title": 'Middle East West 1 - Dammam'},
{"value": 'asia-northeast2', "title": 'Asia Northeast 2 - Osaka'},
{"value": 'asia-northeast3', "title": 'Asia Northeast 3 - Seoul'},
{"value": 'asia-south1', "title": 'Asia South 1 - Mumbai'},
{"value": 'asia-southeast1', "title": 'Asia Southeast 1 - Singapore'},
{"value": 'asia-southeast2', "title": 'Asia Southeast 2 - Jakarta'}
].sort((a, b) => a.title.localeCompare(b.title))
}
},
inject: ['getWebsocket', 'registerMessageHandler', 'setWaitingForInput', 'requestSceneAssets', 'requestAppConfig'],

View File

@@ -12,7 +12,21 @@
<div class="character-avatar">
<!-- Placeholder for character avatar -->
</div>
<v-textarea ref="textarea" v-if="editing" v-model="editing_text" @keydown.enter.prevent="submitEdit()" @blur="cancelEdit()" @keydown.escape.prevent="cancelEdit()">
<v-textarea
ref="textarea"
v-if="editing"
v-model="editing_text"
auto-grow
:hint="autocompleteInfoMessage(autocompleting) + ', Shift+Enter for newline'"
:loading="autocompleting"
:disabled="autocompleting"
@keydown.enter.prevent="handleEnter"
@blur="autocompleting ? null : cancelEdit()"
@keydown.escape.prevent="cancelEdit()"
>
</v-textarea>
<div v-else class="character-text" @dblclick="startEdit()">
<span v-for="(part, index) in parts" :key="index" :class="{ highlight: part.isNarrative }">
@@ -45,7 +59,7 @@
<script>
export default {
props: ['character', 'text', 'color', 'message_id'],
inject: ['requestDeleteMessage', 'getWebsocket', 'createPin', 'fixMessageContinuityErrors'],
inject: ['requestDeleteMessage', 'getWebsocket', 'createPin', 'fixMessageContinuityErrors', 'autocompleteRequest', 'autocompleteInfoMessage'],
computed: {
parts() {
const parts = [];
@@ -68,11 +82,42 @@ export default {
data() {
return {
editing: false,
autocompleting: false,
editing_text: "",
hovered: false,
}
},
methods: {
handleEnter(event) {
// if ctrl -> autocomplete
// else -> submit
// shift -> newline
if (event.ctrlKey) {
this.autocompleteEdit();
} else if (event.shiftKey) {
this.editing_text += "\n";
} else {
this.submitEdit();
}
},
autocompleteEdit() {
this.autocompleting = true;
this.autocompleteRequest(
{
partial: this.editing_text,
context: "dialogue:npc",
character: this.character,
},
(completion) => {
this.editing_text += completion;
this.autocompleting = false;
},
this.$refs.textarea
)
},
cancelEdit() {
console.log('cancelEdit', this.message_id);
this.editing = false;

View File

@@ -200,7 +200,7 @@ export default {
if (defaults) {
this.client.model = defaults.model || '';
this.client.api_url = defaults.api_url || '';
this.client.max_token_length = defaults.max_token_length || 4096;
this.client.max_token_length = defaults.max_token_length || 8192;
this.client.double_coercion = defaults.double_coercion || null;
// loop and build name from prefix, checking against current clients
let name = this.clientTypes[this.client.type].name_prefix;

View File

@@ -99,6 +99,10 @@ export default {
time: parseInt(data.data.time),
num: this.total++,
generation_parameters: data.data.generation_parameters,
// immutable copy of original generation parameters
original_generation_parameters: JSON.parse(JSON.stringify(data.data.generation_parameters)),
original_prompt: data.data.prompt,
original_response: data.data.response,
})
while(this.prompts.length > this.max_prompts) {

View File

@@ -27,31 +27,42 @@
</v-list-item>
<v-list-subheader>
<v-icon>mdi-details</v-icon>Parameters
<v-btn size="x-small" variant="text" v-if="promptHasDirtyParams" color="orange" @click.stop="resetParams" prepend-icon="mdi-restore">Reset</v-btn>
</v-list-subheader>
<v-list-item v-for="(value, name) in filteredParameters" :key="name">
<v-list-item-subtitle color="grey-lighten-1">{{ name }}</v-list-item-subtitle>
<p class="text-caption text-grey">
{{ value }}
</p>
<v-list-item>
<v-text-field class="mt-1" v-for="(value, name) in filteredParameters" :key="name" v-model="prompt.generation_parameters[name]" :label="name" density="compact" variant="plain" placeholder="Value" :type="parameterType(name)">
<template v-slot:prepend-inner>
<v-icon class="mt-1" size="x-small">mdi-pencil</v-icon>
</template>
</v-text-field>
</v-list-item>
</v-list>
</v-col>
<v-col :cols="details ? 5 : 6">
<v-col :cols="details ? 6 : 7">
<v-card flat>
<v-card-title>Prompt</v-card-title>
<v-card-title>Prompt
<v-btn size="x-small" variant="text" v-if="promptHasDirtyPrompt" color="orange" @click.stop="resetPrompt" prepend-icon="mdi-restore">Reset</v-btn>
</v-card-title>
<v-card-text>
<!--
<div class="prompt-view">{{ prompt.prompt }}</div>
-->
<v-textarea :disabled="busy" density="compact" v-model="prompt.prompt" rows="10" auto-grow max-rows="22"></v-textarea>
-->
<Codemirror
v-model="prompt.prompt"
:extensions="extensions"
:style="promptEditorStyle"
></Codemirror>
</v-card-text>
</v-card>
</v-col>
<v-col :cols="details ? 5 : 6">
<v-col :cols="details ? 4 : 5">
<v-card elevation="10" color="grey-darken-3">
<v-card-title>Response
<v-progress-circular class="ml-1 mr-3" size="20" v-if="busy" indeterminate="disable-shrink"
color="primary"></v-progress-circular>
color="primary"></v-progress-circular>
<v-btn size="x-small" variant="text" v-else-if="promptHasDirtyResponse" color="orange" @click.stop="resetResponse" prepend-icon="mdi-restore">Reset</v-btn>
</v-card-title>
<v-card-text style="max-height:600px; overflow-y:auto;" :class="busy ? 'text-grey' : 'text-white'">
<div class="prompt-view">{{ prompt.response }}</div>
@@ -75,8 +86,16 @@
</template>
<script>
import { Codemirror } from 'vue-codemirror'
import { markdown } from '@codemirror/lang-markdown'
import { oneDark } from '@codemirror/theme-one-dark'
import { EditorView } from '@codemirror/view'
export default {
name: 'DebugToolPromptView',
components: {
Codemirror,
},
data() {
return {
prompt: null,
@@ -102,6 +121,17 @@ export default {
return filtered;
},
promptHasDirtyParams() {
// compare prompt.generation_parameters with prompt.original_generation_parameters
// use json string comparison
return JSON.stringify(this.prompt.generation_parameters) !== JSON.stringify(this.prompt.original_generation_parameters);
},
promptHasDirtyPrompt() {
return this.prompt.prompt !== this.prompt.original_prompt;
},
promptHasDirtyResponse() {
return this.prompt.response !== this.prompt.original_response;
},
},
inject: [
"getWebsocket",
@@ -109,6 +139,30 @@ export default {
],
methods: {
parameterType(name) {
// to vuetify text-field type
const typ = typeof this.prompt.original_generation_parameters[name];
if(typ === 'number') {
return 'number';
} else if(typ === 'boolean') {
return 'boolean';
} else {
return 'text';
}
},
resetParams() {
this.prompt.generation_parameters = JSON.parse(JSON.stringify(this.prompt.original_generation_parameters));
},
resetPrompt() {
this.prompt.prompt = this.prompt.original_prompt;
},
resetResponse() {
this.prompt.response = this.prompt.original_response;
},
toggleDetailsLabel() {
return this.details ? 'Hide Details' : 'Show Details';
},
@@ -185,6 +239,23 @@ export default {
created() {
this.registerMessageHandler(this.handleMessage);
},
setup() {
const extensions = [
markdown(),
oneDark,
EditorView.lineWrapping
];
const promptEditorStyle = {
maxHeight: "600px"
}
return {
extensions,
promptEditorStyle,
}
}
}
</script>

View File

@@ -19,10 +19,10 @@
<div class="tile" v-for="(scene, index) in recentScenes()" :key="index">
<v-card density="compact" elevation="7" @click="loadScene(scene)" color="primary" variant="outlined">
<v-card-title>
{{ scene.name }}
{{ filenameToTitle(scene.filename) }}
</v-card-title>
<v-card-subtitle>
{{ scene.filename }}
{{ scene.name }}
</v-card-subtitle>
<v-card-text>
<div class="cover-image-placeholder">
@@ -60,6 +60,14 @@ export default {
},
methods: {
filenameToTitle(filename) {
// remove .json extension, replace _ with space, and capitalize first letter of each word
filename = filename.replace('.json', '');
return filename.replace(/_/g, ' ').replace(/\b\w/g, l => l.toUpperCase());
},
hasRecentScenes() {
return this.config != null && this.config.recent_scenes != null && this.config.recent_scenes.scenes != null && this.config.recent_scenes.scenes.length > 0;
},
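`filenameToTitle` turns e.g. `infinity_quest.json` into `Infinity Quest`. An equivalent transformation in Python, for reference:

```python
import re

def filename_to_title(filename: str) -> str:
    # strip the .json extension, replace underscores, capitalize each word
    name = filename.removesuffix(".json").replace("_", " ")
    return re.sub(r"\b\w", lambda m: m.group(0).upper(), name)

assert filename_to_title("infinity_quest.json") == "Infinity Quest"
```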

View File

@@ -6,7 +6,20 @@
</v-btn>
</template>
<div class="narrator-message">
<v-textarea ref="textarea" v-if="editing" v-model="editing_text" @keydown.enter.prevent="submitEdit()" @blur="cancelEdit()" @keydown.escape.prevent="cancelEdit()">
<v-textarea
ref="textarea"
v-if="editing"
v-model="editing_text"
auto-grow
:hint="autocompleteInfoMessage(autocompleting) + ', Shift+Enter for newline'"
:loading="autocompleting"
:disabled="autocompleting"
@keydown.enter.prevent="handleEnter"
@blur="autocompleting ? null : cancelEdit()"
@keydown.escape.prevent="cancelEdit()">
</v-textarea>
<div v-else class="narrator-text" @dblclick="startEdit()">
<span v-for="(part, index) in parts" :key="index" :class="{ highlight: part.isNarrative }">
@@ -36,7 +49,7 @@
<script>
export default {
props: ['text', 'message_id'],
inject: ['requestDeleteMessage', 'getWebsocket', 'createPin'],
inject: ['requestDeleteMessage', 'getWebsocket', 'createPin', 'fixMessageContinuityErrors', 'autocompleteRequest', 'autocompleteInfoMessage'],
computed: {
parts() {
const parts = [];
@@ -60,11 +73,41 @@ export default {
data() {
return {
editing: false,
autocompleting: false,
editing_text: "",
hovered: false,
}
},
methods: {
handleEnter(event) {
// if ctrl -> autocomplete
// else -> submit
// shift -> newline
if (event.ctrlKey) {
this.autocompleteEdit();
} else if (event.shiftKey) {
this.editing_text += "\n";
} else {
this.submitEdit();
}
},
autocompleteEdit() {
this.autocompleting = true;
this.autocompleteRequest(
{
partial: this.editing_text,
context: "narrative:continue",
},
(completion) => {
this.editing_text += completion;
this.autocompleting = false;
},
this.$refs.textarea
)
},
cancelEdit() {
console.log('cancelEdit', this.message_id);
this.editing = false;

View File

@@ -6,7 +6,10 @@
</v-card-title>
<v-card-text style="max-height:600px; overflow-y:scroll;">
<v-list-item v-for="(entry, index) in history" :key="index" class="text-body-2">
{{ entry.ts }} {{ entry.text }}
<v-list-item-subtitle>{{ entry.ts }}</v-list-item-subtitle>
<div class="history-entry">
{{ entry.text }}
</div>
<v-divider class="mt-1"></v-divider>
</v-list-item>
</v-card-text>
@@ -63,4 +66,8 @@ export default {
}
</script>
<style scoped></style>
<style scoped>
.history-entry {
white-space: pre-wrap;
}
</style>

View File

@@ -140,13 +140,17 @@
<CharacterSheet ref="characterSheet" />
<SceneHistory ref="sceneHistory" />
<v-text-field
<v-textarea
v-model="messageInput"
:label="inputHint"
rows="1"
auto-grow
outlined
ref="messageInput"
@keyup.enter="sendMessage"
@keydown.enter.prevent="sendMessage"
hint="Ctrl+Enter to autocomplete, Shift+Enter for newline"
:disabled="isInputDisabled()"
:loading="autocompleting"
:prepend-inner-icon="messageInputIcon()"
:color="messageInputColor()">
<template v-slot:append>
@@ -155,7 +159,7 @@
<v-icon v-else>mdi-skip-next</v-icon>
</v-btn>
</template>
</v-text-field>
</v-textarea>
</div>
<IntroView v-else
@@ -244,6 +248,10 @@ export default {
messageHandlers: [],
scene: {},
appConfig: {},
autocompleting: false,
autocompletePartialInput: "",
autocompleteCallback: null,
autocompleteFocusElement: null,
}
},
mounted() {
@@ -281,6 +289,8 @@ export default {
getTrackedWorldState: (question) => this.$refs.worldState.trackedWorldState(question),
getPlayerCharacterName: () => this.getPlayerCharacterName(),
formatWorldStateTemplateString: (templateString, characterName) => this.formatWorldStateTemplateString(templateString, characterName),
autocompleteRequest: (partialInput, callback, focus_element) => this.autocompleteRequest(partialInput, callback, focus_element),
autocompleteInfoMessage: (active) => this.autocompleteInfoMessage(active),
};
},
methods: {
@@ -383,6 +393,9 @@ export default {
if (data.type === 'autocomplete_suggestion') {
if(!this.autocompleteCallback)
return;
const completion = data.message;
// append completion to messageInput, add a space if
@@ -391,11 +404,23 @@ export default {
const completionStartsWithSentenceEnd = completion.startsWith('!') || completion.startsWith('.') || completion.startsWith('?') || completion.startsWith(')') || completion.startsWith(']') || completion.startsWith('}') || completion.startsWith('"') || completion.startsWith("'") || completion.startsWith("*") || completion.startsWith(",")
if (this.messageInput.endsWith(' ') || completion.startsWith(' ') || completionStartsWithSentenceEnd) {
this.messageInput += completion;
if (this.autocompletePartialInput.endsWith(' ') || completion.startsWith(' ') || completionStartsWithSentenceEnd) {
this.autocompleteCallback(completion);
} else {
this.messageInput += ' ' + completion;
this.autocompleteCallback(' ' + completion);
}
if (this.autocompleteFocusElement) {
let focus_element = this.autocompleteFocusElement;
setTimeout(() => {
focus_element.focus();
}, 200);
this.autocompleteFocusElement = null;
}
this.autocompleteCallback = null;
this.autocompletePartialInput = "";
return;
}
if (data.type === 'request_input') {
@@ -439,7 +464,26 @@ export default {
// if ctrl+enter is pressed, request autocomplete
if (event.ctrlKey && event.key === 'Enter') {
this.websocket.send(JSON.stringify({ type: 'interact', text: `!acdlg: ${this.messageInput}` }));
this.autocompleting = true
this.inputDisabled = true;
this.autocompleteRequest(
{
partial: this.messageInput,
context: "dialogue:player"
},
(completion) => {
this.inputDisabled = false
this.autocompleting = false
this.messageInput += completion;
},
this.$refs.messageInput
);
return;
}
// if shift+enter is pressed, add a newline
if (event.shiftKey && event.key === 'Enter') {
this.messageInput += "\n";
return;
}
@@ -450,6 +494,26 @@ export default {
this.waitingForInput = false;
}
},
autocompleteRequest(param, callback, focus_element) {
this.autocompleteCallback = callback;
this.autocompleteFocusElement = focus_element;
this.autocompletePartialInput = param.partial;
const param_copy = JSON.parse(JSON.stringify(param));
param_copy.type = "assistant";
param_copy.action = "autocomplete";
this.websocket.send(JSON.stringify(param_copy));
//this.websocket.send(JSON.stringify({ type: 'interact', text: `!autocomplete:${JSON.stringify(param)}` }));
},
autocompleteInfoMessage(active) {
return active ? 'Generating ...' : "Ctrl+Enter to autocomplete";
},
requestAppConfig() {
this.websocket.send(JSON.stringify({ type: 'request_app_config' }));
},
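On the wire, `autocompleteRequest()` now sends a plain assistant-plugin message instead of the old `!acdlg:` command. A sketch of the exchange (field values are illustrative; the context strings appear elsewhere in this commit):

```python
# Sent by the frontend (param_copy, after type/action are injected):
request = {
    "partial": "She opened the door and",
    "context": "dialogue:player",  # also used: "dialogue:npc", "narrative:continue"
    "type": "assistant",
    "action": "autocomplete",
}

# Answered by the backend; the registered callback appends `message`
# to the partial input, inserting a space when needed:
response = {"type": "autocomplete_suggestion", "message": "stepped into the hallway."}
```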

View File

@@ -107,11 +107,16 @@
@generate="content => setAndUpdateCharacterDescription(content)"
/>
<v-textarea rows="5" auto-grow v-model="characterDetails.description"
<v-textarea ref="characterDescription" rows="5" auto-grow v-model="characterDetails.description"
:color="characterDescriptionDirty ? 'info' : ''"
:disabled="characterDescriptionBusy"
:loading="characterDescriptionBusy"
@keyup.ctrl.enter.stop="autocompleteRequestCharacterDescription"
@update:model-value="queueUpdateCharacterDescription"
label="Description"
hint="A short description of the character."></v-textarea>
:hint="'A short description of the character. '+autocompleteInfoMessage(characterDescriptionBusy)"></v-textarea>
</div>
<!-- CHARACTER ATTRIBUTES -->
@@ -166,11 +171,14 @@
:label="selectedCharacterAttribute"
:color="characterAttributeDirty ? 'info' : ''"
:disabled="characterAttributeBusy"
:loading="characterAttributeBusy"
:hint="autocompleteInfoMessage(characterAttributeBusy)"
@keyup.ctrl.enter.stop="autocompleteRequestCharacterAttribute"
@update:modelValue="queueUpdateCharacterAttribute(selectedCharacterAttribute)"
v-model="characterDetails.base_attributes[selectedCharacterAttribute]">
</v-textarea>
</div>
@@ -253,6 +261,13 @@
<v-textarea rows="5" max-rows="10" auto-grow
ref="characterDetail"
:color="characterDetailDirty ? 'info' : ''"
:disabled="characterDetailBusy"
:loading="characterDetailBusy"
:hint="autocompleteInfoMessage(characterDetailBusy)"
@keyup.ctrl.enter.stop="autocompleteRequestCharacterDetail"
@update:modelValue="queueUpdateCharacterDetail(selectedCharacterDetail)"
:label="selectedCharacterDetail"
v-model="characterDetails.details[selectedCharacterDetail]">
@@ -888,6 +903,10 @@ export default {
characterDescriptionDirty: false,
characterStateReinforcerDirty: false,
characterAttributeBusy: false,
characterDetailBusy: false,
characterDescriptionBusy: false,
characterAttributeUpdateTimeout: null,
characterDetailUpdateTimeout: null,
characterDescriptionUpdateTimeout: null,
@@ -1003,6 +1022,8 @@ export default {
'openCharacterSheet',
'characterSheet',
'isInputDisabled',
'autocompleteRequest',
'autocompleteInfoMessage',
],
methods: {
show(tab, sub1, sub2, sub3) {
@@ -1083,6 +1104,8 @@ export default {
this.removePinConfirm = false;
this.deferedNavigation = null;
this.isBusy = false;
this.characterAttributeBusy = false;
this.characterDetailBusy = false;
},
exit() {
this.dialog = false;
@@ -1645,7 +1668,33 @@ export default {
this.characterList = message.data;
}
else if (message.action === 'character_details') {
// if we are currently editing an attribute, override it in the incoming data
// this fixes the annoying rubberbanding when editing an attribute
if (this.selectedCharacterAttribute) {
message.data.base_attributes[this.selectedCharacterAttribute] = this.characterDetails.base_attributes[this.selectedCharacterAttribute];
}
// if we are currently editing a detail, override it in the incoming data
// this fixes the annoying rubberbanding when editing a detail
if (this.selectedCharacterDetail) {
message.data.details[this.selectedCharacterDetail] = this.characterDetails.details[this.selectedCharacterDetail];
}
// if we are currently editing a description, override it in the incoming data
// this fixes the annoying rubberbanding when editing a description
if (this.characterDescriptionDirty) {
message.data.description = this.characterDetails.description;
}
// if we are currently editing a state reinforcement, override it in the incoming data
// this fixes the annoying rubberbanding when editing a state reinforcement
if (this.characterStateReinforcerDirty) {
message.data.reinforcements[this.selectedCharacterStateReinforcer] = this.characterDetails.reinforcements[this.selectedCharacterStateReinforcer];
}
this.characterDetails = message.data;
// select first attribute
if (!this.selectedCharacterAttribute)
this.selectedCharacterAttribute = Object.keys(this.characterDetails.base_attributes)[0];
@@ -1712,6 +1761,46 @@ export default {
}
},
// autocomplete handlers
autocompleteRequestCharacterAttribute() {
this.characterAttributeBusy = true;
this.autocompleteRequest({
partial: this.characterDetails.base_attributes[this.selectedCharacterAttribute],
context: `character attribute:${this.selectedCharacterAttribute}`,
character: this.characterDetails.name
}, (completion) => {
this.characterDetails.base_attributes[this.selectedCharacterAttribute] += completion;
this.characterAttributeBusy = false;
}, this.$refs.characterAttribute);
},
autocompleteRequestCharacterDetail() {
this.characterDetailBusy = true;
this.autocompleteRequest({
partial: this.characterDetails.details[this.selectedCharacterDetail],
context: `character detail:${this.selectedCharacterDetail}`,
character: this.characterDetails.name
}, (completion) => {
this.characterDetails.details[this.selectedCharacterDetail] += completion;
this.characterDetailBusy = false;
}, this.$refs.characterDetail);
},
autocompleteRequestCharacterDescription() {
this.characterDescriptionBusy = true;
this.autocompleteRequest({
partial: this.characterDetails.description,
context: `character detail:description`,
character: this.characterDetails.name
}, (completion) => {
this.characterDetails.description += completion;
this.characterDescriptionBusy = false;
}, this.$refs.characterDescription);
},
},
created() {
this.registerMessageHandler(this.handleMessage);

View File

@@ -14,7 +14,17 @@
<v-col cols="9">
<div v-if="tab == 'instructions'">
<v-sheet class="text-right">
<v-spacer></v-spacer>
<v-tooltip class="pre-wrap" :text="tooltipText" max-width="250" >
<template v-slot:activator="{ props }">
<v-btn v-bind="props" color="primary" @click.stop="generateCharacterDialogueInstructions" variant="text" size="x-small" prepend-icon="mdi-auto-fix">Generate</v-btn>
</template>
</v-tooltip>
</v-sheet>
<v-textarea
:loading="dialogueInstructionsBusy"
:disabled="dialogueInstructionsBusy"
placeholder="speak less formally, use more contractions, and be more casual."
v-model="dialogueInstructions" label="Acting Instructions"
:color="dialogueInstructionsDirty ? 'primary' : null"
@@ -78,6 +88,7 @@ export default {
dialogueExample: "",
dialogueInstructions: null,
dialogueInstructionsDirty: false,
dialogueInstructionsBusy: false,
updateCharacterActorTimeout: null,
}
},
@@ -86,6 +97,9 @@ export default {
return this.dialogueExamples.map((example) => {
return example.replace(this.character.name + ': ', '');
});
},
tooltipText() {
return `Automatically generate dialogue instructions for ${this.character.name}, based on their attributes and description`;
}
},
props: {
@@ -111,11 +125,24 @@ export default {
dialogue_examples: this.dialogueExamples,
}));
},
generateCharacterDialogueInstructions() {
this.dialogueInstructionsBusy = true;
this.getWebsocket().send(JSON.stringify({
type: "world_state_manager",
action: "generate_character_dialogue_instructions",
name: this.character.name,
}));
},
handleMessage(data) {
if(data.type === 'world_state_manager') {
console.log("WORLD STATE MANAGER", data);
if(data.action === 'character_actor_updated') {
this.dialogueInstructionsDirty = false;
} else if (data.action === 'character_dialogue_instructions_generated') {
this.dialogueInstructions = data.data.instructions;
this.dialogueInstructionsBusy = false;
}
}
},