Compare commits

35 commits (SHA1):

95a17197ba, d09f9f8ac4, de16feeed5, cdcc804ffa, 9a2bbd78a4, ddfbd6891b, 143dd47e02, cc7cb773d1, 02c88f75a1, 419371e0fb, 6e847bf283, ceedd3019f, a28cf2a029, 60cb271e30, 1874234d2c, ef99539e69, 39bd02722d, f0b627b900, 95ae00e01f, 83027b3a0f, 27eba3bd63, ba64050eab, 199ffd1095, 88b9fcb8bb, 2f5944bc09, abdfb1abbf, 2f07248211, 9ae6fc822b, 5094359c4e, 28801b54bf, 4d69f0e837, d91b3f8042, 03a0ab2fcf, d860d62972, add4893939
**.github/workflows/ci.yml** (vendored, new file, +30)

@@ -0,0 +1,30 @@

```yaml
name: ci
on:
  push:
    branches:
      - master
      - main
      - prep-0.26.0
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure Git Credentials
        run: |
          git config user.name github-actions[bot]
          git config user.email 41898282+github-actions[bot]@users.noreply.github.com
      - uses: actions/setup-python@v5
        with:
          python-version: 3.x
      - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
      - uses: actions/cache@v4
        with:
          key: mkdocs-material-${{ env.cache_id }}
          path: .cache
          restore-keys: |
            mkdocs-material-
      - run: pip install mkdocs-material mkdocs-awesome-pages-plugin mkdocs-glightbox
      - run: mkdocs gh-deploy --force
```
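The workflow above keys its mkdocs cache on the ISO week number, so the cache rotates weekly. A quick sketch of what that `cache_id` expression evaluates to (the exact value depends on the current date, so none is shown):

```shell
# Same expression the workflow writes into $GITHUB_ENV (GNU date):
cache_id="$(date --utc '+%V')"   # ISO week number, zero-padded, "01".."53"
echo "mkdocs-material-${cache_id}"
```

Because `restore-keys` falls back to the `mkdocs-material-` prefix, a new week starts from last week's cache instead of from scratch.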
**.gitignore** (vendored, +2)

@@ -9,6 +9,7 @@ talemate_env
chroma
config.yaml
templates/llm-prompt/user/*.jinja2
templates/world-state/*.yaml
scenes/
!scenes/infinity-quest-dynamic-scenario/
!scenes/infinity-quest-dynamic-scenario/assets/
@@ -16,3 +17,4 @@ scenes/
!scenes/infinity-quest-dynamic-scenario/infinity-quest.json
!scenes/infinity-quest/assets/
!scenes/infinity-quest/infinity-quest.json
tts_voice_samples/*.wav
**Dockerfile.backend** (new file, +25)

@@ -0,0 +1,25 @@

```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.11-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY ./src /app/src

# Copy poetry files
COPY pyproject.toml /app/
# If there's a poetry lock file, include the following line
COPY poetry.lock /app/

# Install poetry
RUN pip install poetry

# Install dependencies
RUN poetry install --no-dev

# Make port 5050 available to the world outside this container
EXPOSE 5050

# Run backend server
CMD ["poetry", "run", "python", "src/talemate/server/run.py", "runserver", "--host", "0.0.0.0", "--port", "5050"]
```
**Dockerfile.frontend** (new file, +23)

@@ -0,0 +1,23 @@

```dockerfile
# Use an official node runtime as a parent image
FROM node:20

# Make sure we are in a development environment (this isn't a production ready Dockerfile)
ENV NODE_ENV=development

# Echo that this isn't a production ready Dockerfile
RUN echo "This Dockerfile is not production ready. It is intended for development purposes only."

# Set the working directory in the container
WORKDIR /app

# Copy the frontend directory contents into the container at /app
COPY ./talemate_frontend /app

# Install all dependencies
RUN npm install

# Make port 8080 available to the world outside this container
EXPOSE 8080

# Run frontend server
CMD ["npm", "run", "serve"]
```
**README.md** (197 changes)

@@ -1,176 +1,43 @@

# Talemate

Allows you to play roleplay scenarios with large language models.
Roleplay with AI with a focus on strong narration and consistent world and game state tracking.

*(screenshot grid)*

> :warning: **It does not run any large language models itself but relies on existing APIs. Currently supports OpenAI, text-generation-webui and LMStudio. 0.18.0 also adds support for generic OpenAI api implementations, but generation quality on that will vary.**

Supported APIs:

- [OpenAI](https://platform.openai.com/overview)
- [Anthropic](https://www.anthropic.com/)
- [mistral.ai](https://mistral.ai/)
- [Cohere](https://www.cohere.com/)
- [Groq](https://www.groq.com/)
- [Google Gemini](https://console.cloud.google.com/)

This means you need to either have:

- an [OpenAI](https://platform.openai.com/overview) API key
- a local (or remote via RunPod) LLM inference setup via:
    - [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
    - [LMStudio](https://lmstudio.ai/)
    - any other OpenAI API implementation that implements the v1/completions endpoint
        - tested llamacpp with the `api_like_OAI.py` wrapper
        - let me know if you have tested any other implementations and whether they failed, worked, or landed somewhere in between

Supported self-hosted APIs:

- [KoboldCpp](https://koboldai.org/cpp) ([Local](https://koboldai.org/cpp), [Runpod](https://koboldai.org/runpodcpp), [VastAI](https://koboldai.org/vastcpp); also includes image generation support)
- [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (local or with RunPod support)
- [LMStudio](https://lmstudio.ai/)
- [TabbyAPI](https://github.com/theroyallab/tabbyAPI/)

Generic OpenAI API implementations (tested and confirmed working):

- [DeepInfra](https://deepinfra.com/)
- [llamacpp](https://github.com/ggerganov/llama.cpp) with the `api_like_OAI.py` wrapper
- let me know if you have tested any other implementations and whether they failed, worked, or landed somewhere in between

## Current features

- responsive modern UI
- agents
    - conversation: handles character dialogue
    - narration: handles narrative exposition
    - summarization: handles summarization to compress context while maintaining history
    - director: can be used to direct the story / characters
    - editor: improves AI responses (very hit and miss at the moment)
    - world state: generates world snapshot and handles passage of time (objects and characters)
    - creator: character / scenario creator
    - tts: text to speech via ElevenLabs, Coqui Studio, Coqui local
- multi-client support (agents can be connected to separate APIs)
- long-term memory
    - chromadb integration
    - passage of time
- narrative world state
    - automatically keep track of and reinforce selected character and world truths / states
- narrative tools
- creative tools
    - manage multiple NPCs
    - AI-backed character creation with template support (jinja2)
    - AI-backed scenario creation
- context management
    - manage character details and attributes
    - manage world information / past events
    - pin important information to the context (manually or conditionally through AI)
- runpod integration
- overridable templates for all prompts (jinja2)

## Core Features

- Multiple agents for dialogue, narration, summarization, direction, editing, world state management, character/scenario creation, text-to-speech, and visual generation
    - supports per-agent API selection
- Long-term memory and passage-of-time tracking
- Narrative world state management to reinforce character and world truths
- Creative tools for managing NPCs, plus AI-assisted character and scenario creation with template support
- Context management for character details, world information, past events, and pinned information
- Customizable templates for all prompts using Jinja2
- Modern, responsive UI

## Planned features

Kinda making it up as I go along, but I want to lean more into gameplay through AI, keeping track of game states, moving away from simply roleplaying towards a more game-ified experience.

In no particular order:

- Extension support
    - modular agents and clients
- Improved world state
- Dynamic player choice generation
- Better creative tools
    - node based scenario / character creation
- Improved and consistent long term memory and accurate current state of the world
- Improved director agent
    - Right now this doesn't really work well on anything but GPT-4 (and even there it's debatable). It tends to steer the story in a way that introduces pacing issues. It needs a model that is creative but also reasons really well, I think.
- Gameplay loop governed by AI
    - objectives
    - quests
    - win / lose conditions
- stable-diffusion client for in-place visual generation

## Documentation

# Quickstart

## Installation

Post [here](https://github.com/vegu-ai/talemate/issues/17) if you run into problems during installation.

There is also a [troubleshooting guide](docs/troubleshoot.md) that might help.

### Windows

1. Download and install Python 3.10 or Python 3.11 from the [official Python website](https://www.python.org/downloads/windows/). :warning: Python 3.12 is currently not supported.
1. Download and install Node.js v20 from the [official Node.js website](https://nodejs.org/en/download/). This will also install npm. :warning: v21 is currently not supported.
1. Download the Talemate project to your local machine from [the Releases page](https://github.com/vegu-ai/talemate/releases).
1. Unpack the download and run `install.bat` by double clicking it. This will set up the project on your local machine.
1. Once the installation is complete, you can start the backend and frontend servers by running `start.bat`.
1. Navigate your browser to http://localhost:8080

### Linux

`python 3.10` or `python 3.11` is required. :warning: `python 3.12` is not supported yet.

`nodejs v19 or v20` is required. :warning: `v21` is not supported yet.

1. `git clone git@github.com:vegu-ai/talemate`
1. `cd talemate`
1. `source install.sh`
1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050`.
1. Open a new terminal, navigate to the `talemate_frontend` directory, and start the frontend server by running `npm run serve`.

## Connecting to an LLM

On the right hand side click the "Add Client" button. If there is no button, you may need to toggle the client options by clicking this button:

*(screenshot)*

### Text-generation-webui

> :warning: As of version 0.13.0 the legacy text-generation-webui API (`--extension api`) is no longer supported; please use their new `--extension openai` API implementation instead.

In the modal, if you're planning to connect to text-generation-webui, you can likely leave everything as is and just click Save.

*(screenshot)*

#### Recommended Models

Any of the top models in any of the size classes here should work well (I wouldn't recommend going lower than 7B):

https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/

### OpenAI

If you want to add an OpenAI client, just change the client type and select the appropriate model.

*(screenshot)*

If you are setting this up for the first time, you should now see the client, but it will have a red dot next to it, stating that it requires an API key.

*(screenshot)*

Click the `SET API KEY` button. This will open a modal where you can enter your API key.

*(screenshot)*

Click `Save`, and after a moment the client should have a green dot next to it, indicating that it is ready to go.

*(screenshot)*

## Ready to go

You will know you are good to go when the client and all the agents have a green dot next to them.

*(screenshot)*

## Load the introductory scenario "Infinity Quest"

Generated using Talemate creative tools; mostly used for testing / demoing.

You can load it (and any other Talemate scenarios or save files) by expanding the "Load" menu in the top left corner and selecting the middle tab. Then simply search for a partial name of the scenario you want to load and click on the result.

*(screenshot)*

## Loading character cards

Supports both v1 and v2 chara specs.

Expand the "Load" menu in the top left corner and either click on "Upload a character card" or simply drag and drop a character card file into the same area.

*(screenshot)*

Once a character is uploaded, Talemate may take a moment, because it needs to convert the card to the Talemate format and will also run additional LLM prompts to generate character attributes and world state.

Make sure you save the scene after the character is loaded, as it can then be loaded as a normal Talemate scenario in the future.

## Further documentation

Please read the documents in the `docs` folder for more advanced configuration and usage.

- [Prompt template overrides](docs/templates.md)
- [Text-to-Speech (TTS)](docs/tts.md)
- [ChromaDB (long term memory)](docs/chromadb.md)
- [Runpod Integration](docs/runpod.md)
- Creative mode
- [Installation and Getting started](https://vegu-ai.github.io/talemate/)
- [User Guide](https://vegu-ai.github.io/talemate/user-guide/interacting/)
@@ -7,40 +7,6 @@ creator:
      - a thrilling action story
      - a mysterious adventure
      - an epic sci-fi adventure
game:
  world_state:
    templates:
      state_reinforcement:
        Goals:
          auto_create: false
          description: Long term and short term goals
          favorite: true
          insert: conversation-context
          instructions: Create a long term goal and two short term goals for {character_name}. Your response must only be the long term goal and two short term goals.
          interval: 20
          name: Goals
          query: Goals
          state_type: npc
        Physical Health:
          auto_create: false
          description: Keep track of health.
          favorite: true
          insert: sequential
          instructions: ''
          interval: 10
          name: Physical Health
          query: What is {character_name}'s current physical health status?
          state_type: character
        Time of day:
          auto_create: false
          description: Track night / day cycle
          favorite: true
          insert: sequential
          instructions: ''
          interval: 10
          name: Time of day
          query: What is the current time of day?
          state_type: world

## Long-term memory

@@ -48,6 +14,7 @@ game:
# embeddings: instructor
# instructor_device: cuda
# instructor_model: hkunlp/instructor-xl
# openai_model: text-embedding-3-small

## Remote LLMs
**docker-compose.yml** (new file, +27)

@@ -0,0 +1,27 @@

```yaml
version: '3.8'

services:
  talemate-backend:
    build:
      context: .
      dockerfile: Dockerfile.backend
    ports:
      - "5050:5050"
    volumes:
      # can uncomment for dev purposes
      #- ./src/talemate:/app/src/talemate
      - ./config.yaml:/app/config.yaml
      - ./scenes:/app/scenes
      - ./templates:/app/templates
      - ./chroma:/app/chroma
    environment:
      - PYTHONUNBUFFERED=1

  talemate-frontend:
    build:
      context: .
      dockerfile: Dockerfile.frontend
    ports:
      - "8080:8080"
    #volumes:
    #  - ./talemate_frontend:/app
```
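The compose file above bind-mounts `config.yaml` (a file) plus the `scenes`, `templates`, and `chroma` directories from the repo root. A minimal pre-flight check can be sketched like this; it runs in a scratch directory so it is self-contained, and the path list is taken from the compose file, not from any talemate tooling:

```shell
# Sketch: verify the bind-mount sources named in docker-compose.yml exist and
# are the right kind of path before running `docker compose up`.
# A scratch directory stands in for the repo root here.
cd "$(mktemp -d)"
touch config.yaml
mkdir -p scenes templates chroma

ok=true
[ -f config.yaml ] || { echo "config.yaml is missing or is a directory"; ok=false; }
for d in scenes templates chroma; do
  [ -d "$d" ] || { echo "$d is missing"; ok=false; }
done
$ok && echo "mounts look good"
```

A check like this matters because Docker silently creates an empty directory for any missing bind-mount source, which is exactly the `config.yaml` failure mode described in the troubleshooting doc later in this diff.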
**docs/.pages** (new file, +5)

@@ -0,0 +1,5 @@

```yaml
nav:
  - Home: index.md
  - Getting started: getting-started
  - User guide: user-guide
  - Developer guide: dev
```
**docs/dev/agents/example/test/__init__.py** (new file, +48)

@@ -0,0 +1,48 @@

```python
from talemate.agents.base import Agent, AgentAction
from talemate.agents.registry import register
from talemate.events import GameLoopEvent
import talemate.emit.async_signals
from talemate.emit import emit


@register()
class TestAgent(Agent):

    agent_type = "test"
    verbose_name = "Test"

    def __init__(self, client):
        self.client = client
        self.is_enabled = True
        self.actions = {
            "test": AgentAction(
                enabled=True,
                label="Test",
                description="Test",
            ),
        }

    @property
    def enabled(self):
        return self.is_enabled

    @property
    def has_toggle(self):
        return True

    @property
    def experimental(self):
        return True

    def connect(self, scene):
        super().connect(scene)
        talemate.emit.async_signals.get("game_loop").connect(self.on_game_loop)

    async def on_game_loop(self, emission: GameLoopEvent):
        """
        Called on the beginning of every game loop
        """

        if not self.enabled:
            return

        emit("status", status="info", message="Annoying you with a test message every game loop.")
```
**docs/dev/client/example/runpod_vllm/__init__.py** (new file, +130)

@@ -0,0 +1,130 @@

```python
"""
An attempt to write a client against the runpod serverless vllm worker.

This is close to functional, but since runpod serverless gpu availability is currently terrible, i have
been unable to properly test it.

Putting it here for now since i think it makes a decent example of how to write a client against a new service.
"""

import pydantic
import structlog
import runpod
import asyncio
import aiohttp
from talemate.client.base import ClientBase, ExtraField
from talemate.client.registry import register
from talemate.emit import emit
from talemate.config import Client as BaseClientConfig

log = structlog.get_logger("talemate.client.runpod_vllm")


class Defaults(pydantic.BaseModel):
    max_token_length: int = 4096
    model: str = ""
    runpod_id: str = ""


class ClientConfig(BaseClientConfig):
    runpod_id: str = ""


@register()
class RunPodVLLMClient(ClientBase):
    client_type = "runpod_vllm"
    conversation_retries = 5
    config_cls = ClientConfig

    class Meta(ClientBase.Meta):
        title: str = "Runpod VLLM"
        name_prefix: str = "Runpod VLLM"
        enable_api_auth: bool = True
        manual_model: bool = True
        defaults: Defaults = Defaults()
        extra_fields: dict[str, ExtraField] = {
            "runpod_id": ExtraField(
                name="runpod_id",
                type="text",
                label="Runpod ID",
                required=True,
                description="The Runpod ID to connect to.",
            )
        }

    def __init__(self, model=None, runpod_id=None, **kwargs):
        self.model_name = model
        self.runpod_id = runpod_id
        super().__init__(**kwargs)

    @property
    def experimental(self):
        return False

    def set_client(self, **kwargs):
        log.debug("set_client", kwargs=kwargs, runpod_id=self.runpod_id)
        self.runpod_id = kwargs.get("runpod_id", self.runpod_id)

    def tune_prompt_parameters(self, parameters: dict, kind: str):
        super().tune_prompt_parameters(parameters, kind)

        keys = list(parameters.keys())

        valid_keys = ["temperature", "top_p", "max_tokens"]

        for key in keys:
            if key not in valid_keys:
                del parameters[key]

    async def get_model_name(self):
        return self.model_name

    async def generate(self, prompt: str, parameters: dict, kind: str):
        """
        Generates text from the given prompt and parameters.
        """
        prompt = prompt.strip()

        self.log.debug("generate", prompt=prompt[:128] + " ...", parameters=parameters)

        try:
            async with aiohttp.ClientSession() as session:
                endpoint = runpod.AsyncioEndpoint(self.runpod_id, session)

                run_request = await endpoint.run({
                    "input": {
                        "prompt": prompt,
                    }
                    #"parameters": parameters
                })

                while (await run_request.status()) not in ["COMPLETED", "FAILED", "CANCELLED"]:
                    status = await run_request.status()
                    log.debug("generate", status=status)
                    await asyncio.sleep(0.1)

                status = await run_request.status()

                log.debug("generate", status=status)

                response = await run_request.output()

                log.debug("generate", response=response)

                return response["choices"][0]["tokens"][0]

        except Exception as e:
            self.log.error("generate error", e=e)
            emit(
                "status", message="Error during generation (check logs)", status="error"
            )
            return ""

    def reconfigure(self, **kwargs):
        if kwargs.get("model"):
            self.model_name = kwargs["model"]
        if "runpod_id" in kwargs:
            self.api_auth = kwargs["runpod_id"]
        log.warning("reconfigure", kwargs=kwargs)
        self.set_client(**kwargs)
```
**docs/dev/client/example/test/__init__.py** (new file, +67)

@@ -0,0 +1,67 @@

```python
import pydantic
from openai import AsyncOpenAI

from talemate.client.base import ClientBase
from talemate.client.registry import register


class Defaults(pydantic.BaseModel):
    api_url: str = "http://localhost:1234"
    max_token_length: int = 4096


@register()
class TestClient(ClientBase):
    client_type = "test"

    class Meta(ClientBase.Meta):
        name_prefix: str = "test"
        title: str = "Test"
        defaults: Defaults = Defaults()

    def set_client(self, **kwargs):
        self.client = AsyncOpenAI(base_url=self.api_url + "/v1", api_key="sk-1111")

    def tune_prompt_parameters(self, parameters: dict, kind: str):
        """
        Talemate adds a bunch of parameters to the prompt, but not all of them are valid for all clients.

        This method is called before the prompt is sent to the client, and it allows the client to remove
        any parameters that it doesn't support.
        """

        super().tune_prompt_parameters(parameters, kind)

        keys = list(parameters.keys())

        valid_keys = ["temperature", "top_p"]

        for key in keys:
            if key not in valid_keys:
                del parameters[key]

    async def get_model_name(self):
        """
        This should return the name of the model that is being used.
        """

        return "Mock test model"

    async def generate(self, prompt: str, parameters: dict, kind: str):
        """
        Generates text from the given prompt and parameters.
        """
        human_message = {"role": "user", "content": prompt.strip()}

        self.log.debug("generate", prompt=prompt[:128] + " ...", parameters=parameters)

        try:
            response = await self.client.chat.completions.create(
                model=self.model_name, messages=[human_message], **parameters
            )

            return response.choices[0].message.content
        except Exception as e:
            self.log.error("generate error", e=e)
            return ""
```
**docs/dev/index.md** (new file, +3)

@@ -0,0 +1,3 @@

# Coming soon

Developer documentation is coming soon. Stay tuned!
@@ -1,4 +1,7 @@
# Template Overrides in Talemate
# Template Overrides

!!! warning "Old documentation"
    This is old documentation and needs to be updated; however, it may still contain useful information.

## Introduction to Templates

@@ -23,9 +26,9 @@ The creator agent templates allow for the creation of new characters within the

### Example Templates

- [Character Attributes Human Template](src/talemate/prompts/templates/creator/character-attributes-human.jinja2)
- [Character Details Human Template](src/talemate/prompts/templates/creator/character-details-human.jinja2)
- [Character Example Dialogue Human Template](src/talemate/prompts/templates/creator/character-example-dialogue-human.jinja2)
- `src/talemate/prompts/templates/creator/character-attributes-human.jinja2`
- `src/talemate/prompts/templates/creator/character-details-human.jinja2`
- `src/talemate/prompts/templates/creator/character-example-dialogue-human.jinja2`

These example templates can serve as a guide for users to create their own custom templates for the character creator.
**docs/getting-started/.pages** (new file, +5)

@@ -0,0 +1,5 @@

```yaml
nav:
  - 1. Installation: installation
  - 2. Connect a client: connect-a-client.md
  - 3. Load a scene: load-a-scene.md
  - ...
```
**docs/getting-started/connect-a-client.md** (new file, +68)

@@ -0,0 +1,68 @@

# Connect a client

Once Talemate is up and running and you are connected, you will see a notification in the corner instructing you to configure a client.

*(screenshot)*

Talemate uses clients to connect to local or remote AI text generation APIs like koboldcpp, text-generation-webui or OpenAI.

## Add a new client

On the right hand side click the **:material-plus-box: ADD CLIENT** button.

*(screenshot)*

!!! note "No button?"
    If there is no button, you may need to toggle the client options by clicking this button

    *(screenshot)*

The client configuration window will appear. Here you can choose the type of client you want to add.

*(screenshot)*

## Choose an API / Client Type

We have support for multiple local and remote APIs. You can choose to use one or more of them.

!!! note "Local vs remote"
    A local API runs on your machine, while a remote API runs on a server somewhere else.

Select the API you want to use and click through to follow the instructions to configure a client for it:

##### Remote APIs

- [OpenAI](/talemate/user-guide/clients/types/openai/)
- [Anthropic](/talemate/user-guide/clients/types/anthropic/)
- [mistral.ai](/talemate/user-guide/clients/types/mistral/)
- [Cohere](/talemate/user-guide/clients/types/cohere/)
- [Groq](/talemate/user-guide/clients/types/groq/)
- [Google Gemini](/talemate/user-guide/clients/types/google/)

##### Local APIs

- [KoboldCpp](/talemate/user-guide/clients/types/koboldcpp/)
- [Text-Generation-WebUI](/talemate/user-guide/clients/types/text-generation-webui/)
- [LMStudio](/talemate/user-guide/clients/types/lmstudio/)
- [TabbyAPI](/talemate/user-guide/clients/types/tabbyapi/)

##### Unofficial OpenAI API implementations

- [DeepInfra](/talemate/user-guide/clients/types/openai-compatible/#deepinfra)
- llamacpp with the `api_like_OAI.py` wrapper

## Assign the client to the agents

Whenever you add your first client, Talemate will automatically assign it to all agents. Once the client is configured and assigned, all agents should have a green dot next to them (or grey if the agent is currently disabled).

*(screenshot)*

You can tell the client is assigned to the agent by checking the tag beneath the agent name, which will contain the client name if it is assigned.

*(screenshot)*

## It's not assigned!

If for some reason the client is not assigned to the agent, you can manually assign it to all agents by clicking the **:material-transit-connection-variant: Assign to all agents** button.

*(screenshot)*
**docs/getting-started/installation/.pages** (new file, +5)

@@ -0,0 +1,5 @@

```yaml
nav:
  - windows.md
  - linux.md
  - docker.md
  - ...
```
**docs/getting-started/installation/docker.md** (new file, +17)

@@ -0,0 +1,17 @@

!!! example "Experimental"
    Talemate through docker has not received a lot of testing from me, so please let me know if you encounter any issues.

    You can do so by creating an issue on the [:material-github: GitHub repository](https://github.com/vegu-ai/talemate)

## Quick install instructions

1. `git clone https://github.com/vegu-ai/talemate.git`
1. `cd talemate`
1. Copy the config file:
    - linux: `cp config.example.yaml config.yaml`
    - windows: `copy config.example.yaml config.yaml`
1. `docker compose up`
1. Navigate your browser to http://localhost:8080

!!! note
    When connecting to local APIs running on the host machine (e.g. text-generation-webui), you need to use `host.docker.internal` as the hostname.
@@ -1,3 +1,19 @@

## Quick install instructions

!!! warning
    python 3.12 and node.js v21 are currently not supported.

1. `git clone https://github.com/vegu-ai/talemate.git`
1. `cd talemate`
1. `source install.sh`
1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050`.
1. Open a new terminal, navigate to the `talemate_frontend` directory, and start the frontend server by running `npm run serve`.

If everything went well, you can proceed to [connect a client](../../connect-a-client).

## Additional Information

### Setting Up a Virtual Environment

1. Open a terminal.
**docs/getting-started/installation/troubleshoot.md** (new file, +28)

@@ -0,0 +1,28 @@

# Common issues

## Windows

### Installation fails with "Microsoft Visual C++" error

If your installation errors with a notification to upgrade "Microsoft Visual C++", go to https://visualstudio.microsoft.com/visual-cpp-build-tools/, click "Download Build Tools", and run it.

- During installation make sure you select the C++ development package (upper left corner)
- Run `reinstall.bat` inside the talemate directory

## Docker

### Docker has created a `config.yaml` directory

If you do not copy the example config to `config.yaml` before running `docker compose up`, docker will create a `config.yaml` directory in the root of the project. This will cause the backend to fail to start.

This happens because we mount the config file directly as a docker volume, and if it does not exist docker will create a directory with the same name.

This will eventually be fixed; for now, please make sure to copy the example config file before running the docker compose command.
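The recovery for the `config.yaml`-directory problem above can be sketched as follows. It is demonstrated in a scratch directory so the example is self-contained; in the real repo you would run the `rm` and `cp` from the talemate root, with the containers stopped:

```shell
# Reproduce the failure mode, then apply the fix.
cd "$(mktemp -d)"
echo "example: true" > config.example.yaml
mkdir config.yaml                    # simulate the directory docker created
rm -rf config.yaml                   # remove the wrongly-created directory
cp config.example.yaml config.yaml   # put the real config file in its place
[ -f config.yaml ] && echo "config.yaml is now a file"
```

After this, `docker compose up` mounts `config.yaml` as a file, as the compose file expects.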
## General

### Running behind a reverse proxy with SSL

Personally I have not been able to make this work yet, but it's on my list. The issue stems from some Vue oddities when specifying the base URLs while running in a dev environment. I expect once I start building the project for production this will be resolved.

If you do make it work, please reach out to me so I can update this documentation.
@@ -1,16 +1,31 @@
### How to Install Python 3.10
## Quick install instructions

1. Visit the official Python website's download page for Windows at https://www.python.org/downloads/windows/.
!!! warning
    python 3.12 and node.js v21 are currently not supported.

1. Download and install Python 3.10 or Python 3.11 from the [official Python website](https://www.python.org/downloads/windows/).
1. Download and install Node.js v20 from the [official Node.js website](https://nodejs.org/en/download/). This will also install npm.
1. Download the Talemate project to your local machine from [the Releases page](https://github.com/vegu-ai/talemate/releases).
1. Unpack the download and run `install.bat` by double-clicking it. This will set up the project on your local machine.
1. Once the installation is complete, you can start the backend and frontend servers by running `start.bat`.
1. Navigate your browser to http://localhost:8080
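
The Node.js requirement above can also be checked against the output of `node --version`. This helper is illustrative, not part of Talemate; it encodes only what the warning states (v20 is the documented target, v21 is unsupported) and conservatively rejects other majors as untested:

```python
def node_supported(version_string: str) -> bool:
    """Parse `node --version` output (e.g. "v20.11.1") and check the major
    version. v20 is the documented target; v21 is explicitly unsupported,
    and other majors are untested here, so they are rejected as well."""
    major = int(version_string.strip().lstrip("v").split(".")[0])
    return major == 20
```

For example, `node_supported("v21.0.0")` returns `False`, matching the warning.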

If everything went well, you can proceed to [connect a client](../../connect-a-client).

## Additional Information

### How to Install Python 3.10 or 3.11

1. Visit the official Python website's download page for Windows at [https://www.python.org/downloads/windows/](https://www.python.org/downloads/windows/).
2. Click on the link for the latest Python 3.10.x release.
3. Scroll to the bottom and select either the Windows x86-64 executable installer (64-bit) or the Windows x86 executable installer (32-bit).
4. Run the installer file and follow the setup instructions. Make sure to check the box that says "Add Python 3.10 to PATH" before you click "Install Now".

### How to Install npm

1. Download Node.js from the official site https://nodejs.org/en/download/.
1. Download Node.js from the official site [https://nodejs.org/en/download/](https://nodejs.org/en/download/).
2. Run the installer (the .msi installer is recommended).
3. Follow the prompts in the installer (accept the license agreement and the default installation settings).
4. Restart your computer. You won't be able to run Node.js until you restart.

### Usage of the Supplied bat Files

54
docs/getting-started/load-a-scene.md
Normal file
@@ -0,0 +1,54 @@
# Load a scenario

Once you've set up a client and assigned it to all the agents, you will be presented with the `Home` screen. From here, you can load talemate scenarios and upload character cards.

To load the introductory `Infinity Quest` scenario, simply click on its entry in the `Quick Load` section.

## Interacting with the scenario

After a moment of loading, you will see the scenario's introductory message and be able to send a text interaction.

It's time to send the first message.

Spoken words should go into `"` and actions should be written in `*`. Talemate will automatically supply the other if you supply only one.
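
As a toy sketch of that convention (not Talemate's actual implementation), input that already carries either marker is left alone, while bare text is treated as speech:

```python
def annotate_input(text: str) -> str:
    """Toy illustration of the input convention above, not Talemate's code:
    dialogue is wrapped in double quotes, actions in asterisks, and plain
    text that carries neither marker is assumed to be speech."""
    text = text.strip()
    if '"' in text or "*" in text:
        return text  # the user already marked speech and/or actions
    return f'"{text}"'
```

So `*waves* Hello there` is passed through unchanged, while `Hello there` becomes `"Hello there"`.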

Once sent, it's the AI's turn to respond. Depending on the service and model selected, this can take a moment.

## Quick overview of UI elements

### Scenario tools

Above the chat input there is a set of tools to help you interact with the scenario.

These contain tools to, for example:

- regenerate the most recent AI response
- give directions to characters
- narrate the scene
- advance time
- save the current scene state
- and more ...

A full guide can be found in the [Scenario Tools](/talemate/user-guide/scenario-tools) section of the user guide.

### World state

Shows a summarization of the current scene state.

Each item can be expanded for more information.

Find out more about the world state in the [World State](/talemate/user-guide/world-state) section of the user guide.
BIN docs/img/0.19.0/Screenshot_15.png (Normal file, 418 KiB)
BIN docs/img/0.19.0/Screenshot_16.png (Normal file, 413 KiB)
BIN docs/img/0.19.0/Screenshot_17.png (Normal file, 364 KiB)
BIN docs/img/0.20.0/comfyui-base-workflow.png (Normal file, 128 KiB)
BIN docs/img/0.20.0/visual-config-a1111.png (Normal file, 32 KiB)
BIN docs/img/0.20.0/visual-config-comfyui.png (Normal file, 34 KiB)
BIN docs/img/0.20.0/visual-config-openai.png (Normal file, 30 KiB)
BIN docs/img/0.20.0/visual-queue.png (Normal file, 933 KiB)
BIN docs/img/0.20.0/visualize-scene-tools.png (Normal file, 13 KiB)
BIN docs/img/0.20.0/visualizer-busy.png (Normal file, 3.5 KiB)
BIN docs/img/0.20.0/visualizer-ready.png (Normal file, 2.9 KiB)
BIN docs/img/0.20.0/visualze-new-images.png (Normal file, 1.8 KiB)
BIN docs/img/0.21.0/deepinfra-setup.png (Normal file, 56 KiB)
BIN docs/img/0.21.0/no-clients.png (Normal file, 7.1 KiB)
BIN docs/img/0.21.0/openai-add-api-key.png (Normal file, 35 KiB)
BIN docs/img/0.21.0/openai-setup.png (Normal file, 20 KiB)
BIN docs/img/0.21.0/prompt-template-default.png (Normal file, 17 KiB)
BIN docs/img/0.21.0/ready-to-go.png (Normal file, 43 KiB)
BIN docs/img/0.21.0/select-prompt-template.png (Normal file, 47 KiB)
BIN docs/img/0.21.0/selected-prompt-template.png (Normal file, 49 KiB)
BIN docs/img/0.21.0/text-gen-webui-setup.png (Normal file, 26 KiB)
BIN docs/img/0.25.0/google-add-client.png (Normal file, 25 KiB)
BIN docs/img/0.25.0/google-cloud-setup.png (Normal file, 59 KiB)
BIN docs/img/0.25.0/google-ready.png (Normal file, 5.3 KiB)
BIN docs/img/0.25.0/google-setup-incomplete.png (Normal file, 7.5 KiB)
BIN docs/img/0.26.0/agent-disabled.png (Normal file, 1.1 KiB)
BIN docs/img/0.26.0/agent-enabled.png (Normal file, 1.6 KiB)
BIN docs/img/0.26.0/agent-has-client-assigned.png (Normal file, 1.1 KiB)
BIN docs/img/0.26.0/anthropic-settings.png (Normal file, 43 KiB)
BIN docs/img/0.26.0/auto-progress-off.png (Normal file, 1.4 KiB)
BIN docs/img/0.26.0/autosave-blocked.png (Normal file, 1.3 KiB)
BIN docs/img/0.26.0/autosave-disabled.png (Normal file, 1.2 KiB)
BIN docs/img/0.26.0/autosave-enabled.png (Normal file, 1.3 KiB)
BIN docs/img/0.26.0/client-anthropic-no-api-key.png (Normal file, 8.7 KiB)
BIN docs/img/0.26.0/client-anthropic-ready.png (Normal file, 8.0 KiB)
BIN docs/img/0.26.0/client-anthropic.png (Normal file, 22 KiB)
BIN docs/img/0.26.0/client-assigned-prompt-template.png (Normal file, 46 KiB)
BIN docs/img/0.26.0/client-cohere-no-api-key.png (Normal file, 8.4 KiB)
BIN docs/img/0.26.0/client-cohere-ready.png (Normal file, 7.0 KiB)
BIN docs/img/0.26.0/client-cohere.png (Normal file, 20 KiB)
BIN docs/img/0.26.0/client-deepinfra-ready.png (Normal file, 8.6 KiB)
BIN docs/img/0.26.0/client-deepinfra.png (Normal file, 78 KiB)
BIN docs/img/0.26.0/client-google-creds-missing.png (Normal file, 9.3 KiB)
BIN docs/img/0.26.0/client-google-ready.png (Normal file, 7.9 KiB)
BIN docs/img/0.26.0/client-google.png (Normal file, 26 KiB)
BIN docs/img/0.26.0/client-groq-no-api-key.png (Normal file, 8.1 KiB)
BIN docs/img/0.26.0/client-groq-ready.png (Normal file, 6.9 KiB)
BIN docs/img/0.26.0/client-groq.png (Normal file, 20 KiB)
BIN docs/img/0.26.0/client-hibernate-1.png (Normal file, 19 KiB)
BIN docs/img/0.26.0/client-hibernate-2.png (Normal file, 10 KiB)
BIN docs/img/0.26.0/client-koboldcpp-could-not-connect.png (Normal file, 7.5 KiB)
BIN docs/img/0.26.0/client-koboldcpp-ready.png (Normal file, 7.8 KiB)
BIN docs/img/0.26.0/client-koboldcpp.png (Normal file, 28 KiB)
BIN docs/img/0.26.0/client-lmstudio-could-not-connect.png (Normal file, 7.3 KiB)
BIN docs/img/0.26.0/client-lmstudio-ready.png (Normal file, 7.7 KiB)
BIN docs/img/0.26.0/client-lmstudio.png (Normal file, 27 KiB)
BIN docs/img/0.26.0/client-mistral-no-api-key.png (Normal file, 8.6 KiB)
BIN docs/img/0.26.0/client-mistral-ready.png (Normal file, 7.6 KiB)
BIN docs/img/0.26.0/client-mistral.png (Normal file, 21 KiB)
BIN docs/img/0.26.0/client-ooba-could-not-connect.png (Normal file, 8.2 KiB)
BIN docs/img/0.26.0/client-ooba-no-model-loaded.png (Normal file, 8.1 KiB)
BIN docs/img/0.26.0/client-ooba-ready.png (Normal file, 9.2 KiB)
BIN docs/img/0.26.0/client-ooba.png (Normal file, 29 KiB)
BIN docs/img/0.26.0/client-openai-no-api-key.png (Normal file, 8.5 KiB)
BIN docs/img/0.26.0/client-openai-ready.png (Normal file, 6.7 KiB)
BIN docs/img/0.26.0/client-openai.png (Normal file, 20 KiB)
BIN docs/img/0.26.0/client-tabbyapi-could-not-connect.png (Normal file, 7.4 KiB)
BIN docs/img/0.26.0/client-tabbyapi-ready.png (Normal file, 8.3 KiB)
BIN docs/img/0.26.0/client-tabbyapi.png (Normal file, 39 KiB)
BIN docs/img/0.26.0/client-unknown-prompt-template-modal.png (Normal file, 49 KiB)
BIN docs/img/0.26.0/client-unknown-prompt-template.png (Normal file, 18 KiB)
BIN docs/img/0.26.0/cohere-settings.png (Normal file, 42 KiB)
BIN docs/img/0.26.0/connect-a-client-add-client-modal.png (Normal file, 29 KiB)
BIN docs/img/0.26.0/connect-a-client-add-client.png (Normal file, 7.1 KiB)
BIN docs/img/0.26.0/connect-a-client-assign-to-all-agents.png (Normal file, 10 KiB)
BIN docs/img/0.26.0/connect-a-client-ready.png (Normal file, 43 KiB)
BIN docs/img/0.26.0/conversation-agent-settings.png (Normal file, 64 KiB)
BIN docs/img/0.26.0/create-new-scene-test.png (Normal file, 246 KiB)
BIN docs/img/0.26.0/create-new-scene.png (Normal file, 3.4 KiB)