Compare commits

...

31 Commits

Author SHA1 Message Date
veguAI
113553c306 0.29.0 (#167)
* set 0.29.0

* tweaks for dig layered history (wip)

* move director agent to directory

* relock

* remove "none" from dig_layered_history response

* determine character development

* update character sheet from character development (wip)

* org imports

* alert outdated template overrides during startup

* editor controls normalization of exposition

* dialogue formatting refactor

* fix narrator.clean_result forcing * regardless of editor fix exposition setting

* move more of the dialogue cleanup logic into the editor fix exposition handlers

* remove cruft

* change to normal selects and add some margin

* move formatting option up

* always strip partial sentences

* separates exposition fixes from other dialogue cleanup operations, since we still want those

* add novel formatting style

* honor formatting config when no markers are supplied

* fix issue where sometimes character message formatting would miss character name

* director can now guide actors through scene analysis

* style fixes

* typo

* select correct system message on direction type

* prompt tweaks

* disable by default

* add support for dynamic instruction injection and include missing guide for internal note usage

* change favicon and also indicate busyness through favicon

* img

* support xtc, dry and smoothing in text gen webui

* prompt tweaks

* support xtc, dry, smoothing in koboldcpp client

* reorder

* dry, xtc and smoothing factor exposed to tabby api client

* urls to third party API documentation

* remove bos token

* add missing preset

* focal

* focal progress

* focal progress and generated suggestions progress

* fix issue with discard all suggestions

* apply suggestions

* move suggestion ux into the world state manager

* support generation options for suggestion generation

* unused import

* refactor focal to json based approach

* focal and character suggestion tweaks

* remove cruft

* remove cruft

* relock

* prompt tweaks

* layout spacing updates

* ux elements for removal of scenes from quick load menu

* context investigation refactor WIP

* context investigation refactor

* context investigation refactor

* context investigation refactor

* cleanup

* move scene analysis to summarizer agent

* remove deprecated context investigation logic

* context investigation refactor continued - split into separate file for easier maint

* allow direct specification of response context length

* context investigation and scene analysis progress

* change analysis length config to number

* remove old dig-layered-history templates

* summarizer - deep analysis is only available if there is layered history

* move world_state agent to dedicated directory

* remove unused imports

* automatic character progression WIP

* character suggestions progress

* app busy flag based on agent busyness

* indicate suggestions in world state overview

* fix issue with user input cleanup

* move conversation agent to a dedicated submodule

* Response in action analyze_text_and_extract_context is too short #162

* move narrator agent to its own submodule

* narrator improvements WIP

* narration improvements WIP

* fix issue with regen of character exit narration

* narration improvements WIP

* prompt tweaks

* last_message_of_type can set max iterations

* fix multiline parsing

* prompt tweaks

* director guides actors based on scene analysis

* director guidance for actors

* prompt tweaks

* prompt tweaks

* prompt tweaks

* fix automatic character proposals not propagating to the ux

* fix analysis length

* support director guidance in legacy chat format

* typo

* prompt tweaks

* prompt tweaks

* error handling

* length config

* prompt tweaks

* typo

* remove cruft

* prompt tweak

* prompt tweak

* time passage style changes

* remove cruft

* deep analysis context investigations honor call limit

* refactor conversation agent long term memory to use new memory rag mixin - also streamline prompts

* tweaks to RAG mixin agent config

* fix narration highlighting

* context investigation fixes
director narration guidance
summarization tweaks

* director guide narration progress
context investigation fixes for looping investigations and failure to dig into the correct layers

* prompt tweaks

* summarization improvements

* separate deep analysis chapter selection from analysis into its own prompt

* character entry and exit

* cache analysis per subtype and some narrator prompt tweaks

* separate layered history logic into its own summarizer mixin and expose some additional options

* scene can now set an overall writing style using writing style templates
narrator option to enable writing style

* narrate query writing style support

* scene tools - narrator actions refactor to handler and own component

* narrator query / look at narrations emitted as context investigation messages
refactor context investigation message display
scene message meta data object

* include narrative direction

* improve context investigation message prompt insert

* reorg supported parameters

* fix bug when no message history exists

* WIP make regenerate work nicely with director guidance

* WIP make regenerate work nicely with director guidance

* regenerate conversation fixes

* help text

* ux tweaks

* relock

* turn off deep analysis and context investigations by default

* long term memory options for director and summarizer

* long term memory caching

* fix summarization cache toggle not showing up in ux

* ux tweaks

* layered history summarization includes character information for mentioned characters

* deepseek client added

* Add fork button to narrator message

* analyze and guidance support for time passage narration

* cache based on message fingerprint instead of id

* configurable system prompts WIP

* configurable system prompts WIP

* client overrides for system prompts wired to ux

* system prompt overhaul

* fix issue with unknown system prompt kind

* add button to manually request dynamic choices from the director
move the generate choices logic of the director agent to its own submodule

* remove cruft

* 30 may be too long and is causing the client to disappear temporarily

* support dynamic choice generation for non-player characters

* enable `actor` tab for player characters

* creator agent now has access to rag tools
improve acting instruction generation

* client timeout fixes

* fix issue where scene removal menu stayed open after remove

* expose scene restore functionality to ux

* create initial restore point

* fix creator extra-context template

* didn't mean to remove this

* intro scene should be edited through world editor

* fix alert

* fix partial quotes regardless of editor setting
director guidance for conversation reminds the model to put speech in quotes

* fix @ instructions not being passed through to director guidance prompt

* anthropic model list updated

* default off

* cohere model list updated

* reset actAs on next scene load

* prompt tweaks

* prompt tweaks

* prompt tweaks

* prompt tweaks

* prompt tweaks

* remove debug cruft

* relock

* docs on changing host / port

* fix issue with narrator / director actions not available on fresh install

* fix issue with long content classification determination result

* take this reminder to put speech into quotes out for now, it seems to do more harm than good

* fix some remaining issues with auto exposition fixes

* prompt tweaks

* prompt tweaks

* fix issue during reload

* expensive and warning ux passthrough for agent config

* layered summary analysis defaults to on

* what's new info block added

* docs

* what's new updated

* remove old images

* old img cleanup script

* prompt tweaks

* improve auto prompt template detection via huggingface

* add gpt-4o-realtime-preview
add gpt-4o-mini-realtime-preview

* add o1 and o3-mini

* fix o1 and o3

* fix o1 and o3

* more o1 / o3 fixes

* o3 fixes
2025-02-01 17:44:06 +02:00
Kaldigo
736e6702f5 Dockerfile Update (#174)
* Update Dockerfile

Replaced --no-dev with --only main

* Update Dockerfile

Updated command
2025-01-30 02:29:44 +02:00
veguAI
80256012ad 0.28.0 (#148)
* fix issue where saving a new scene would save into a "new scenario" directory instead of a relevantly named directory

* implement function to fork new scene file from specific message

* dynamic choice generation

* dynamic choice generation progress

* prompt tweaks

* disable choice generation by default
prompt tweaks

* prompt tweaks for assisted RAG tasks

* allow analyze_text_and_extract_context to include character context

* more prompt tweaks for RAG assist during conversation generation

* open director settings from dynamic action dialog

* adjust wording

* remove player choice message if the trigger message is removed (or regenerated)

* fix issue with dialogue cleanup where narration over multiple lines would end up being marked incorrectly

* dynamic action generation custom instructions
dynamic action generation narration for sensory actions

* fix actions when acting as another character

* 0.28.0

* conversation agent: split out generation settings, add actor instructions extension, add actor instruction offset slider

* prompt tweaks

* fix ai message regenerate if generated from choice

* cruft

* layered history implementation through summarizer
summarization tweaks

* show layered history in ux

* layered history fixes and tweaks
conversation actor instruction fixes

* more summarization fixes

* fix missing actor instructions

* prompt tweaks

* prompt tweaks

* force lower case when checking sensory type

* agent modal polish
implement find-natural-scene-termination summarizer action
some summarization tweaks

* integrate find_natural_scene_termination with layered history

* collect all denouements at once

* relock

* fix some issues with screenplay type formatting in conversation agent

* cleanup

* revert layered history summarization to use max_process_tokens instead of using ai to find scene termination, as that process falls apart in layer 1 and higher; at that point every item is a scene in itself.

* implement ai assisted digging through layered history to answer queries

* dig_layered_history tweaks and improvements

* prompt tweaks

* adjust budget

* adjust budget for RAG context

* layered_history disabled by default

* prompt tweaks to reinforcement updates

* prompt tweaks

* dig layered history - response without function call to be treated as answer

* clarify style keywords to avoid bleeding into the prompt as subject matter

* fix issue with cover image updates

* fix missing dialogue from context history

* fix issue where new scenes wouldn't load

* fix crash with layered summarization

* more context history fixes

* fix assured dialogue message in context history

* prompt tweaks

* tweaks to layered history generation

* prompt tweaks

* conversation agent can dig layered history for extra context

* some fixes to dig layered history

* scene fork adjust layered history

* layered history status indication

* allow configuration of message styles and colors

* fix issue where layered history generate would get stuck on layer 0

* dig layered history default to false

* prompt tweaks

* context investigation messages

* tweaks to context investigation

* context investigation polish of UX and allow specifying trigger

* prompt tweaks

* allow hiding of ci and director messages

* wire ci shortcut buttons

* prompt tweaks

* prompt tweaks

* carry on analysis when digging layered history

* improve quality of generate choices by anchoring to last line in the scene

* update hint message

* prompt tweaks

* change default value for max_process_tokens

* docs

* dig layered history only if there are layers

* always enforce num choices limit

* relock

* typos

* prompt tweaks

* docs for forking a scene

* prompt tweaks

* world editor rubber banding fixes follow up

* layered history cleanup fixes

* gracefully handle malformed dig() call

* handle malformed answer() call

* only generate choices if last content isn't player message

* include more context in autocomplete prompts

* prompt tweaks

* typo

* fix issue where inactive characters could not be deleted

* more character delete bugs

* dig layered history fixes

* discard empty context investigations

* fix issue with autocomplete no longer working in world editor

* prompt tweaks

* support single quotes

* prompt tweaks

* fix issue with context investigation if final message was narrator text

* Include the query in the context investigation message

* context investigations should note when historic events occurred

* instructions on how to use internal notes

* time_diff returns empty string when no time is supplied

* prompt tweaks

* fix date calculations for historic entries

* change default values

* prompt tweaks

* fix history regenerate continuing through page reload

* reorganize websocket tasks

* allow cancelling of history regenerate

* Capitalize first letter of summarization

* include base layer in context investigations

* prompt tweaks

* fix issue where context investigations would expand too much of the history at once

* attempt to determine character knowledge during context investigation

* prompt tweaks

* prompt tweaks

* fix missing timestamps

* more context during layer history digging

* fix issue with act-as not being able to select past the first npc if a scene had more than one active npc in it

* docs

* error handling for malformed answer call

* timestamp calculation fixes and summarization improvements

* lock message manipulation while the ux is busy

* prompt tweaks

* toggling 'log debug messages' will log all messages to console even if no filter is specified

* layered history generation cancellable from ux

* prevent loading scene while another scene is currently loading

* improvements to choice generation prompt and error handling

* prompt tweaks

* prompt tweaks

* prompt tweaks

* fix issue with successive scene load not working

* correctly display timestamps and generated layers during history regen

* summarization improvements

* clean up context investigation prompt

* prompt tweaks

* increase response token size for dig_layered_history

* define missing presets

* missing preset

* prompt tweaks

* fix simulation suite

* attach punkt download to backend start, not frontend start

* dig layered history fixes

* prompt tweaks

* fix summarize_and_pin

* more fixes for time calculations

* relock

* prompt tweaks

* remove dupe entry from layered history

* bash version of update script

* prompt tweaks

* layered history defaults to enabled

* default decreased to 0.3 chance

* fix multi character natural flow selection with clients that don't support LLM coercion

* fix simulation suite call to change a character

* typo

* remove deprecated test

* use python3

* add missing 4o models

* add proper configs for 4o models

* prompt tweaks

* update reinforcement prompt ignores context investigations

* scene.snapshot formatting and dig_layered_history ignores reinforcements

* use end date instead of start date

* Reword 'Moments ago' to 'Recently' as it is more forgiving and applicable to longer time ranges

* fix time calculation issues during summarization of new entries

* no need for scoping

* dont display as range if start and end of entry are identical

* prompt tweaks
2024-11-24 15:43:27 +02:00
veguAI
bb1cf6941b 0.27.0 (#137)
* move memory agent to directory structure

* chromadb settings rework

* memory agent improvements
embedding presets
support switching embeddings without restart
support custom sentence transformer embeddings

* toggle to hide / show disabled clients

* add memory debug tools

* chromadb no longer needs its dedicated config entry

* add missing emits

* fix initial value

* hidden disabled clients no longer cause enumeration issues with client actions

* improve memory agent error handling and hot reloading

* more memory agent error handling

* DEBUG_MEMORY_REQUESTS off

* relock

* sim suite: fix issue with removing or changing characters

* relock

* fix issue where actor dialogue editor would break with multiple characters in the scene

* remove cruft

* implement interrupt function

* margin adjustments

* fix rubber banding issue in world editor when editing certain text fields

* status notification when re-importing vectordb due to embeddings change

* properly open new client context on agent actions

* move jiggle apply to the end of prompt tune stack

* narrator agent length limit and jiggle settings added - also improve post generation cleanup

* progress story prompt improvements

* narrator prompt and cleanup tweaks

* prompt tweak

* revert

* autocomplete dialogue improvements

* Unified process (#141)

* progress to unified process

* --dev arg

* use gunicorn to serve built frontend

* gunicorn config adjustments

* remove dist from gitignore

* revert

* uvicorn instead

* save decode

* graceful shutdown

* refactor unified process

* clean up frontend log messages

* more logging fixes

* 0.27.0

* startup message

* clean up scripts a bit

* fixes to update.bat

* fixes to install.bat

* sim suite supports generation cancellation

* debug

* simplify narrator prompts

* prompt tweaks

* unified docker file

* update docker compose config for unified docker file

* cruft

* fix startup in linux docker

* download punkt so its available

* prompt tweaks

* fix bug when editing scene outline would wipe message history

* add o1 models

* add sampler, scheduler and cfg config to a1111 visualizer

* update installation docs

* visualizer configurable timeout

* memory agent docs

* docs

* relock

* relock

* fix issue where changing embeddings on immutable scene would hang

* remove debug message

* take torch install out of poetry since conditionals don't work.

* torch gets installed through some dependency so put it back into poetry, but reinstall with cuda if cuda support exists

* fix install syntax

* no need for torchvision

* torch cuda install added to linux install script

* add torch cuda install to update.bat

* docs

* docs

* relock

* fix install.sh

* handle torch+cuda install in docker

* docs

* typo
2024-09-23 12:55:34 +03:00
veguAI
2c8b4b8186 Update README.md 2024-07-26 21:51:07 +03:00
veguAI
95a17197ba 0.26.0 (#133)
* implement manually disabling and enabling clients

* relock

* fix warning spam

* start moving stuff around

* move more stuff

* start separating world state manager into more manageable submodules

* character title

* scroll home to top always

* finish separating character state editor into components

* fix deferred nav to character sections

* separate components for pin and contextdb managing

* fix issue with context character filter search

* fix world state manage ux state reset issues

* wsm menu refactor
allow updating character image from wsm
cover image layout fixes

* remove debug spam

* fix client deletion / disabling rubber banding issue

* deactivate / activate / delete characters through wsm

* reload character instead

* fix koboldcpp client jiggle arguments

* save scene title

* fix deferred nav

* fix issue where blanking a character detail would bug out

* some layout changes

* character import copies cover image

* remove debug message

* character import via wsm

* deactivate imported characters

* images nav option placeholder

* start move towards new world state templating system

* prompt tweak

* add templates/world-state/*.yaml

* switch to new world state template system in manager

* template editor progress

* more wsm template changes

* template applicator component

* template applicate added to attributes and details

* selective template application

* fix issue with template editing

* attribute and detail templates dont require instructions

* adjust character attributes and details template applicator integration

* add gpt-4o

add gpt-4o-2024-05-13

* autocomplete prompt and postprocessing tweaks

* prompt tweaks

* fix issue where saving a new scene could cause recent config changes to revert

* only download punkt if its not downloaded yet

* working character attribute templates

* character detail generate working
move template generate logic to worldstate.templates

* character creator first steps

* support contextual generate when character doesn't exist

* move talemate wsm templates to their own dir, add supports_spice and supports_style flags

* wsm character creator progress

* character creator progress

* character creator progress and wire up image creation in character editor

* templating progress

* contextual generate generation options

* ux tweaks

* wire up writing style and spice to generation

* wire spice / writing style to detail generation

* notify when spice is applied

* tweaks to generation spice notifications

* add some help / information to template editor

* fix some issues with detail and attribute generation

* some context gen tweaks

* character gen tweaks

* character color changer

* link to templates form gen option ux

* gen options for dialogue example generate

* ctrl click to max spice level

* unify spice application notification into a component for reuse

* improvements to example dialogue generation

* some refinements to character editor

* remove some old cruft from scene schema

* wsm scene editor progress

* relock

* relock

* debug message cleanup

* fix issue with tab selection when loading a scene

* scene editor progress

* centralized generation options

* pass generation settings through to character creator

* save changes from wsm view

* scene settings
save copy

* refactor world entry / states editor

* fix issue with applying non-character world state templates

* layout fixes

* allow updating of scene cover image

* move history manager to world editor

* add phi-3 base template

* dialogue cleanup improvements

* refactor scoped game-engine api

* separate legacy creator functions to own file

* remove cruft

* some cleanup and fixes

* add photo style

* remove noisy log message

* better handling of active scene

* some fixes to pin editor

* don't enforce height

* active scene context fixes

* fix intro and scene description generation

* tweak preset for scene direction and summarization tasks

* ensure memory db is open

* update frontend dependencies

* update frontend dependencies

* fix issue with prompt query_memory function returning None

* typo

* default world state templates

* new scene creation fixes
remove legacy creator ux

* scene export

* fix scene loading from upload

* add claude 3.5 sonnet

* fix automatic client selection when the current client is disabled

* remove cruft

* agent modal extended to support multiple config panels
visual agent prompt prefixes and suffixes added

* fix issue with world state template group saving

* resolve attribute name issue `copy`

* RequestInput: fix form validation and keystroke submit

* support chara load from json files also refactor character loading to load.py

* implement simple act-as feature using tab to cycle through active characters in the scene

* docs progress

* tts settings tweaks

* fix issue with loading older talemate scenes

* docs progress

* fix issue with config validation on new installs

* some tweaks for agent setting modals

* default template changed to alpaca

* docs dependencies

* gemma2 template

* nemotron4 template

* docs

* docs

* docs

* change prompt template section to autocomplete

* fix agent config not loading for some agents

* allow deletion of player character

* fix some oddities with scene outline commit

* automatically activate player characters and create player characters with the correct actor class

* also set the first npc created as immediately active

* add has_active_npcs property and re-emit message history when scene outline is updated.

* indicate when visualizer is busy in the scene tools

* check for busy instead

* prompt tweaks for movie script type dialogue format

* gemma2 prompt fixed

* scene message colors updated

* act as narrator

* move to _old

* scene message appearance tweaks

* fix rubberbanding when editing text field in agent configs

* fix autocompletion when acting as different character or narrator

* disable autocomplete during command execution

* remove autocomplete button from scene tools

* docs

* relock

* docs

* docs

* improve context pins in dialogue context

* better approximate token count

* fix pin condition editing

* fix issue where scene save as would lose long term memory entries

* immediately clean message history when loading a new scene

* docs

* ensure intro text has formatting markers

* narrator messages written by the player can now be deleted.

* scene editor

* move docs around

* start character editor docs

* more character editor docs:

* fix some ux bugs

* fix template group deletion not removing the file

* docs

* typos

* docs

* relock

* docs

* notify image generation errors

* linting

* gh pages workflow

* use poetry

* dont use poetry

* link to docs site

* set site_url

* add trailing slash

* fix image paths

* re-add tabbyai link

* fix image generation error triggering incorrectly

* fix intro formatting inconsistencies

* remove cruft

* add time passed label to history view

* date adjustments

* tests

* add gpt-4o-mini

* fix links

* remove hard nltk requirement for voice generation chunking

nltk error handling

fix typo

* docs

* fix issue with dupe character card intro text

* disable character forms while templates are being applied.

* failure during context generate no longer locks ux

* refactor client and agent status display in system bar

* llama 3.1 8b claude

* fix format

* adjustments to autocomplete dialogue instructions

* add mistral nemo

* debug info

* fix system agent status getting stuck

* readme

* readme

* fix autocomplete responses when they are framed by quotes
2024-07-26 21:43:06 +03:00
veguAI
d09f9f8ac4 Update README.md 2024-05-31 13:23:40 +03:00
vegu-ai-tools
de16feeed5 0.25.6 prep 2024-05-31 13:10:23 +03:00
veguAI
cdcc804ffa 0.25.6 (#128)
* TabbyAPI Client Addition and presets refactoring (#126)

* feat: frequency_penalty (will make tabbyAPI custom wrapper)

* feat: add FREQUENCY_PENALTY_BASE and adj. conversation template

* feat: use `client_type` of `openai_compat` to send FIXED preset

* change from client name

* feat: pass client_type into presets.configure(...)

* wip: base TabbyAPI client

* feat: add import to register TabbyAPI client

* feat: adjust `presence_penalty` so it has a range of 0.1-0.5 (higher values will likely degrade performance)

* feat: add additional samplers/settings for TabbyAPI

* feat: keep presence_penalty in a range of 0.1-0.5

* feat: keep min_p in a range of 0.05 to 0.15

* update tabbyapi.py

* feat: add MIN_P_BASE and TEMP_LAST and change to tabbyapi client only for now

* fix: add /v1 as default API route to TabbyAPI

* feat: implement CustomAPIClient to allow all TabbyAPI parameters

* fix: change to "temperature_last" instead of "temp_last"

* feat: convert presets to dictionary mappings to make cleaner/more flexible

* fix: account for original substring/in statements and remove TabbyAPI client call

* fix: move down returning token values as it realistically should never be none, so substrings wouldn't be checked

* chore: remove automatic 'token' import due to IDE

---------

Co-authored-by: vegu-ai-tools <152010387+vegu-ai-tools@users.noreply.github.com>

* tabbyapi client auto-set model name
tabbyapi client use urljoin to prevent errors when user adds trailing slash

* expose presets to config and ux for editing

* some more help text

* tweak min, max and step size for some of the inference parameter sliders

* min_p step size to 0.01

* preset editor - allow reset to defaults

* fix preset reset

* don't persist inference_defaults to config file

* only persist presets to config if they have been changed

* ensure defaults are loaded

* rename config to parameters for more clarity

* update default inference params
textgenwebui support for min_p, frequency_penalty and presence_penalty

* overridable function to clean promp params

* add `supported_parameters` class property to clients and revisit all of the clients to add any missing supported parameters

* ux tweaks

* supported_parameters moved to property function

* top p decrease step size

* only show audio stop button if there is actually audio playing

* relock

* allow setting presence and frequency penalty to 0

* lower default frequency penalty

* frequency and presence penalty step size to 0.01

* set default model to gpt-4o

---------

Co-authored-by: official-elinas <57051565+official-elinas@users.noreply.github.com>
2024-05-31 13:07:57 +03:00
Ikko Eltociear Ashimine
9a2bbd78a4 docs: update README.md (#127)
apropriate -> appropriate
2024-05-25 10:07:20 +03:00
veguAI
ddfbd6891b 0.25.5 (#121)
* openai compat client to /completions instead of chat/completions
openai compat client pass frequency penalty

* 0.25.5

* fix version

* remove debug message

* fix openai compat client not saving coercion settings

* openai compatible client: API handles prompt template switches over to chat/completions api

* wording

* mistral std template

* fix error when setting llm prompt template if model name contained /

* lock sentence transformers to 2.2.2 since >=2.3.0 breaks instructor model loading

* support png tEXt

* openai compat client: fix repetition_penality KeyError issue

* presence_penalty is not equal to repetition_penalty and needs its own dedicated definition

* round presence penalty randomization to one decimal place

* fix filename

* same fixes for presence_penalty ported to koboldcpp client

* kcpp client: remove a1111 setup spam
kcpp client: fixes to presence_penalty jiggle

* mistral.ai: default model 8x22b
mistral.ai: 7b and 8x7b taken out of JSON_OBJECT_RESPONSE_MODELS
2024-05-24 18:17:55 +03:00
veguAI
143dd47e02 0.25.4 (#118)
* dont run npm install during container build

* fix const var issue when ALLOWED_HOSTS is anything but `all`

* ensure docker env sets NODE_ENV to development for now

* 0.25.4

* dont mount frontend volume by default
2024-05-18 16:22:57 +03:00
veguAI
cc7cb773d1 Update README.md 2024-05-18 12:31:32 +03:00
veguAI
02c88f75a1 0.25.3 (#113)
* add gpt-4o

add gpt-4o-2024-05-13

* fix koboldcpp client jiggle arguments

* kcpp api url default port 5001

* fix repetition breaking issues with kcpp client

* use tokencount endpoint if available

* auto configure visual agent with koboldcpp

* env var config for frontend serve

* it's not clear that gpt-4o is better than turbo, don't default to it yet

* 0.25.3

* handle kcpp being down during a1111 setup check

* only check a1111 setup if client is connected

* fix kcpp a1111 setup check

* fix issue where saving a new scene could cause recent config changes to revert
2024-05-15 00:31:36 +03:00
veguAI
419371e0fb Update README.md 2024-05-14 15:36:33 +03:00
veguAI
6e847bf283 Update README.md 2024-05-14 15:29:37 +03:00
veguAI
ceedd3019f Update README.md 2024-05-14 15:29:02 +03:00
veguAI
a28cf2a029 0.25.2 (#108)
* fix typo

* fix openai compat config save issue maybe

* fix api_handles_prompt_template no longer saving changes after last fix

* koboldcpp client

* default to kobold ai api

* linting

* conversation cleanup tweak

* 0.25.2

* allowed hosts to all on dev instance

* ensure numbers on parameters when sending edited values

* fix prompt parameter issues

* remove debug message
2024-05-10 21:29:29 +03:00
henk717
60cb271e30 List KoboldCpp as compatible (#104)
KoboldCpp is a great fit for TaleMate: it supports fast local generations across a variety of machines, including the cloud, and is compatible with both text and image gen through the OpenAI API and the A1111 API.
2024-05-10 00:22:57 +03:00
veguAI
1874234d2c Prep 0.25.1 (#103)
* remove auto client disable

* 0.25.1
2024-05-05 23:23:30 +03:00
veguAI
ef99539e69 Update README.md 2024-05-05 22:30:24 +03:00
veguAI
39bd02722d 0.25.0 (#100)
* flip title and name in recent scenes

* fix issue where a message could not be regenerated after applying continuity error fixes

* prompt tweaks

* allow json parameters for commands

* autocomplete improvements

* dialogue cleanup fixes

* fix issue with narrate after dialogue and llama3 (and other models that don't have a line break after the user prompt in their prompt template)

* expose ability to auto generate dialogue instructions to wsm character ux

* use b64_json response type

* move tag checks up so they match first

* fix typo

* prompt tweak

* api key support

* prompt tweaks

* editable parameters in prompt debugger / tester

* allow reseting of prompt params

* codemirror for prompt editor

* prompt tweaks

* more prompt debug tool tweaks

* some extra control for `context_history`

* new analytical preset (testing)

* add `join` and `llm_can_be_coerced` to jinja env

* support factual list summaries

* prompt tweaks to continuity check and fix

* new summarization method `facts` exposed to ux

* clamp mistral ai temperature according to their new requirements

* prompt tweaks

* better parsing of fixed dialogue response

* prompt tweaks

* fix intermittent empty meta issue

* history regen status progression and small ux tweaks

* summary entries should always be condensed

* google gemini support

* relock to install google-cloud-aiplatform for vertex ai inference

* fix instruction link

* better error handling of google safety validation and allow disabling of safety validation

* docs

* clarify credentials path requirements

* tweak error line identification

* handle quota limit error

* autocomplete ux wired to assistant plugin instead of command

* autocomplete narrative editing and fixes to autocomplete during dialog edit

* main input autocomplete tweaks

* allow new lines in main input

* 0.25.0 and relock

* fix issue with autocomplete elsewhere locking out main input

* better way to determine remote service

* prompt tweak

* fix rubberbanding issue when editing character attributes

* add open mistral 8x22

* fix continuity error check summary inclusion of target entry

* docs

* default context length to 8192

* linting
2024-05-05 22:16:03 +03:00
veguAI
f0b627b900 Update README.md 2024-04-27 00:46:39 +03:00
veguAI
95ae00e01f 0.24.0 (#97)
* groq client

* adjust max token length

* more openai image download fixes

* graphic novel style

* dialogue cleanup

* fix issue where auto-break repetition would trigger on empty responses

* reduce default convo retries to 1

* prompt tweaks

* fix some clients not handling autocomplete well

* screenplay dialogue generation tweaks

* message flags

* better cleanup of redundant change_ai_character calls

* super experimental continuity error fix mode for editor agent

* clamp temperature

* tweaks to continuity error fixing and expose to ux

* expose to ux

* allow CmdFixContinuityErrors to work even if editor has check_continuity_errors disabled

* prompt tweak

* support --endofline-- as well

* double coercion client option added

* fix issue with double coercion inserting "None" if not set

* client ux refactor to make room for coercion config

* rest of -- can be treated as *

* disable double coercion when json coercion is active since it kills accuracy

* prompt tweaks

* prompt tweaks

* show coercion status in client list

* change preset for edit_fix_continuity

* interim commit of continuity error handling progress

* tag based presets

* special tokens to keep trailing whitespace if needed

* fix continuity errors finalized for now

* change double coercion formatting

* 0.24.0 and relock

* add groq and cohere to supported services

* linting
2024-04-27 00:24:53 +03:00
veguAI
83027b3a0f 0.23.0 (#91)
* dockerfiles and docker-compose

* containerization fixes

* docker instructions

* readme

* readme

* dont mount src by default, readme

* hf template determine fixes

* auto determine prompt template

* script to start talemate listening only to 127.0.0.1

* prompt tweaks

* auto narrate round every 3 rounds

* tweaks

* Add return to startscreen button

* Only show return to start screen button if scene is active

* improvements to character creation

* dedicated property for scene title separate from the save directory name

* filter out negations into negative keywords

* increase auto narrate delay

* add character portrait keyword

* summarization should ignore most recent message, as it is often regenerated.

* cohere client

* specify python3

* improve viable runpod text gen detection

* fix formatting in template preview

* cohere command-r plus template that i am not sure is correct

* mistral client set to decensor

* fix issue with parsing json responses

* command-r prompts updated

* use official mistralai python client

* send max_tokens

* new input autocomplete functionality

* prompt tweaks

* llama 3 templates

* add <|eot_id|> to stopping strings

* prompt tweak

* tooltip

* llama-3 identifier

* command-r and command-r plus prompt identifiers

* text-gen-webui client tweaks to make llama3 eos tokens work correctly

* better llama-3 detection

* better llama-3 finalizing of parameters

* streamline client prompt finalizers
reduce YY model smoothing factor from 0.3 to 0.1 for text-generation-webui client

* relock

* linting

* set 0.23.0

* add new gpt-4 models

* set 0.23.0

* add note about conecting to text-gen-webui from docker

* fix openai image generation no longer working

* default to concept_art
2024-04-20 01:01:06 +03:00
veguAI
27eba3bd63 0.22.0 2024-03-29 21:41:45 +02:00
veguAI
ba64050eab 0.22.0 (#89)
* linux dev instance shortcuts

* add voice samples to gitignore

* direction mode: inner monologue

* actor direction fixes

* py script support for scene logic

* fix end_simulation call

* port sim suite logic to python

* remove dupe log

* fix typing

* section off the text

* fix end simulation command

* simulation goal, prompt tweaks

* prompt tweaks

* dialogue format improvements

* director action logged with message

* call director action log and other fixes

* generate character dialogue instructions, prompt fixes, director action ux

* fix question / answer call

* generate dialogue instructions when loading from character cards

* more dialogue format improvements

* set scene content context more reliably.

* fix innermonologue perspective

* conversation prompt should honor the client's decensor setting

* fix comfyui checkpoint list not loading

* more dialogue format fixes

* prompt tweaks

* fix sim suite group characters, prompt fixes

* npm relock

* handle inanimate objects, handle player name change issues

* don't rename details if the original name was "You"

* As the conversation goes on, dialogue instructions should be moved further back so they have a weaker effect on immediate generations.

* add more context to character creation prompt

* fix select next talking actor when natural language flow is turned on and the LLM returns multiple character names

* prompt fixes for dialogue generation

* summarization fixes

* default to script format

* separate dialogue prompt by formatting style, tweak conversation system prompt

* remove cruft

* add gen format to agent details

* relock

* relock

* prep 0.22.0

* add claude-3-haiku-20240307

* readme
2024-03-29 21:37:28 +02:00
veguAI
199ffd1095 Update README.md 2024-03-17 01:09:59 +02:00
veguAI
88b9fcb8bb Update README.md 2024-03-11 00:42:42 +02:00
vegu-ai-tools
2f5944bc09 remove unnecessary link 2024-03-10 18:05:33 +02:00
veguAI
abdfb1abbf WIP: Prep 0.21.0 (#83)
* cleanup

* refactor clean_dialogue

* prompt fixes

* prompt fixes

* conversation format types - movie script and chat (legacy)

* stopping strings updated

* mistral.ai client

* prompt tweaks

* mistral client return token counts

* anthropic client

* archive history emits whole object so we can inspect timestamps

* show timestamp in history dialog

* openai compat fixes to stop trying to coerce openai url path schema and to never attempt to retrieve the model name automatically, hopefully improving compatibility with the various openai api implementations across the board

* openai compat client let api control prompt template via config option

* fix custom client configs and implement max backscroll

* fix backscroll limit

* remove debug message

* prep 0.21.0

* include model name in prompt template selection label

* use tabs for side nav in app config modal

* readme / docs

* fix issue where "No API key set" could be persisted as the selected model name to the config

* deepinfra example

* linting
2024-03-10 18:03:12 +02:00
735 changed files with 44147 additions and 21386 deletions

30
.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,30 @@
name: ci
on:
  push:
    branches:
      - master
      - main
      - prep-0.26.0
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure Git Credentials
        run: |
          git config user.name github-actions[bot]
          git config user.email 41898282+github-actions[bot]@users.noreply.github.com
      - uses: actions/setup-python@v5
        with:
          python-version: 3.x
      - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
      - uses: actions/cache@v4
        with:
          key: mkdocs-material-${{ env.cache_id }}
          path: .cache
          restore-keys: |
            mkdocs-material-
      - run: pip install mkdocs-material mkdocs-awesome-pages-plugin mkdocs-glightbox
      - run: mkdocs gh-deploy --force
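
The workflow only deploys on pushes to the listed branches. For checking the docs before pushing, a minimal local-preview sketch using the same packages the job installs (`mkdocs build --strict` and `mkdocs serve` are standard mkdocs commands, assumed here rather than taken from the workflow):

```bash
# install the same plugins the CI job installs
pip install mkdocs-material mkdocs-awesome-pages-plugin mkdocs-glightbox

# build strictly to surface broken links/nav, then serve with live reload
mkdocs build --strict
mkdocs serve  # defaults to http://127.0.0.1:8000
```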

2
.gitignore vendored

@@ -9,6 +9,7 @@ talemate_env
chroma
config.yaml
templates/llm-prompt/user/*.jinja2
templates/world-state/*.yaml
scenes/
!scenes/infinity-quest-dynamic-scenario/
!scenes/infinity-quest-dynamic-scenario/assets/
@@ -16,3 +17,4 @@ scenes/
!scenes/infinity-quest-dynamic-scenario/infinity-quest.json
!scenes/infinity-quest/assets/
!scenes/infinity-quest/infinity-quest.json
tts_voice_samples/*.wav

86
Dockerfile Normal file

@@ -0,0 +1,86 @@
# Stage 1: Frontend build
FROM node:21 AS frontend-build
ENV NODE_ENV=development
WORKDIR /app

# Copy the frontend directory contents into the container at /app
COPY ./talemate_frontend /app

# Install all dependencies and build
RUN npm install && npm run build

# Stage 2: Backend build
FROM python:3.11-slim AS backend-build
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    bash \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Install poetry
RUN pip install poetry

# Copy poetry files
COPY pyproject.toml poetry.lock* /app/

# Create a virtual environment
RUN python -m venv /app/talemate_env

# Activate virtual environment and install dependencies
RUN . /app/talemate_env/bin/activate && \
    poetry config virtualenvs.create false && \
    poetry install --only main --no-root

# Copy the Python source code
COPY ./src /app/src

# Conditional PyTorch+CUDA install
ARG CUDA_AVAILABLE=false
RUN . /app/talemate_env/bin/activate && \
    if [ "$CUDA_AVAILABLE" = "true" ]; then \
        echo "Installing PyTorch with CUDA support..." && \
        pip uninstall torch torchaudio -y && \
        pip install torch~=2.4.1 torchaudio~=2.4.1 --index-url https://download.pytorch.org/whl/cu121; \
    fi

# Stage 3: Final image
FROM python:3.11-slim
WORKDIR /app
RUN apt-get update && apt-get install -y \
    bash \
    && rm -rf /var/lib/apt/lists/*

# Copy virtual environment from backend-build stage
COPY --from=backend-build /app/talemate_env /app/talemate_env

# Copy Python source code
COPY --from=backend-build /app/src /app/src

# Copy Node.js build artifacts from frontend-build stage
COPY --from=frontend-build /app/dist /app/talemate_frontend/dist

# Copy the frontend WSGI file if it exists
COPY frontend_wsgi.py /app/frontend_wsgi.py

# Copy base config
COPY config.example.yaml /app/config.yaml

# Copy essentials
COPY scenes templates chroma* /app/

# Set PYTHONPATH to include the src directory
ENV PYTHONPATH=/app/src:$PYTHONPATH

# Make ports available to the world outside this container
EXPOSE 5050
EXPOSE 8080

# Use bash as the shell, activate the virtual environment, and run backend server
CMD ["/bin/bash", "-c", "source /app/talemate_env/bin/activate && python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050 --frontend-host 0.0.0.0 --frontend-port 8080"]

213
README.md

@@ -2,194 +2,43 @@
Roleplay with AI with a focus on strong narration and consistent world and game state tracking.
|![Screenshot 3](docs/img/0.17.0/ss-1.png)|![Screenshot 3](docs/img/0.17.0/ss-2.png)|
|![Screenshot 3](docs/img/ss-1.png)|![Screenshot 3](docs/img/ss-2.png)|
|------------------------------------------|------------------------------------------|
|![Screenshot 4](docs/img/0.17.0/ss-4.png)|![Screenshot 1](docs/img/0.19.0/Screenshot_15.png)|
|![Screenshot 2](docs/img/0.19.0/Screenshot_16.png)|![Screenshot 3](docs/img/0.19.0/Screenshot_17.png)|
|![Screenshot 4](docs/img/ss-4.png)|![Screenshot 1](docs/img/Screenshot_15.png)|
|![Screenshot 2](docs/img/Screenshot_16.png)|![Screenshot 3](docs/img/Screenshot_17.png)|
> :warning: **It does not run any large language models itself but relies on existing APIs. Currently supports OpenAI, text-generation-webui and LMStudio. 0.18.0 also adds support for generic OpenAI api implementations, but generation quality on that will vary.**
## Core Features
This means you need to either have:
- an [OpenAI](https://platform.openai.com/overview) api key
- setup local (or remote via runpod) LLM inference via:
- [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [LMStudio](https://lmstudio.ai/)
- Any other OpenAI api implementation that implements the v1/completions endpoint
- tested llamacpp with the `api_like_OAI.py` wrapper
- let me know if you have tested any other implementations and they failed / worked or landed somewhere in between
- Multiple agents for dialogue, narration, summarization, direction, editing, world state management, character/scenario creation, text-to-speech, and visual generation
- Supports per agent API selection
- Long-term memory and passage of time tracking
- Narrative world state management to reinforce character and world truths
- Creative tools for managing NPCs, AI-assisted character, and scenario creation with template support
- Context management for character details, world information, past events, and pinned information
- Customizable templates for all prompts using Jinja2
- Modern, responsive UI
## Current features
## Documentation
- responsive modern ui
- agents
- conversation: handles character dialogue
- narration: handles narrative exposition
- summarization: handles summarization to compress context while maintaining history
- director: can be used to direct the story / characters
- editor: improves AI responses (very hit and miss at the moment)
- world state: generates world snapshot and handles passage of time (objects and characters)
- creator: character / scenario creator
- tts: text to speech via elevenlabs, OpenAI or local tts
- visual: stable-diffusion client for in place visual generation via AUTOMATIC1111, ComfyUI or OpenAI
- multi-client support (agents can be connected to separate APIs)
- long term memory
- chromadb integration
- passage of time
- narrative world state
- Automatically keep track and reinforce selected character and world truths / states.
- narrative tools
- creative tools
- manage multiple NPCs
- AI backed character creation with template support (jinja2)
- AI backed scenario creation
- context management
- Manage character details and attributes
- Manage world information / past events
- Pin important information to the context (Manually or conditionally through AI)
- runpod integration
- overridable templates for all prompts. (jinja2)
- [Installation and Getting started](https://vegu-ai.github.io/talemate/)
- [User Guide](https://vegu-ai.github.io/talemate/user-guide/interacting/)
## Planned features
## Supported APIs
Kinda making it up as i go along, but i want to lean more into gameplay through AI, keeping track of gamestates, moving away from simply roleplaying towards a more game-ified experience.
- [OpenAI](https://platform.openai.com/overview)
- [Anthropic](https://www.anthropic.com/)
- [mistral.ai](https://mistral.ai/)
- [Cohere](https://www.cohere.com/)
- [Groq](https://www.groq.com/)
- [Google Gemini](https://console.cloud.google.com/)
In no particular order:
Supported self-hosted APIs:
- [KoboldCpp](https://koboldai.org/cpp) ([Local](https://koboldai.org/cpp), [Runpod](https://koboldai.org/runpodcpp), [VastAI](https://koboldai.org/vastcpp), also includes image gen support)
- [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (local or with runpod support)
- [LMStudio](https://lmstudio.ai/)
- [TabbyAPI](https://github.com/theroyallab/tabbyAPI/)
- Extension support
- modular agents and clients
- Improved world state
- Dynamic player choice generation
- Better creative tools
- node based scenario / character creation
- Improved and consistent long term memory and accurate current state of the world
- Improved director agent
- Right now this doesn't really work well on anything but GPT-4 (and even there it's debatable). It tends to steer the story in a way that introduces pacing issues. It needs a model that is creative but also reasons really well i think.
- Gameplay loop governed by AI
- objectives
- quests
- win / lose conditions
# Instructions
Please read the documents in the `docs` folder for more advanced configuration and usage.
- [Quickstart](#quickstart)
- [Installation](#installation)
- [Connecting to an LLM](#connecting-to-an-llm)
- [Text-generation-webui](#text-generation-webui)
- [Recommended Models](#recommended-models)
- [OpenAI](#openai)
- [Ready to go](#ready-to-go)
- [Load the introductory scenario "Infinity Quest"](#load-the-introductory-scenario-infinity-quest)
- [Loading character cards](#loading-character-cards)
- [Text-to-Speech (TTS)](docs/tts.md)
- [Visual Generation](docs/visual.md)
- [ChromaDB (long term memory) configuration](docs/chromadb.md)
- [Runpod Integration](docs/runpod.md)
- [Prompt template overrides](docs/templates.md)
# Quickstart
## Installation
Post [here](https://github.com/vegu-ai/talemate/issues/17) if you run into problems during installation.
There is also a [troubleshooting guide](docs/troubleshoot.md) that might help.
### Windows
1. Download and install Python 3.10 or Python 3.11 from the [official Python website](https://www.python.org/downloads/windows/). :warning: python3.12 is currently not supported.
1. Download and install Node.js v20 from the [official Node.js website](https://nodejs.org/en/download/). This will also install npm. :warning: v21 is currently not supported.
1. Download the Talemate project to your local machine. Download from [the Releases page](https://github.com/vegu-ai/talemate/releases).
1. Unpack the download and run `install.bat` by double clicking it. This will set up the project on your local machine.
1. Once the installation is complete, you can start the backend and frontend servers by running `start.bat`.
1. Navigate your browser to http://localhost:8080
### Linux
`python 3.10` or `python 3.11` is required. :warning: `python 3.12` not supported yet.
`nodejs v19 or v20` :warning: `v21` not supported yet.
1. `git clone git@github.com:vegu-ai/talemate`
1. `cd talemate`
1. `source install.sh`
1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050`.
1. Open a new terminal, navigate to the `talemate_frontend` directory, and start the frontend server by running `npm run serve`.
## Connecting to an LLM
On the right hand side click the "Add Client" button. If there is no button, you may need to toggle the client options by clicking this button:
![Client options](docs/img/client-options-toggle.png)
### Text-generation-webui
> :warning: As of version 0.13.0 the legacy text-generator-webui API `--extension api` is no longer supported, please use their new `--extension openai` api implementation instead.
In the modal if you're planning to connect to text-generation-webui, you can likely leave everything as is and just click Save.
![Add client modal](docs/img/client-setup-0.13.png)
#### Recommended Models
As of 2024.02.06 my personal regular drivers (the ones i test with) are:
- Kunoichi-7B
- sparsetral-16x7B
- Nous-Hermes-2-SOLAR-10.7B
- brucethemoose_Yi-34B-200K-RPMerge
- dolphin-2.7-mixtral-8x7b
- Mixtral-8x7B-instruct
- GPT-3.5-turbo 0125
- GPT-4-turbo 0116
That said, any of the top models in any of the size classes here should work well (i wouldn't recommend going lower than 7B):
https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/
### OpenAI
If you want to add an OpenAI client, just change the client type and select the apropriate model.
![Add client modal](docs/img/add-client-modal-openai.png)
If you are setting this up for the first time, you should now see the client, but it will have a red dot next to it, stating that it requires an API key.
![OpenAI API Key missing](docs/img/0.18.0/openai-api-key-1.png)
Click the `SET API KEY` button. This will open a modal where you can enter your API key.
![OpenAI API Key missing](docs/img/0.18.0/openai-api-key-2.png)
Click `Save` and after a moment the client should have a green dot next to it, indicating that it is ready to go.
![OpenAI API Key set](docs/img/0.18.0/openai-api-key-3.png)
## Ready to go
You will know you are good to go when the client and all the agents have a green dot next to them.
![Ready to go](docs/img/client-setup-complete.png)
## Load the introductory scenario "Infinity Quest"
Generated using talemate creative tools, mostly used for testing / demoing.
You can load it (and any other talemate scenarios or save files) by expanding the "Load" menu in the top left corner and selecting the middle tab. Then simply search for a partial name of the scenario you want to load and click on the result.
![Load scenario location](docs/img/load-scene-location.png)
## Loading character cards
Supports both v1 and v2 chara specs.
Expand the "Load" menu in the top left corner and either click on "Upload a character card" or simply drag and drop a character card file into the same area.
![Load character card location](docs/img/load-card-location.png)
Once a character is uploaded, talemate may actually take a moment because it needs to convert it to a talemate format and will also run additional LLM prompts to generate character attributes and world state.
Make sure you save the scene after the character is loaded as it can then be loaded as normal talemate scenario in the future.
Generic OpenAI api implementations (tested and confirmed working):
- [DeepInfra](https://deepinfra.com/)
- [llamacpp](https://github.com/ggerganov/llama.cpp) with the `api_like_OAI.py` wrapper
- let me know if you have tested any other implementations and they failed / worked or landed somewhere in between

config.example.yaml

@@ -7,40 +7,6 @@ creator:
  - a thrilling action story
  - a mysterious adventure
  - an epic sci-fi adventure
game:
  world_state:
    templates:
      state_reinforcement:
        Goals:
          auto_create: false
          description: Long term and short term goals
          favorite: true
          insert: conversation-context
          instructions: Create a long term goal and two short term goals for {character_name}. Your response must only be the long terms and two short term goals.
          interval: 20
          name: Goals
          query: Goals
          state_type: npc
        Physical Health:
          auto_create: false
          description: Keep track of health.
          favorite: true
          insert: sequential
          instructions: ''
          interval: 10
          name: Physical Health
          query: What is {character_name}'s current physical health status?
          state_type: character
        Time of day:
          auto_create: false
          description: Track night / day cycle
          favorite: true
          insert: sequential
          instructions: ''
          interval: 10
          name: Time of day
          query: What is the current time of day?
          state_type: world
## Long-term memory

21
docker-compose.yml Normal file
View File

@@ -0,0 +1,21 @@
version: '3.8'
services:
  talemate:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - CUDA_AVAILABLE=${CUDA_AVAILABLE:-false}
    ports:
      - "${FRONTEND_PORT:-8080}:8080"
      - "${BACKEND_PORT:-5050}:5050"
    volumes:
      - ./config.yaml:/app/config.yaml
      - ./scenes:/app/scenes
      - ./templates:/app/templates
      - ./chroma:/app/chroma
    environment:
      - PYTHONUNBUFFERED=1
      - PYTHONPATH=/app/src:$PYTHONPATH
    command: ["/bin/bash", "-c", "source /app/talemate_env/bin/activate && python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050 --frontend-host 0.0.0.0 --frontend-port 8080"]

5
docs/.pages Normal file

@@ -0,0 +1,5 @@
nav:
  - Home: index.md
  - Getting started: getting-started
  - User guide: user-guide
  - Developer guide: dev

View File

@@ -1,62 +0,0 @@
# ChromaDB
Talemate uses ChromaDB to maintain long-term memory. The default embeddings are really fast but not incredibly accurate. If you want more accurate embeddings, you can use the instructor embeddings or the OpenAI embeddings; see below for instructions on how to enable them.
In my testing so far, instructor-xl has proved to be the most accurate (even more so than OpenAI).
## Local instructor embeddings
If you want ChromaDB to use the more accurate (but much slower) instructor embeddings, add the following to `config.yaml`:
**Note**: The `xl` model takes a while to load even with cuda. Expect a minute of loading time on the first scene you load.
```yaml
chromadb:
  embeddings: instructor
  instructor_device: cpu
  instructor_model: hkunlp/instructor-xl
```
### Instructor embedding models
- `hkunlp/instructor-base` (smallest / fastest)
- `hkunlp/instructor-large`
- `hkunlp/instructor-xl` (largest / slowest) - requires about 5GB of memory
You will need to restart the backend for this change to take effect.
**NOTE** - The first time you do this, it will need to download the instructor model you selected. This may take a while, and the Talemate backend will be unresponsive during that time.
Once the download is finished, if Talemate is still unresponsive, try reloading the frontend to reconnect. If all else fails, restart the backend as well. I'll try to make this more robust in the future.
### GPU support
If you want to use the instructor embeddings with GPU support, you will need to install pytorch with CUDA support.
To do this on Windows, run `install-pytorch-cuda.bat` from the project directory. Then change your device in the config to `cuda`:
```yaml
chromadb:
  embeddings: instructor
  instructor_device: cuda
  instructor_model: hkunlp/instructor-xl
```
## OpenAI embeddings
First, make sure your OpenAI key is specified in the `config.yaml` file:
```yaml
openai:
  api_key: <your-key-here>
```
Then add the following to `config.yaml` for chromadb:
```yaml
chromadb:
  embeddings: openai
  openai_model: text-embedding-3-small
```
**Note**: As with everything OpenAI, using this isn't free, though it's way cheaper than their text completions. Also, if you send very explicit content they may flag or ban your key, so keep that in mind (I hear they usually send warnings first), and always monitor your usage on their dashboard.
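ChromaDB makes these embedding calls for you, but if you want to verify your key and model name independently first, a quick standalone check (assuming the `openai` Python package, v1+) looks like this:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.embeddings.create(
    model="text-embedding-3-small",
    input="a short test sentence",
)
print(len(resp.data[0].embedding))  # 1536 dimensions for this model
```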

166
docs/cleanup.py Normal file
View File

@@ -0,0 +1,166 @@
import os
import re
import subprocess
import argparse


def find_image_references(md_file):
    """Find all image references in a markdown file."""
    with open(md_file, 'r', encoding='utf-8') as f:
        content = f.read()
    pattern = r'!\[.*?\]\((.*?)\)'
    matches = re.findall(pattern, content)
    cleaned_paths = []
    for match in matches:
        path = match.lstrip('/')
        if 'img/' in path:
            path = path[path.index('img/') + 4:]
        # Only keep references to versioned images
        parts = os.path.normpath(path).split(os.sep)
        if len(parts) >= 2 and parts[0].replace('.', '').isdigit():
            cleaned_paths.append(path)
    return cleaned_paths


def scan_markdown_files(docs_dir):
    """Recursively scan all markdown files in the docs directory."""
    md_files = []
    for root, _, files in os.walk(docs_dir):
        for file in files:
            if file.endswith('.md'):
                md_files.append(os.path.join(root, file))
    return md_files


def find_all_images(img_dir):
    """Find all image files in version subdirectories."""
    image_files = []
    for root, _, files in os.walk(img_dir):
        # Get the relative path from img_dir to current directory
        rel_dir = os.path.relpath(root, img_dir)
        # Skip if we're in the root img directory
        if rel_dir == '.':
            continue
        # Check if the immediate parent directory is a version number
        parent_dir = rel_dir.split(os.sep)[0]
        if not parent_dir.replace('.', '').isdigit():
            continue
        for file in files:
            if file.lower().endswith(('.png', '.jpg', '.jpeg', '.gif', '.svg')):
                rel_path = os.path.relpath(os.path.join(root, file), img_dir)
                image_files.append(rel_path)
    return image_files


def grep_check_image(docs_dir, image_path):
    """
    Check if versioned image is referenced anywhere using grep.
    Returns True if any reference is found, False otherwise.
    """
    try:
        # Split the image path to get version and filename
        parts = os.path.normpath(image_path).split(os.sep)
        version = parts[0]  # e.g., "0.29.0"
        filename = parts[-1]  # e.g., "world-state-suggestions-2.png"
        # For versioned images, require both version and filename to match
        version_pattern = f"{version}.*{filename}"
        try:
            result = subprocess.run(
                ['grep', '-r', '-l', version_pattern, docs_dir],
                capture_output=True,
                text=True
            )
            if result.stdout.strip():
                print(f"Found reference to {image_path} with version pattern: {version_pattern}")
                return True
        except subprocess.CalledProcessError:
            pass
    except Exception as e:
        print(f"Error during grep check for {image_path}: {e}")
    return False


def main():
    parser = argparse.ArgumentParser(description='Find and optionally delete unused versioned images in MkDocs project')
    parser.add_argument('--docs-dir', type=str, required=True, help='Path to the docs directory')
    parser.add_argument('--img-dir', type=str, required=True, help='Path to the images directory')
    parser.add_argument('--delete', action='store_true', help='Delete unused images')
    parser.add_argument('--verbose', action='store_true', help='Show all found references and files')
    parser.add_argument('--skip-grep', action='store_true', help='Skip the additional grep validation')
    args = parser.parse_args()

    # Convert paths to absolute paths
    docs_dir = os.path.abspath(args.docs_dir)
    img_dir = os.path.abspath(args.img_dir)
    print(f"Scanning markdown files in: {docs_dir}")
    print(f"Looking for versioned images in: {img_dir}")

    # Get all markdown files
    md_files = scan_markdown_files(docs_dir)
    print(f"Found {len(md_files)} markdown files")

    # Collect all image references
    used_images = set()
    for md_file in md_files:
        refs = find_image_references(md_file)
        used_images.update(refs)

    # Get all actual images (only from version directories)
    all_images = set(find_all_images(img_dir))

    if args.verbose:
        print("\nAll versioned image references found in markdown:")
        for img in sorted(used_images):
            print(f"- {img}")
        print("\nAll versioned images in directory:")
        for img in sorted(all_images):
            print(f"- {img}")

    # Find potentially unused images
    unused_images = all_images - used_images

    # Additional grep validation if not skipped
    if not args.skip_grep and unused_images:
        print("\nPerforming additional grep validation...")
        actually_unused = set()
        for img in unused_images:
            if not grep_check_image(docs_dir, img):
                actually_unused.add(img)
        if len(actually_unused) != len(unused_images):
            print(f"\nGrep validation found {len(unused_images) - len(actually_unused)} additional image references!")
        unused_images = actually_unused

    # Report findings
    print("\nResults:")
    print(f"Total versioned images found: {len(all_images)}")
    print(f"Versioned images referenced in markdown: {len(used_images)}")
    print(f"Unused versioned images: {len(unused_images)}")

    if unused_images:
        print("\nUnused versioned images:")
        for img in sorted(unused_images):
            print(f"- {img}")
        if args.delete:
            print("\nDeleting unused versioned images...")
            for img in unused_images:
                full_path = os.path.join(img_dir, img)
                try:
                    os.remove(full_path)
                    print(f"Deleted: {img}")
                except Exception as e:
                    print(f"Error deleting {img}: {e}")
            print("\nDeletion complete")
    else:
        print("\nNo unused versioned images found!")


if __name__ == "__main__":
    main()
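To preview what the script would remove, run it with `--verbose` and without `--delete` first; add `--delete` only once the report looks right, and `--skip-grep` if you want to skip the slower second validation pass.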

3
docs/dev/index.md Normal file
View File

@@ -0,0 +1,3 @@
# Coming soon
Developer documentation is coming soon. Stay tuned!

View File

@@ -1,4 +1,7 @@
# Template Overrides in Talemate
# Template Overrides
!!! warning "Old documentation"
    This is old documentation and needs to be updated; however, it may still contain useful information.
## Introduction to Templates
@@ -23,9 +26,9 @@ The creator agent templates allow for the creation of new characters within the
### Example Templates
- [Character Attributes Human Template](src/talemate/prompts/templates/creator/character-attributes-human.jinja2)
- [Character Details Human Template](src/talemate/prompts/templates/creator/character-details-human.jinja2)
- [Character Example Dialogue Human Template](src/talemate/prompts/templates/creator/character-example-dialogue-human.jinja2)
- `src/talemate/prompts/templates/creator/character-attributes-human.jinja2`
- `src/talemate/prompts/templates/creator/character-details-human.jinja2`
- `src/talemate/prompts/templates/creator/character-example-dialogue-human.jinja2`
These example templates can serve as a guide for users to create their own custom templates for the character creator.

View File

@@ -0,0 +1,14 @@
## Third Party API docs
### Chat completions
- [Anthropic](https://docs.anthropic.com/en/api/messages)
- [Cohere](https://docs.cohere.com/reference/chat)
- [Google AI](https://ai.google.dev/api/generate-content#v1beta.GenerationConfig)
- [Groq](https://console.groq.com/docs/api-reference#chat-create)
- [KoboldCpp](https://lite.koboldai.net/koboldcpp_api#/api/v1)
- [LMStudio](https://lmstudio.ai/docs/api/rest-api)
- [Mistral AI](https://docs.mistral.ai/api/)
- [OpenAI](https://platform.openai.com/docs/api-reference/completions)
- [TabbyAPI](https://theroyallab.github.io/tabbyAPI/#operation/chat_completion_request_v1_chat_completions_post)
- [Text-Generation-WebUI](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/openai/typing.py)

View File

@@ -0,0 +1,5 @@
nav:
  - 1. Installation: installation
  - 2. Connect a client: connect-a-client.md
  - 3. Load a scene: load-a-scene.md
  - ...

View File

@@ -0,0 +1,3 @@
nav:
  - change-host-and-port.md
  - ...

View File

@@ -0,0 +1,102 @@
# Changing host and port
## Backend
By default, the backend listens on `localhost:5050`.
To run the server on a different host and port, change the values passed to the `--host` and `--port` parameters during startup, and make sure the frontend knows the new values.
### Changing the host and port for the backend
#### :material-linux: Linux
Copy `start.sh` to `start_custom.sh` and edit the `--host` and `--port` parameters in the server startup command.
```bash
#!/bin/sh
. talemate_env/bin/activate
python src/talemate/server/run.py runserver --host 0.0.0.0 --port 1234
```
#### :material-microsoft-windows: Windows
Copy `start.bat` to `start_custom.bat` and edit the `--host` and `--port` parameters in the server startup command.
```batch
start cmd /k "cd talemate_env\Scripts && activate && cd ../../ && python src\talemate\server\run.py runserver --host 0.0.0.0 --port 1234"
```
### Letting the frontend know about the new host and port
Copy `talemate_frontend/example.env.development.local` to `talemate_frontend/.env.production.local` and edit `VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL` so that it points at the backend host and port you chose above.
```env
VUE_APP_TALEMATE_BACKEND_WEBSOCKET_URL=ws://localhost:1234
```
Next, rebuild the frontend.
```bash
cd talemate_frontend
npm run build
```
### Start the backend and frontend
Start the backend and frontend as usual.
#### :material-linux: Linux
```bash
./start_custom.sh
```
#### :material-microsoft-windows: Windows
```batch
start_custom.bat
```
## Frontend
By default, the frontend listens on `localhost:8080`.
To change the frontend host and port, you need to change the values passed to the `--frontend-host` and `--frontend-port` parameters during startup.
### Changing the host and port for the frontend
#### :material-linux: Linux
Copy `start.sh` to `start_custom.sh` and edit the `--frontend-host` and `--frontend-port` parameters.
```bash
#!/bin/sh
. talemate_env/bin/activate
python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5055 \
--frontend-host localhost --frontend-port 8082
```
#### :material-microsoft-windows: Windows
Copy `start.bat` to `start_custom.bat` and edit the `--frontend-host` and `--frontend-port` parameters.
```batch
start cmd /k "cd talemate_env\Scripts && activate && cd ../../ && python src\talemate\server\run.py runserver --host 0.0.0.0 --port 5055 --frontend-host localhost --frontend-port 8082"
```
### Start the backend and frontend
Start the backend and frontend as usual.
#### :material-linux: Linux
```bash
./start_custom.sh
```
#### :material-microsoft-windows: Windows
```batch
start_custom.bat
```

View File

@@ -0,0 +1,68 @@
# Connect a client
Once Talemate is up and running and you are connected, you will see a notification in the corner instructing you to configure a client.
![no clients](/talemate/img/0.26.0/no-clients.png)
Talemate uses client(s) to connect to local or remote AI text generation APIs like koboldcpp, text-generation-webui or OpenAI.
## Add a new client
On the right hand side click the **:material-plus-box: ADD CLIENT** button.
![connect a client add client](/talemate/img/0.26.0/connect-a-client-add-client.png)
!!! note "No button?"
If there is no button, you may need to toggle the client options by clicking this button
![open clients](/talemate/img/0.26.0/open-clients.png)
The client configuration window will appear. Here you can choose the type of client you want to add.
![connect a client add client modal](/talemate/img/0.26.0/connect-a-client-add-client-modal.png)
## Choose an API / Client Type
We have support for multiple local and remote APIs. You can choose to use one or more of them.
!!! note "Local vs remote"
A local API runs on your machine, while a remote API runs on a server somewhere else.
Select the API you want to use and click through to follow the instructions to configure a client for it:
##### Remote APIs
- [OpenAI](/talemate/user-guide/clients/types/openai/)
- [Anthropic](/talemate/user-guide/clients/types/anthropic/)
- [mistral.ai](/talemate/user-guide/clients/types/mistral/)
- [Cohere](/talemate/user-guide/clients/types/cohere/)
- [Groq](/talemate/user-guide/clients/types/groq/)
- [Google Gemini](/talemate/user-guide/clients/types/google/)
##### Local APIs
- [KoboldCpp](/talemate/user-guide/clients/types/koboldcpp/)
- [Text-Generation-WebUI](/talemate/user-guide/clients/types/text-generation-webui/)
- [LMStudio](/talemate/user-guide/clients/types/lmstudio/)
- [TabbyAPI](/talemate/user-guide/clients/types/tabbyapi/)
##### Unofficial OpenAI API implementations
- [DeepInfra](/talemate/user-guide/clients/types/openai-compatible/#deepinfra)
- llamacpp with the `api_like_OAI.py` wrapper
## Assign the client to the agents
When you add your first client, Talemate will automatically assign it to all agents. Once the client is configured and assigned, all agents should have a green dot next to them (or grey if the agent is currently disabled).
![Connect a client assigned](/talemate/img/0.26.0/connect-a-client-ready.png)
You can tell the client is assigned to the agent by checking the tag beneath the agent name, which will contain the client name if it is assigned.
![Agent has client assigned](/talemate/img/0.26.0/agent-has-client-assigned.png)
## It's not assigned!
If for some reason the client is not assigned to the agent, you can manually assign it to all agents by clicking the **:material-transit-connection-variant: Assign to all agents** button.
![Connect a client assign to all agents](/talemate/img/0.26.0/connect-a-client-assign-to-all-agents.png)

View File

@@ -0,0 +1,5 @@
nav:
- windows.md
- linux.md
- docker.md
- ...

View File

@@ -0,0 +1,22 @@
!!! example "Experimental"
    Talemate through Docker has not received a lot of testing from me, so please let me know if you encounter any issues.
    You can do so by creating an issue on the [:material-github: GitHub repository](https://github.com/vegu-ai/talemate).
## Quick install instructions
1. `git clone https://github.com/vegu-ai/talemate.git`
1. `cd talemate`
1. Copy the config file
    1. Linux: `cp config.example.yaml config.yaml`
    1. Windows: `copy config.example.yaml config.yaml`
1. If your host has a CUDA compatible Nvidia GPU
    1. Windows (via PowerShell): `$env:CUDA_AVAILABLE="true"; docker compose up`
    1. Linux: `CUDA_AVAILABLE=true docker compose up`
1. If your host does **NOT** have a CUDA compatible Nvidia GPU
    1. Windows: `docker compose up`
    1. Linux: `docker compose up`
1. Navigate your browser to http://localhost:8080
!!! note
    When connecting to local APIs running on the host machine (e.g. text-generation-webui), you need to use `host.docker.internal` as the hostname.

View File

@@ -1,3 +1,27 @@
## Quick install instructions
!!! warning
    Python 3.12 is currently not supported.
### Dependencies
1. Node.js and npm - see instructions [here](https://nodejs.org/en/download/package-manager/)
1. Python 3.10 or 3.11 - see instructions [here](https://www.python.org/downloads/)
### Installation
1. `git clone https://github.com/vegu-ai/talemate.git`
1. `cd talemate`
1. `source install.sh`
    - When asked if you want to install PyTorch with CUDA support, choose `y` if you have a CUDA compatible Nvidia GPU and have installed the necessary drivers.
1. `source start.sh`
If everything went well, you can proceed to [connect a client](../../connect-a-client).
## Additional Information
### Setting Up a Virtual Environment
1. Open a terminal.

View File

@@ -0,0 +1,28 @@
# Common issues
## Windows
### Installation fails with "Microsoft Visual C++" error
If your installation fails with a notification to upgrade "Microsoft Visual C++", go to https://visualstudio.microsoft.com/visual-cpp-build-tools/, click "Download Build Tools", and run it.
- During installation, make sure you select the C++ development package (upper left corner)
- Run `reinstall.bat` inside talemate directory
## Docker
### Docker has created `config.yaml` directory
If you do not copy the example config to `config.yaml` before running `docker compose up`, Docker will create a `config.yaml` directory in the root of the project. This will cause the backend to fail to start.
This happens because we mount the config file directly as a Docker volume, and if the file does not exist, Docker will create a directory with the same name.
This will eventually be fixed; for now, please make sure to copy the example config file before running the docker compose command.
## General
### Running behind reverse proxy with ssl
Personally, I have not been able to make this work yet, but it's on my list. The issue stems from some Vue oddities when specifying the base URLs while running in a dev environment. I expect this will be resolved once I start building the project for production.
If you do make it work, please reach out to me so I can update this documentation.

View File

@@ -0,0 +1,42 @@
## Quick install instructions
!!! warning
    Python 3.12 is currently not supported.
1. Download and install Python 3.10 or Python 3.11 from the [official Python website](https://www.python.org/downloads/windows/).
    - [Click here for a direct link to the Python 3.11.9 download](https://www.python.org/downloads/release/python-3119/)
1. Download and install Node.js from the [official Node.js website](https://nodejs.org/en/download/prebuilt-installer). This will also install npm.
1. Download the Talemate project to your local machine from [the Releases page](https://github.com/vegu-ai/talemate/releases).
1. Unpack the download and run `install.bat` by double-clicking it. This will set up the project on your local machine.
1. Once the installation is complete, you can start the backend and frontend servers by running `start.bat`.
1. Navigate your browser to http://localhost:8080
If everything went well, you can proceed to [connect a client](../../connect-a-client).
## Additional Information
### How to Install Python 3.10 or 3.11
1. Visit the official Python website's download page for Windows at [https://www.python.org/downloads/windows/](https://www.python.org/downloads/windows/).
2. Find the latest version of Python 3.10 or 3.11 and click on one of the download links. (You will likely want the Windows installer (64-bit).)
3. Run the installer file and follow the setup instructions. Make sure to check the box that says "Add Python 3.10 to PATH" before you click Install Now.
### How to Install npm
1. Download Node.js from the official site [https://nodejs.org/en/download/prebuilt-installer](https://nodejs.org/en/download/prebuilt-installer).
2. Run the installer (the .msi installer is recommended).
3. Follow the prompts in the installer (Accept the license agreement, click the NEXT button a bunch of times and accept the default installation settings).
### Usage of the Supplied bat Files
#### install.bat
This batch file is used to set up the project on your local machine. It creates a virtual environment, activates it, installs poetry, and uses poetry to install dependencies. It then navigates to the frontend directory and installs the necessary npm packages.
To run this file, simply double click on it or open a command prompt in the same directory and type `install.bat`.
#### start.bat
This batch file is used to start the backend and frontend servers. It opens two command prompts, one for the frontend and one for the backend.
To run this file, simply double click on it or open a command prompt in the same directory and type `start.bat`.

View File

@@ -0,0 +1,57 @@
# Load a scenario
Once you've set up a client and assigned it to all the agents, you will be presented with the `Home` screen. From here, you can load Talemate scenarios and upload character cards.
To load the introductory `Infinity Quest` scenario, simply click on its entry in the `Quick Load` section.
![Load infinity quest](/talemate/img/0.26.0/getting-started-load-screen.png)
!!! info "First time may take a moment"
    When you load a scenario for the first time, Talemate will need to initialize the long-term memory model, which likely means a download. Just be patient and it will be ready soon.
## Interacting with the scenario
After a moment of loading, you will see the scenario's introductory message and be able to send a text interaction.
![Getting started scene 1](/talemate/img/0.26.0/getting-started-scene-1.png)
It's time to send the first message.
Spoken words should go into `"` and actions should be written in `*`, for example `"Where are we?" *looks around*`. Talemate will automatically supply the other if you supply one.
![Getting started first interaction](/talemate/img/0.26.0/getting-started-first-interaction.png)
Once sent, it's now the AI's turn to respond. Depending on the service and model selected, this can take a moment.
![Getting started first ai response](/talemate/img/0.26.0/getting-started-first-ai-response.png)
## Quick overview of UI elements
### Scenario tools
Above the chat input there is a set of tools to help you interact with the scenario.
![Getting started ui element tools](/talemate/img/0.26.0/getting-started-ui-element-tools.png)
These contain tools to, for example:
- regenerate the most recent AI response
- give directions to characters
- narrate the scene
- advance time
- save the current scene state
- and more ...
A full guide can be found in the [Scenario Tools](/talemate/user-guide/scenario-tools) section of the user guide.
### World state
Shows a summarization of the current scene state.
![getting started world state 1](/talemate/img/0.26.0/getting-started-world-state-1.png)
Each item can be expanded for more information.
![getting started world state 2](/talemate/img/0.26.0/getting-started-world-state-2.png)
Find out more about the world state in the [World State](/talemate/user-guide/world-state) section of the user guide.

Binary image files changed in this diff are not shown.