Mirror of https://github.com/vegu-ai/talemate.git (synced 2025-12-16 03:37:51 +01:00)
Prep 0.18.0 (#58)
* vuetify update recent saves
* use placeholder instead of prefilling text
* fix scene loading when no coverage image is set
* improve summarize and pin response quality
* summarization: use previous entries as informative context
* fixes #49: auto save indicator misleading
* regenerate with instructions
* allow resetting of state reinforcement
* creative tools: introduce new character
* creative tools: introduce passive character as active character
* character creation adjustments
* no longer needed
* activate, deactivate characters (work in progress)
* worldstate manager: show inactive characters
* allow setting of llm prompt template from ux
* reorganize llm prompt template directory for easier local overriding
* support a more sane way to write llm prompt templates
* determine prompt template from huggingface
* ignore user overrides
* fix issue with removing narrator messages
* summarization agent config for prev entry inclusion
* agent config attribute notes
* client code clean up to allow modularity of clients + generic openai compatible api client
* more client cleanup
* remove debug msg, step size for ctx upped to 1024
* wip on stepped history summarization
* summarization prompt fixes
* include time message for history context pushed in scene.context_history
* add / remove characters
* toggle narration via ctrl
* fix pydantic namespace warning
* fix client emit after reconfig
* set memory ids on character detail entries
* deal with chromadb race condition (maybe)
* activate / deactivate characters from creative editor
* switch creative editor to edit characters through world state manager
* set 0.18.0
* relock dependencies
* openai client: shortcut to set api key if not set
* set error_action to null
* if scene has just started provide intro for extra context in is_present and is_leaving queries
* nice error if determine template via huggingface doesn't work
* fix issue where regenerate would sometimes pick the wrong npc if there are multiple characters talking
* add new openai models
* default to gpt-4-turbo-preview
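One of the items above determines the prompt template from a HuggingFace model name. A minimal sketch of how such a lookup could work, assuming a simple substring match; the function name and the mapping table are hypothetical illustrations, not talemate's actual code:

```python
# Hypothetical sketch: pick a prompt template name from a model
# identifier (e.g. a HuggingFace repo id). The mapping below is
# illustrative only; talemate's real lookup may differ.
SUBSTRING_TO_TEMPLATE = {
    "chatml": "ChatML",
    "vicuna": "Vicuna",
    "zephyr": "Zephyr",
    "llama-2": "Llama2",
    "openchat": "OpenChat",
    "alpaca": "Alpaca",
}

def guess_template(model_name: str, default: str = "Alpaca") -> str:
    """Return a template name whose key appears in the model name,
    falling back to a default when nothing matches."""
    name = model_name.lower()
    for needle, template in SUBSTRING_TO_TEMPLATE.items():
        if needle in name:
            return template
    return default
```

A fallback default matters here: the commit list also notes adding "a nice error if determine template via huggingface doesn't work", so unmatched model names must be handled gracefully rather than crash.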
templates/llm-prompt/std/Alpaca.jinja2 (new file, 7 lines)
@@ -0,0 +1,7 @@
{{ system_message }}

### Instruction:
{{ user_message }}

### Response:
{{ coercion_message }}
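All of these templates render the same three variables: `system_message`, `user_message`, and `coercion_message`. A minimal sketch of rendering the Alpaca template above with Jinja2; the example message values are placeholders, and talemate's actual template-loading code may differ:

```python
from jinja2 import Template

# The Alpaca template from the diff above, inlined as a string.
ALPACA = (
    "{{ system_message }}\n"
    "\n"
    "### Instruction:\n"
    "{{ user_message }}\n"
    "\n"
    "### Response:\n"
    "{{ coercion_message }}"
)

# Placeholder values for illustration, not talemate defaults.
prompt = Template(ALPACA).render(
    system_message="You are a creative storytelling assistant.",
    user_message="Continue the scene in the tavern.",
    coercion_message="Certainly,",
)
print(prompt)
```

The `coercion_message` slot is what lets the caller pre-seed the start of the assistant's reply, steering the model toward the desired response shape.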
templates/llm-prompt/std/ChatML.jinja2 (new file, 6 lines)
@@ -0,0 +1,6 @@
<|im_start|>system
{{ system_message }}<|im_end|>
<|im_start|>user
{{ user_message }}<|im_end|>
<|im_start|>assistant
{{ coercion_message }}
templates/llm-prompt/std/InstructionInputResponse.jinja2 (new file, 8 lines)
@@ -0,0 +1,8 @@
### Instruction:
{{ system_message }}

### Input:
{{ user_message }}

### Response:
{{ coercion_message }}
templates/llm-prompt/std/Llama2.jinja2 (new file, 1 line)
@@ -0,0 +1 @@
<s>[INST] {{ system_message }} {{ user_message }} [/INST] {{ coercion_message }}
templates/llm-prompt/std/OpenChat.jinja2 (new file, 1 line)
@@ -0,0 +1 @@
GPT4 Correct System: {{ system_message }}<|end_of_turn|>GPT4 Correct User: {{ user_message }}<|end_of_turn|>GPT4 Correct Assistant: {{ coercion_message }}
templates/llm-prompt/std/USER_ASSISTANT.jinja2 (new file, 1 line)
@@ -0,0 +1 @@
USER: {{ system_message }} {{ user_message }} ASSISTANT: {{ coercion_message }}
templates/llm-prompt/std/UserAssistant.jinja2 (new file, 2 lines)
@@ -0,0 +1,2 @@
User: {{ system_message }} {{ user_message }}
Assistant: {{ coercion_message }}
templates/llm-prompt/std/Vicuna.jinja2 (new file, 3 lines)
@@ -0,0 +1,3 @@
SYSTEM: {{ system_message }}
USER: {{ user_message }}
ASSISTANT: {{ coercion_message }}
templates/llm-prompt/std/Zephyr.jinja2 (new file, 6 lines)
@@ -0,0 +1,6 @@
<|system|>
{{ system_message }}</s>
<|user|>
{{ user_message }}</s>
<|assistant|>
{{ coercion_message }}
templates/llm-prompt/talemate/Umbra.jinja2 (new file, 5 lines)
@@ -0,0 +1,5 @@
### System:{{ system_message }}

### USER:{{ user_message }}

### Assistant:{{ coercion_message }}
@@ -1,5 +1,4 @@
{{ system_message }}

### Instruction:

{{ set_response(prompt, "\n\n### Response:\n") }}