talemate/talemate_frontend/package.json

{
  "name": "talemate_frontend",
  "version": "0.25.0",
  "private": true,
  "scripts": {
    "serve": "vue-cli-service serve",
    "build": "vue-cli-service build",
    "lint": "vue-cli-service lint"
  },
  "dependencies": {
    "@codemirror/lang-markdown": "^6.2.5",
    "@codemirror/theme-one-dark": "^6.1.2",
    "@mdi/font": "7.4.47",
    "codemirror": "^6.0.1",
    "core-js": "^3.8.3",
    "dot-prop": "^8.0.2",
    "roboto-fontface": "*",
    "vue": "^3.2.13",
    "vue-codemirror": "^6.1.1",
    "vuetify": "^3.5.0",
    "webfontloader": "^1.0.0"
  },
  "devDependencies": {
    "@babel/core": "^7.12.16",
    "@babel/eslint-parser": "^7.12.16",
    "@vue/cli-plugin-babel": "~5.0.0",
    "@vue/cli-plugin-eslint": "~5.0.0",
    "@vue/cli-service": "~5.0.0",
    "eslint": "^7.32.0",
    "eslint-plugin-vue": "^8.0.3",
    "vue-cli-plugin-vuetify": "~2.5.8",
    "webpack-plugin-vuetify": "^2.0.0-alpha.0"
  },
  "eslintConfig": {
    "root": true,
    "env": {
      "node": true
    },
    "extends": [
      "plugin:vue/vue3-essential",
      "eslint:recommended"
    ],
    "parserOptions": {
      "parser": "@babel/eslint-parser"
    },
    "rules": {}
  },
  "browserslist": [
    "> 1%",
    "last 2 versions",
    "not dead",
    "not ie 11"
  ]
}