Compare commits


9 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| FInalWombat | c36fd3a9b0 | fixes character descriptions missing from dialogue prompt (#21): character description is no longer part of the sheet and needs to be added separately; prep 0.10.1 | 2023-10-19 03:05:17 +03:00 |
| FInalWombat | 5874d6f05c | Update README.md | 2023-10-18 03:28:30 +03:00 |
| FInalWombat | 4c15ca5290 | Update linux-install.md | 2023-10-15 16:09:41 +03:00 |
| FInalWombat | 595b04b8dd | Update README.md | 2023-10-15 12:44:30 +03:00 |
| FInalWombat | c7e614c01a | Update README.md | 2023-10-13 16:14:42 +03:00 |
| FInalWombat | 626da5c551 | Update README.md | 2023-10-13 16:08:18 +03:00 |
| FInalWombat | e5de5dad4d | Update .gitignore: clean up cruft | 2023-10-09 12:39:49 +03:00 |
| FinalWombat | ce2517dd03 | readme | 2023-10-02 01:59:47 +03:00 |
| FinalWombat | 4b26d5e410 | readme | 2023-10-02 01:55:41 +03:00 |
6 changed files with 25 additions and 16 deletions

.gitignore

@@ -1,10 +1,7 @@
.lmer
*.pyc
problems
*.swp
*.swo
*.egg-info
tales/
*-internal*
*.internal*
*_internal*


@@ -4,7 +4,7 @@ Allows you to play roleplay scenarios with large language models.
It does not run any large language models itself but relies on existing APIs. Currently supports **text-generation-webui** and **openai**.
-This means you need to either have an openai api key or know how to setup [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (locally or remotely via gpu renting.)
+This means you need to either have an openai api key or know how to setup [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (locally or remotely via gpu renting. `--api` flag needs to be set)
![Screenshot 1](docs/img/Screenshot_8.png)
![Screenshot 2](docs/img/Screenshot_2.png)
@@ -18,15 +18,17 @@ This means you need to either have an openai api key or know how to setup [oobab
- summarization
- director
- creative
-- multi-client (agents can be connected to separate LLMs)
-- long term memory (very experimental at this point)
+- multi-client (agents can be connected to separate APIs)
+- long term memory (experimental)
- chromadb integration
- passage of time
- narrative world state
- narrative tools
- creative tools
- AI backed character creation with template support (jinja2)
- AI backed scenario creation
- runpod integration
-- overridable templates foe all LLM prompts. (jinja2)
+- overridable templates for all prompts. (jinja2)
## Planned features
@@ -34,20 +36,27 @@ Kinda making it up as i go along, but i want to lean more into gameplay through
In no particular order:
- Gameplay loop governed by AI
- Extension support
- modular agents and clients
- Improved world state
- Dynamic player choice generation
- Better creative tools
- node based scenario / character creation
-- Improved long term memory (base is there, but its very rough at the moment)
+- Improved and consistent long term memory
- Improved director agent
- Right now this doesn't really work well on anything but GPT-4 (and even there it's debatable). It tends to steer the story in a way that introduces pacing issues. It needs a model that is creative but also reasons really well i think.
- Gameplay loop governed by AI
- objectives
- quests
- win / lose conditions
- Automatic1111 client
# Quickstart
## Installation
Post [here](https://github.com/final-wombat/talemate/issues/17) if you run into problems during installation.
### Windows
1. Download and install Python 3.10 or higher from the [official Python website](https://www.python.org/downloads/windows/).
@@ -64,7 +73,7 @@ In no particular order:
1. `git clone git@github.com:final-wombat/talemate`
1. `cd talemate`
1. `source install.sh`
-1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5001`.
+1. Start the backend: `python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050`.
1. Open a new terminal, navigate to the `talemate_frontend` directory, and start the frontend server by running `npm run serve`.
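The backend start command above passes the bind address and port as `--host`/`--port` flags to a `runserver` subcommand. A minimal argparse sketch of how such a command line is typically parsed — this is hypothetical and is not talemate's actual CLI code; the `parse_runserver_args` name and the defaults are illustrative assumptions:

```python
import argparse

def parse_runserver_args(argv):
    # Hypothetical sketch mirroring the documented invocation:
    #   run.py runserver --host 0.0.0.0 --port 5050
    parser = argparse.ArgumentParser(prog="run.py")
    subparsers = parser.add_subparsers(dest="command")
    runserver = subparsers.add_parser("runserver")
    runserver.add_argument("--host", default="127.0.0.1")
    runserver.add_argument("--port", type=int, default=5050)
    return parser.parse_args(argv)

args = parse_runserver_args(["runserver", "--host", "0.0.0.0", "--port", "5050"])
print(args.command, args.host, args.port)  # runserver 0.0.0.0 5050
```

Passing an explicit `--port` like this is what the diff in this compare changes from 5001 to 5050.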
## Configuration
@@ -108,7 +117,7 @@ Will be updated as i test more models and over time.
| [Synthia v1.2 34B](https://huggingface.co/TheBloke/Synthia-34B-v1.2-GPTQ) | 34B model | Cannot be run at full context together with chromadb instructor models on a single 4090. But a great choice if you're running chromadb with the default embeddings (or on cpu). |
| [Xwin-LM-70B](https://huggingface.co/TheBloke/Xwin-LM-70B-V0.1-GPTQ) | 70B model | Great choice if you have the hardware to run it (or can rent it). |
| [Synthia v1.2 70B](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GPTQ) | 70B model | Great choice if you have the hardware to run it (or can rent it). |
-| [GPT-4](https://platform.openai.com/) | Remote | Still the best for consistency and reasoning, but is heavily censored. While talemate will send a general "decensor" system prompt, depending on the type of content you want to roleplay, there is a chance your key will be banned. **If you do use this make sure to monitor your api usage, talemate tends to send a lot more requests than other roleplaying applications.** |
+| [GPT-4](https://platform.openai.com/) | Remote | Still the best for consistency and reasoning, but is heavily censored. Talemate will send a general "decensor" system prompt, ymmv. **If you do use this make sure to monitor your api usage, talemate tends to send a lot more requests than other roleplaying applications.** |
| [GPT-3.5-turbo](https://platform.openai.com/) | Remote | It's really inconsistent with JSON responses, plus its probably still just as heavily censored as GPT-4. If you want to run it i'd suggest running it for the conversation agent, and use GPT-4 for the other agents. **If you do use this make sure to monitor your api usage, talemate tends to send a lot more requests than other roleplaying applications.** |
I have not tested with Llama 1 models in a while, Lazarus was really good at roleplay, but started failing on JSON requirements.


@@ -14,7 +14,7 @@
1. With the virtual environment activated and dependencies installed, you can start the backend server.
2. Navigate to the `src/talemate/server` directory.
-3. Run the server with `python run.py runserver --host 0.0.0.0 --port 5001`.
+3. Run the server with `python run.py runserver --host 0.0.0.0 --port 5050`.
### Running the Frontend
@@ -22,4 +22,4 @@
2. If you haven't already, install npm dependencies by running `npm install`.
3. Start the server with `npm run serve`.
-Please note that you may need to set environment variables or modify the host and port as per your setup. You can refer to the `runserver.sh` and `frontend.sh` files for more details.
+Please note that you may need to set environment variables or modify the host and port as per your setup. You can refer to the `runserver.sh` and `frontend.sh` files for more details.


@@ -4,7 +4,7 @@ build-backend = "poetry.masonry.api"
[tool.poetry]
name = "talemate"
-version = "0.10.0"
+version = "0.10.1"
description = "AI-backed roleplay and narrative tools"
authors = ["FinalWombat"]
license = "GNU Affero General Public License v3.0"


@@ -2,4 +2,4 @@ from .agents import Agent
from .client import TextGeneratorWebuiClient
from .tale_mate import *
-VERSION = "0.10.0"
+VERSION = "0.10.1"


@@ -6,9 +6,12 @@
<|SECTION:CHARACTERS|>
{% for character in characters -%}
{{ character.name }}:
-{{ character.filtered_sheet(['name', 'description', 'age', 'gender']) }}
+{{ character.filtered_sheet(['name', 'age', 'gender']) }}
+{{ query_memory("what is "+character.name+"'s personality?", as_question_answer=False) }}
+{{ character.description }}
{% endfor %}
<|CLOSE_SECTION|>
<|SECTION:DIALOGUE EXAMPLES|>