open-webui/backend
Sihyeon Jang (3ccbb46938)
perf: fix cache key generation for model list caching
- Replace Request object with user.id in cache key for get_all_models
- Request objects are new instances per HTTP request, preventing cache hits
- Cache keys now use user.id, so lookups can actually hit the cache (see the sketch below)
- Affects both Ollama and OpenAI model list endpoints

Signed-off-by: Sihyeon Jang <sihyeon.jang@navercorp.com>
2025-09-03 05:17:41 +09:00
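
Below is a minimal, self-contained sketch of the caching pattern this commit describes: keying the per-user model list cache by user.id rather than by the Request object. It is illustrative only; the User dataclass, fetch_models_from_upstream, the function signature, and the TTL value are assumptions, not the actual open-webui backend implementation.

```python
import asyncio
import time
from dataclasses import dataclass
from typing import Any, Dict, List, Tuple

# Hypothetical names throughout: User, fetch_models_from_upstream, the TTL,
# and this cache layout are illustrative, not the actual open-webui code.

CACHE_TTL_SECONDS = 60
_model_list_cache: Dict[str, Tuple[float, List[str]]] = {}


@dataclass
class User:
    id: str


async def fetch_models_from_upstream(user: User) -> List[str]:
    # Stand-in for the real Ollama / OpenAI model-list fetch.
    await asyncio.sleep(0.1)
    return ["llama3", "gpt-4o"]


async def get_all_models(request: Any, user: User) -> List[str]:
    # Before the fix: the key was derived from the Request object. FastAPI
    # builds a new Request instance for every HTTP request, so the key never
    # repeated and the cache never hit.
    # After the fix: the key is derived from the stable user.id, so repeated
    # requests from the same user reuse the cached model list.
    cache_key = f"model_list:{user.id}"

    entry = _model_list_cache.get(cache_key)
    if entry is not None:
        cached_at, models = entry
        if time.time() - cached_at < CACHE_TTL_SECONDS:
            return models  # cache hit

    models = await fetch_models_from_upstream(user)
    _model_list_cache[cache_key] = (time.time(), models)
    return models


if __name__ == "__main__":
    user = User(id="u-123")
    # The second call returns the cached list without calling the upstream stub.
    print(asyncio.run(get_all_models(object(), user)))
    print(asyncio.run(get_all_models(object(), user)))
```

Since the commit notes that both the Ollama and OpenAI model-list endpoints are affected, the real backend would presumably key each endpoint's cache this way; the "model_list:" prefix here is only for illustration.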