modelscope/docker/scripts/install_flash_attension.sh
Commit 5ba9fd2307 by mulin.lyh: modify auto gptq and vllm env
Link: https://code.alibaba-inc.com/Ali-MaaS/MaaS-lib/codereview/14790283
* upgrade to Python 3.10
* modify auto gptq and vllm env
* fix lint issue
* Merge remote-tracking branch 'origin/master' into python10_support
* Python 3.10 support
* build from repo
* add commit id to force installing modelscope on every build
* fix CPU build issue
* fix datahub error message
* Merge branch 'python10_support' of gitlab.alibaba-inc.com:Ali-MaaS/MaaS-lib into python10_support
* install auto_gptq with --no-cache-dir
2023-11-27 20:21:00 +08:00

# Build flash-attention v2.3.3 from source; MAX_JOBS caps the number of
# parallel compile jobs during the CUDA build. The checkout is removed
# once the package is installed.
git clone -b v2.3.3 https://github.com/Dao-AILab/flash-attention && \
cd flash-attention && MAX_JOBS=46 python setup.py install && \
cd .. && \
rm -rf flash-attention
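
For context, a minimal sketch of how a helper script like this is typically wired into a Docker image build. The base image tag and the COPY path below are assumptions for illustration, not taken from the repository.

# Hypothetical Dockerfile step; base image and paths are assumptions.
FROM pytorch/pytorch:2.1.0-cuda12.1-cudnn8-devel
COPY docker/scripts/install_flash_attension.sh /tmp/
RUN bash /tmp/install_flash_attension.sh && rm /tmp/install_flash_attension.sh

Running the script inside a RUN step keeps the flash-attention source checkout out of the final image, since the clone is deleted in the same layer that builds it.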