Mirror of https://github.com/modelscope/modelscope.git (synced 2025-12-16 08:17:45 +01:00)
add command line usage, optimize command line log, optimize packaging… (#888)
* add command line usage, optimize command line log, optimize packaging version compatibility
* Update command.md
* fix logging dup
* Refactor install dependencies (#889)
  * refactor install dependencies, default to only the dependencies of hub and datasets
  * move pandas import to function
  * update hub deps
  * test
  * remove generate ast file
  * remove gast dependency
  * replace gast with ast
  * add dependency version
  * remove lap for compile error
  * fix comments issue
  * add install ollama

Co-authored-by: mulin.lyh <mulin.lyh@taobao.com>
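With this refactor, a plain `pip install modelscope` is meant to pull in only the hub and datasets requirements, while domain and server extras are installed on demand. A rough sketch of the resulting install flow, based on the hint printed by `run_server` further down (the `nlp` extra here is just one example of a DOMAIN value):

```bash
# minimal install: hub and datasets dependencies only
pip install modelscope

# add the dependencies of one domain (DOMAIN is one of cv|nlp|audio|multi-modal|science)
pip install 'modelscope[nlp]' -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html

# add the server dependencies before running `modelscope server`
pip install 'modelscope[server]'
```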
@@ -9,6 +9,8 @@ ARG CUDA_VERSION=cu121
 # install jupyter plugin
 RUN mkdir -p /root/.local/share/jupyter/labextensions/ && \
     cp -r /tmp/resources/jupyter_plugins/* /root/.local/share/jupyter/labextensions/
+# install ollama
+RUN curl -fsSL https://ollama.com/install.sh | sh
 
 COPY docker/scripts/modelscope_env_init.sh /usr/local/bin/ms_env_init.sh
 # python3.8 pip install git+https://github.com/jin-s13/xtcocoapi.git@v1.13
docs/source/command.md (new file, 157 lines)
@@ -0,0 +1,157 @@
# ModelScope command line usage

## Supported commands

```bash
modelscope --help
usage: modelscope <command> [<args>]

positional arguments:
  {download,plugin,pipeline,modelcard,model,server,login}
                        modelscope commands helpers

options:
  -h, --help            show this help message and exit
```

## login

```bash
modelscope login --help
usage: modelscope <command> [<args>] login [-h] --token TOKEN

options:
  -h, --help     show this help message and exit
  --token TOKEN  The Access Token for modelscope.
```

Get the access token: open [My Page](https://modelscope.cn/my/myaccesstoken) and copy the **SDK token**.
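For example, to log in non-interactively, pass the token directly (the value below is a placeholder):

```bash
modelscope login --token YOUR_SDK_TOKEN
```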
## download model

```bash
modelscope download --help

usage: modelscope <command> [<args>] download [-h] --model MODEL [--revision REVISION] [--cache_dir CACHE_DIR] [--local_dir LOCAL_DIR] [--include [INCLUDE ...]] [--exclude [EXCLUDE ...]] [files ...]

positional arguments:
  files                 Specify relative path to the repository file(s) to download.(e.g 'tokenizer.json', 'onnx/decoder_model.onnx').

options:
  -h, --help            show this help message and exit
  --model MODEL         The model id to be downloaded.
  --revision REVISION   Revision of the model.
  --cache_dir CACHE_DIR
                        Cache directory to save model.
  --local_dir LOCAL_DIR
                        File will be downloaded to local location specified bylocal_dir, in this case, cache_dir parameter will be ignored.
  --include [INCLUDE ...]
                        Glob patterns to match files to download.Ignored if file is specified
  --exclude [EXCLUDE ...]
                        Glob patterns to exclude from files to download.Ignored if file is specified
```

## Usage Examples

Command Examples ([gpt2](https://www.modelscope.cn/models/AI-ModelScope/gpt2/files))

### Specify downloading of a single file

```bash
modelscope download --model 'AI-ModelScope/gpt2' 64.tflite
```

### Specify multiple files to download

```bash
modelscope download --model 'AI-ModelScope/gpt2' 64.tflite config.json
```

### Specify certain files to download

```bash
modelscope download --model 'AI-ModelScope/gpt2' --include 'onnx/*' '*.tflite'
```

### Filter specified files

```bash
modelscope download --model 'AI-ModelScope/gpt2' --exclude 'onnx/*' '*.tflite'
```

### Specify the download cache directory

```bash
modelscope download --model 'AI-ModelScope/gpt2' --include '*.json' --cache_dir './cache_dir'
```

The model files will be downloaded to cache_dir/AI-ModelScope/gpt2/

### Specify the local directory for downloading

```bash
modelscope download --model 'AI-ModelScope/gpt2' --include '*.json' --local_dir './local_dir'
```

The model files will be downloaded to ./local_dir

If both the local directory and the cache directory are specified, the local directory will take precedence.
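As an illustration of the precedence rule, a hypothetical invocation that passes both options would place the files under ./local_dir and ignore the cache directory:

```bash
modelscope download --model 'AI-ModelScope/gpt2' --include '*.json' --cache_dir './cache_dir' --local_dir './local_dir'
```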
## model operation

Supports creating models and uploading model files.

```bash
modelscope model --help
usage: modelscope <command> [<args>] modelcard [-h] [-tk ACCESS_TOKEN] -act {create,upload,download} [-gid GROUP_ID] -mid MODEL_ID [-vis VISIBILITY] [-lic LICENSE] [-ch CHINESE_NAME] [-md MODEL_DIR] [-vt VERSION_TAG] [-vi VERSION_INFO]

options:
  -h, --help            show this help message and exit
  -tk ACCESS_TOKEN, --access_token ACCESS_TOKEN
                        the certification of visit ModelScope
  -act {create,upload,download}, --action {create,upload,download}
                        the action of api ModelScope[create, upload]
  -gid GROUP_ID, --group_id GROUP_ID
                        the group name of ModelScope, eg, damo
  -mid MODEL_ID, --model_id MODEL_ID
                        the model name of ModelScope
  -vis VISIBILITY, --visibility VISIBILITY
                        the visibility of ModelScope[PRIVATE: 1, INTERNAL:3, PUBLIC:5]
  -lic LICENSE, --license LICENSE
                        the license of visit ModelScope[Apache License 2.0|GPL-2.0|GPL-3.0|LGPL-2.1|LGPL-3.0|AFL-3.0|ECL-2.0|MIT]
  -ch CHINESE_NAME, --chinese_name CHINESE_NAME
                        the chinese name of ModelScope
  -md MODEL_DIR, --model_dir MODEL_DIR
                        the model_dir of configuration.json
  -vt VERSION_TAG, --version_tag VERSION_TAG
                        the tag of uploaded model
  -vi VERSION_INFO, --version_info VERSION_INFO
                        the info of uploaded model
```

### Create model

```bash
modelscope model -act create -gid 'YOUR_GROUP_ID' -mid 'THE_MODEL_ID' -vis 1 -lic 'MIT' -ch '中文名字'
```

This will create the model THE_MODEL_ID on www.modelscope.cn.

### Upload model files

```bash
modelscope model -act upload -gid 'YOUR_GROUP_ID' -mid 'THE_MODEL_ID' -md modelfiles/ -vt 'v0.0.1' -vi 'upload model files'
```

## Pipeline

Create the template files needed for a pipeline.

```bash
modelscope pipeline --help
usage: modelscope <command> [<args>] pipeline [-h] -act {create} [-tpl TPL_FILE_PATH] [-s SAVE_FILE_PATH] [-f FILENAME] -t TASK_NAME [-m MODEL_NAME] [-p PREPROCESSOR_NAME] [-pp PIPELINE_NAME] [-config CONFIGURATION_PATH]

options:
  -h, --help            show this help message and exit
  -act {create}, --action {create}
                        the action of command pipeline[create]
  -tpl TPL_FILE_PATH, --tpl_file_path TPL_FILE_PATH
                        the template be selected for ModelScope[template.tpl]
  -s SAVE_FILE_PATH, --save_file_path SAVE_FILE_PATH
                        the name of custom template be saved for ModelScope
  -f FILENAME, --filename FILENAME
                        the init name of custom template be saved for ModelScope
  -t TASK_NAME, --task_name TASK_NAME
                        the unique task_name for ModelScope
  -m MODEL_NAME, --model_name MODEL_NAME
                        the class of model name for ModelScope
  -p PREPROCESSOR_NAME, --preprocessor_name PREPROCESSOR_NAME
                        the class of preprocessor name for ModelScope
  -pp PIPELINE_NAME, --pipeline_name PIPELINE_NAME
                        the class of pipeline name for ModelScope
  -config CONFIGURATION_PATH, --configuration_path CONFIGURATION_PATH
                        the path of configuration.json for ModelScope
```

### Create pipeline files

```bash
modelscope pipeline -act 'create' -t 'THE_PIPELINE_TASK' -m 'THE_MODEL_NAME' -pp 'THE_PIPELINE_NAME'
```
@@ -96,10 +96,6 @@ else:
         'AutoModelForTokenClassification', 'AutoImageProcessor',
         'BatchFeature'
     ]
-else:
-    print(
-        'transformer is not installed, please install it if you want to use related modules'
-    )
 
 import sys
 
@@ -1,6 +1,7 @@
 # Copyright (c) Alibaba, Inc. and its affiliates.
 
 import argparse
+import logging
 
 from modelscope.cli.download import DownloadCMD
 from modelscope.cli.login import LoginCMD
@@ -8,6 +9,9 @@ from modelscope.cli.modelcard import ModelCardCMD
 from modelscope.cli.pipeline import PipelineCMD
 from modelscope.cli.plugins import PluginsCMD
 from modelscope.cli.server import ServerCMD
+from modelscope.utils.logger import get_logger
+
+logger = get_logger(log_level=logging.WARNING)
 
 
 def run_cmd():
@@ -1,4 +1,5 @@
 # Copyright (c) Alibaba, Inc. and its affiliates.
+import logging
 import os
 import shutil
 import tempfile
@@ -11,7 +12,7 @@ from modelscope.hub.snapshot_download import snapshot_download
 from modelscope.hub.utils.utils import get_endpoint
 from modelscope.utils.logger import get_logger
 
-logger = get_logger()
+logger = get_logger(log_level=logging.WARNING)
 
 current_path = os.path.dirname(os.path.abspath(__file__))
 template_path = os.path.join(current_path, 'template')
@@ -29,7 +30,8 @@ class ModelCardCMD(CLICommand):
     def __init__(self, args):
         self.args = args
         self.api = HubApi()
-        self.api.login(args.access_token)
+        if args.access_token:
+            self.api.login(args.access_token)
         self.model_id = os.path.join(
             self.args.group_id, self.args.model_id
         ) if '/' not in self.args.model_id else self.args.model_id
@@ -39,12 +41,12 @@ class ModelCardCMD(CLICommand):
     def define_args(parsers: ArgumentParser):
         """ define args for create or upload modelcard command.
         """
-        parser = parsers.add_parser(ModelCardCMD.name)
+        parser = parsers.add_parser(ModelCardCMD.name, aliases=['model'])
         parser.add_argument(
             '-tk',
             '--access_token',
             type=str,
-            required=True,
+            required=False,
             help='the certification of visit ModelScope')
         parser.add_argument(
             '-act',
@@ -70,13 +72,15 @@ class ModelCardCMD(CLICommand):
             '--visibility',
             type=int,
             default=5,
-            help='the visibility of ModelScope')
+            help=
+            'the visibility of ModelScope[PRIVATE: 1, INTERNAL:3, PUBLIC:5]')
         parser.add_argument(
             '-lic',
             '--license',
             type=str,
             default='Apache License 2.0',
-            help='the license of visit ModelScope')
+            help='the license of visit ModelScope[Apache License 2.0|'
+            'GPL-2.0|GPL-3.0|LGPL-2.1|LGPL-3.0|AFL-3.0|ECL-2.0|MIT]')
         parser.add_argument(
             '-ch',
             '--chinese_name',
@@ -1,4 +1,5 @@
 # Copyright (c) Alibaba, Inc. and its affiliates.
+import logging
 import os
 from argparse import ArgumentParser
 from string import Template
@@ -6,7 +7,7 @@ from string import Template
 from modelscope.cli.base import CLICommand
 from modelscope.utils.logger import get_logger
 
-logger = get_logger()
+logger = get_logger(log_level=logging.WARNING)
 
 current_path = os.path.dirname(os.path.abspath(__file__))
 template_path = os.path.join(current_path, 'template')
@@ -1,15 +1,14 @@
 # Copyright (c) Alibaba, Inc. and its affiliates.
+import logging
 import os
 from argparse import ArgumentParser
 from string import Template
 
-import uvicorn
-
 from modelscope.cli.base import CLICommand
-from modelscope.server.api_server import add_server_args, get_app
+from modelscope.server.api_server import add_server_args, run_server
 from modelscope.utils.logger import get_logger
 
-logger = get_logger()
+logger = get_logger(log_level=logging.WARNING)
 
 current_path = os.path.dirname(os.path.abspath(__file__))
 template_path = os.path.join(current_path, 'template')
@@ -36,5 +35,4 @@ class ServerCMD(CLICommand):
         parser.set_defaults(func=subparser_func)
 
     def execute(self):
-        app = get_app(self.args)
-        uvicorn.run(app, host=self.args.host, port=self.args.port)
+        run_server(self.args)
@@ -1,4 +1,22 @@
 # Copyright (c) Alibaba, Inc. and its affiliates.
 
-from .file import File, LocalStorage
-from .io import dump, dumps, load
+from typing import TYPE_CHECKING
+
+from modelscope.utils.import_utils import LazyImportModule
+
+if TYPE_CHECKING:
+    from .file import File, LocalStorage
+    from .io import dump, dumps, load
+else:
+    _import_structure = {
+        'io': ['dump', 'dumps', 'load'],
+        'file': ['File', 'LocalStorage']
+    }
+    import sys
+    sys.modules[__name__] = LazyImportModule(
+        __name__,
+        globals()['__file__'],
+        _import_structure,
+        module_spec=__spec__,
+        extra_objects={},
+    )
@@ -1,11 +1,9 @@
 # Copyright (c) Alibaba, Inc. and its affiliates.
-import numpy as np
-
-from . import jsonplus
 from .base import FormatHandler
 
 
 def set_default(obj):
+    import numpy as np
     """Set default json values for non-serializable values.
 
     It helps convert ``set``, ``range`` and ``np.ndarray`` data types to list.
@@ -25,10 +23,13 @@ class JsonHandler(FormatHandler):
     """Use jsonplus, serialization of Python types to JSON that "just works"."""
 
     def load(self, file):
+        from . import jsonplus
         return jsonplus.loads(file.read())
 
     def dump(self, obj, file, **kwargs):
+        from . import jsonplus
         file.write(self.dumps(obj, **kwargs))
 
     def dumps(self, obj, **kwargs):
+        from . import jsonplus
         return jsonplus.dumps(obj, **kwargs)
@@ -9,7 +9,7 @@ import simplejson as json
 import threading
 import uuid
 from collections import namedtuple
-from datetime import date, datetime, time, timedelta
+from datetime import timedelta
 from dateutil.parser import parse as parse_datetime
 from decimal import Decimal
 from fractions import Fraction
@@ -18,7 +18,6 @@ from typing import Dict, List, Optional, Tuple, Union
 from urllib.parse import urlencode
 
 import json
-import pandas as pd
 import requests
 from requests import Session
 from requests.adapters import HTTPAdapter, Retry
@@ -820,6 +819,8 @@ class HubApi:
         """
         import hashlib
         from tqdm import tqdm
+        import pandas as pd
+
         out_path = os.path.join(out_path, hashlib.md5(url.encode(encoding='UTF-8')).hexdigest())
         if mode == DownloadMode.FORCE_REDOWNLOAD and os.path.exists(out_path):
             os.remove(out_path)
@@ -1086,6 +1087,7 @@ class ModelScopeConfig:
     GIT_TOKEN_FILE_NAME = 'git_token'
     USER_INFO_FILE_NAME = 'user'
     USER_SESSION_ID_FILE_NAME = 'session'
+    cookie_expired_warning = False
 
     @staticmethod
     def make_sure_credential_path_exist():
@@ -1107,7 +1109,8 @@ class ModelScopeConfig:
             with open(cookies_path, 'rb') as f:
                 cookies = pickle.load(f)
                 for cookie in cookies:
-                    if cookie.is_expired():
+                    if cookie.is_expired() and not ModelScopeConfig.cookie_expired_warning:
+                        ModelScopeConfig.cookie_expired_warning = True
                         logger.warning(
                             'Authentication has expired, '
                             'please re-login if you need to access private models or datasets.')
@@ -1,5 +1,6 @@
 # Copyright (c) Alibaba, Inc. and its affiliates.
 
+import logging
 from http import HTTPStatus
 
 import requests
@@ -8,7 +9,7 @@ from requests.exceptions import HTTPError
 from modelscope.hub.constants import MODELSCOPE_REQUEST_ID
 from modelscope.utils.logger import get_logger
 
-logger = get_logger()
+logger = get_logger(log_level=logging.WARNING)
 
 
 class NotSupportError(Exception):
@@ -4,9 +4,9 @@ import urllib
 import warnings
 from typing import Any, List, Union
 
+import packaging
 import torch
 from PIL import Image
-from pkg_resources import packaging
 from torchvision.transforms import (CenterCrop, Compose, Normalize, Resize,
                                     ToTensor)
 from tqdm import tqdm
@@ -8,9 +8,10 @@ import urllib
 import warnings
 from typing import Any, List, Union
 
+import packaging
+import packaging.version
 import torch
 from PIL import Image
-from pkg_resources import packaging
 from torchvision.transforms import (CenterCrop, Compose, Normalize, Resize,
                                     ToTensor)
 from tqdm import tqdm
@@ -6,10 +6,10 @@ from typing import Any, Dict
 
 import json
 import numpy as np
+import packaging
 import torch
 import torch.cuda
 from PIL import Image
-from pkg_resources import packaging
 from taming.models.vqgan import GumbelVQ, VQModel
 from torchvision.transforms import (CenterCrop, Compose, Normalize, Resize,
                                     ToTensor)
@@ -0,0 +1,17 @@
# Copyright (c) Alibaba, Inc. and its affiliates.
from typing import TYPE_CHECKING

from modelscope.utils.import_utils import LazyImportModule

if TYPE_CHECKING:
    from .api_server import run_server, add_server_args
else:
    _import_structure = {'api_server': ['run_server', 'add_server_arg']}
    import sys
    sys.modules[__name__] = LazyImportModule(
        __name__,
        globals()['__file__'],
        _import_structure,
        module_spec=__spec__,
        extra_objects={},
    )
@@ -1,5 +1,4 @@
 from fastapi import APIRouter
-from starlette.routing import Route, WebSocketRoute
 
 from modelscope.server.api.routers import health, model_router
 
@@ -1,28 +1,7 @@
 import argparse
 
-import uvicorn
-from fastapi import FastAPI
-
-from modelscope.server.api.routers.router import api_router
-from modelscope.server.core.event_handlers import (start_app_handler,
-                                                   stop_app_handler)
-
-
-def get_app(args) -> FastAPI:
-    app = FastAPI(
-        title='modelscope_server',
-        version='0.1',
-        debug=True,
-        swagger_ui_parameters={'tryItOutEnabled': True})
-    app.state.args = args
-    app.include_router(api_router)
-
-    app.add_event_handler('startup', start_app_handler(app))
-    app.add_event_handler('shutdown', stop_app_handler(app))
-    return app
-
-
-def add_server_args(parser):
+
+def add_server_args(parser: argparse.ArgumentParser):
     parser.add_argument(
         '--model_id', required=True, type=str, help='The target model id')
     parser.add_argument(
@@ -37,7 +16,42 @@ def add_server_args(parser):
         help='Use LLMPipeline first for llm models.')
 
 
+def run_server(args):
+    try:
+        import uvicorn
+        app = get_app(args)
+        uvicorn.run(app, host=args.host, port=args.port)
+    except ModuleNotFoundError as e:
+        print(e)
+        print(
+            'To execute the server command, first '
+            'install the domain dependencies with: '
+            'pip install modelscope[DOMAIN] -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html '
+            'the "DOMAIN" include [cv|nlp|audio|multi-modal|science] '
+            'and then install server dependencies with: pip install modelscope[server]'
+        )
+
+
+def get_app(args):
+    from fastapi import FastAPI
+    from modelscope.server.api.routers.router import api_router
+    from modelscope.server.core.event_handlers import (start_app_handler,
+                                                       stop_app_handler)
+    app = FastAPI(
+        title='modelscope_server',
+        version='0.1',
+        debug=True,
+        swagger_ui_parameters={'tryItOutEnabled': True})
+    app.state.args = args
+    app.include_router(api_router)
+
+    app.add_event_handler('startup', start_app_handler(app))
+    app.add_event_handler('shutdown', stop_app_handler(app))
+    return app
+
+
 if __name__ == '__main__':
+    import uvicorn
     parser = argparse.ArgumentParser('modelscope_server')
     add_server_args(parser)
     args = parser.parse_args()
@@ -3,12 +3,12 @@ from http import HTTPStatus
 from typing import Generic, Optional, Type, TypeVar
 
 import json
-from pydantic.generics import GenericModel
+from pydantic import BaseModel
 
 ResultType = TypeVar('ResultType')
 
 
-class ApiResponse(GenericModel, Generic[ResultType]):
+class ApiResponse(BaseModel, Generic[ResultType]):
     Code: Optional[int] = HTTPStatus.OK
     Success: Optional[bool] = True
     RequestId: Optional[str] = ''
@@ -1 +0,0 @@
-from .hub import create_model_if_not_exist, read_config
@@ -2,6 +2,7 @@
 
 import ast
 import hashlib
+import logging
 import os
 import os.path as osp
 import time
@@ -10,10 +11,8 @@ from functools import reduce
 from pathlib import Path
 from typing import Union
 
-import gast
 import json
 
-from modelscope.fileio.file import LocalStorage
 # do not delete
 from modelscope.metainfo import (CustomDatasets, Heads, Hooks, LR_Schedulers,
                                  Metrics, Models, Optimizers, Pipelines,
@@ -23,8 +22,7 @@ from modelscope.utils.file_utils import get_modelscope_cache_dir
 from modelscope.utils.logger import get_logger
 from modelscope.utils.registry import default_group
 
-logger = get_logger()
-storage = LocalStorage()
+logger = get_logger(log_level=logging.WARNING)
 p = Path(__file__)
 
 # get the path of package 'modelscope'
@@ -361,8 +359,7 @@ class AstScanning(object):
         with open(file, 'r', encoding='utf8') as code:
             data = code.readlines()
         data = ''.join(data)
-        node = gast.parse(data)
+        node = ast.parse(data)
         output = self.scan_import(node, show_offsets=False)
         output[DECORATOR_KEY] = self.parse_decorators(output[DECORATOR_KEY])
         output[EXPRESS_KEY] = self.parse_decorators(output[EXPRESS_KEY])
@@ -574,6 +571,25 @@ class FilesAstScanning(object):
 file_scanner = FilesAstScanning()
 
 
+def ensure_write(obj: bytes, filepath: Union[str, Path]) -> None:
+    """Write data to a given ``filepath`` with 'wb' mode.
+
+    Note:
+        ``write`` will create a directory if the directory of ``filepath``
+        does not exist.
+
+    Args:
+        obj (bytes): Data to be written.
+        filepath (str or Path): Path to write data.
+    """
+    dirname = os.path.dirname(filepath)
+    if dirname and not os.path.exists(dirname):
+        os.makedirs(dirname, exist_ok=True)
+
+    with open(filepath, 'wb') as f:
+        f.write(obj)
+
+
 def _save_index(index, file_path, file_list=None, with_template=False):
     # convert tuple key to str key
     index[INDEX_KEY] = {str(k): v for k, v in index[INDEX_KEY].items()}
@@ -586,7 +602,7 @@ def _save_index(index, file_path, file_list=None, with_template=False):
     if with_template:
         json_index = json_index.replace(MODELSCOPE_PATH.as_posix(),
                                         TEMPLATE_PATH)
-    storage.write(json_index.encode(), file_path)
+    ensure_write(json_index.encode(), file_path)
     index[INDEX_KEY] = {
         ast.literal_eval(k): v
         for k, v in index[INDEX_KEY].items()
@@ -594,7 +610,8 @@ def _save_index(index, file_path, file_list=None, with_template=False):
 
 
 def _load_index(file_path, with_template=False):
-    bytes_index = storage.read(file_path)
+    with open(file_path, 'rb') as f:
+        bytes_index = f.read()
     if with_template:
         bytes_index = bytes_index.decode().replace(TEMPLATE_PATH,
                                                    MODELSCOPE_PATH.as_posix())
@@ -3,8 +3,6 @@ import os
 from types import MethodType
 from typing import Any, Optional
 
-from packaging import version
-
 from modelscope.metainfo import Tasks
 from modelscope.utils.ast_utils import INDEX_KEY
 from modelscope.utils.import_utils import (LazyImportModule,
@@ -42,6 +40,7 @@ def fix_transformers_upgrade():
     # from 4.35.0, transformers changes its arguments of _set_gradient_checkpointing
     import transformers
     from transformers import PreTrainedModel
+    from packaging import version
     if version.parse(transformers.__version__) >= version.parse('4.35.0') \
             and not hasattr(PreTrainedModel, 'post_init_origin'):
         PreTrainedModel.post_init_origin = PreTrainedModel.post_init
@@ -16,10 +16,8 @@ from typing import Dict, Union
 
 import addict
 import json
-from yapf.yapflib.yapf_api import FormatCode
 
 from modelscope.utils.constant import ConfigFields, ModelFile
-from modelscope.utils.import_utils import import_modules_from_file
 from modelscope.utils.logger import get_logger
 
 logger = get_logger()
@@ -101,6 +99,8 @@ class Config:
             shutil.copyfile(filename, tmp_cfg_file.name)
 
             if filename.endswith('.py'):
+                # import as needed.
+                from modelscope.utils.import_utils import import_modules_from_file
                 module_nanme, mod = import_modules_from_file(
                     osp.join(tmp_cfg_dir, tmp_cfg_name))
                 cfg_dict = {}
@@ -282,6 +282,7 @@ class Config:
             based_on_style='pep8',
             blank_line_before_nested_class_or_def=True,
             split_before_expression_after_opening_paren=True)
+        from yapf.yapflib.yapf_api import FormatCode
         text, _ = FormatCode(text, style_config=yapf_style, verify=True)
 
         return text
@@ -5,11 +5,13 @@ import enum
 class Fields(object):
     """ Names for different application fields
     """
+    framework = 'framework'
     cv = 'cv'
     nlp = 'nlp'
    audio = 'audio'
     multi_modal = 'multi-modal'
     science = 'science'
+    server = 'server'
 
 
 class CVTasks(object):
@@ -3,6 +3,7 @@
 import ast
 import functools
 import importlib
+import logging
 import os
 import os.path as osp
 import sys
@@ -13,8 +14,6 @@ from pathlib import Path
 from types import ModuleType
 from typing import Any
 
-from packaging import version
-
 from modelscope.utils.ast_utils import (INDEX_KEY, MODULE_KEY, REQUIREMENT_KEY,
                                         load_index)
 from modelscope.utils.error import *  # noqa
@@ -25,7 +24,7 @@ if sys.version_info < (3, 8):
 else:
     import importlib.metadata as importlib_metadata
 
-logger = get_logger()
+logger = get_logger(log_level=logging.WARNING)
 
 AST_INDEX = None
 
@@ -192,6 +191,7 @@ if USE_TF in ENV_VARS_TRUE_AND_AUTO_VALUES and USE_TORCH not in ENV_VARS_TRUE_VA
         pass
     _tf_available = _tf_version is not None
 if _tf_available:
+    from packaging import version
     if version.parse(_tf_version) < version.parse('2'):
         pass
     else:
@@ -8,12 +8,9 @@ from io import BytesIO
 from typing import Any
 from urllib.parse import urlparse
 
-import cv2
 import json
 import numpy as np
 
-from modelscope.hub.api import HubApi
-from modelscope.hub.errors import NotExistError
 from modelscope.hub.file_download import model_file_download
 from modelscope.outputs.outputs import (TASK_OUTPUTS, OutputKeys, OutputTypes,
                                         OutputTypeSchema)
@@ -714,6 +711,7 @@ def service_base64_input_to_pipeline_input(task_name, body):
 
 
 def encode_numpy_image_to_base64(image):
+    import cv2
     _, img_encode = cv2.imencode('.png', image)
     bytes_data = img_encode.tobytes()
     base64_str = str(base64.b64encode(bytes_data), 'utf-8')
@@ -1,6 +1,6 @@
 # Copyright (c) Alibaba, Inc. and its affiliates.
 
-import importlib
+import importlib.util as iutil
 import logging
 from typing import Optional
 
@@ -28,6 +28,8 @@ def get_logger(log_file: Optional[str] = None,
     logger.propagate = False
     if logger_name in init_loggers:
         add_file_handler_if_needed(logger, log_file, file_mode, log_level)
+        if logger.level != log_level:
+            logger.setLevel(log_level)
         return logger
 
     # handle duplicate logs to the console
@@ -39,7 +41,7 @@ def get_logger(log_file: Optional[str] = None,
     # at the ERROR level.
     torch_dist = False
     is_worker0 = True
-    if importlib.util.find_spec('torch') is not None:
+    if iutil.find_spec('torch') is not None:
         from modelscope.utils.torch_utils import is_dist, is_master
         torch_dist = is_dist()
         is_worker0 = is_master()
@@ -76,7 +78,7 @@ def add_file_handler_if_needed(logger, log_file, file_mode, log_level):
         if isinstance(handler, logging.FileHandler):
             return
 
-    if importlib.util.find_spec('torch') is not None:
+    if iutil.find_spec('torch') is not None:
         from modelscope.utils.torch_utils import is_master
         is_worker0 = is_master()
     else:
@@ -17,11 +17,11 @@ from typing import Any, Iterable, List, Optional, Set, Union
 import json
 import pkg_resources
 
+from modelscope import snapshot_download
 from modelscope.fileio.file import LocalStorage
 from modelscope.utils.ast_utils import FilesAstScanning
 from modelscope.utils.constant import DEFAULT_MODEL_REVISION
 from modelscope.utils.file_utils import get_modelscope_cache_dir
-from modelscope.utils.hub import read_config, snapshot_download
 from modelscope.utils.logger import get_logger
 
 logger = get_logger()
@@ -1140,6 +1140,7 @@ class EnvsManager(object):
         cache_dir = os.getenv('MODELSCOPE_CACHE', cache_dir)
         self.env_dir = os.path.join(cache_dir, EnvsManager.name, model_id)
         model_dir = snapshot_download(model_id, revision=model_revision)
+        from modelscope.utils.hub import read_config
         cfg = read_config(model_dir)
         self.plugins = cfg.get('plugins', [])
         self.allow_remote = cfg.get('allow_remote', False)
@@ -1 +1 @@
--r requirements/framework.txt
+-r requirements/hub.txt
@@ -23,7 +23,6 @@ imageio>=2.9.0
 imageio-ffmpeg>=0.4.2
 imgaug>=0.4.0
 kornia>=0.5.0
-lap
 lmdb
 lpips
 ml_collections
@@ -52,8 +51,8 @@ PyMCubes
 pytorch-lightning
 regex
 # <0.20.0 for compatible python3.7 python3.8
-scikit-image>=0.19.3,<0.20.0
-scikit-learn>=0.20.1
+scikit-image
+scikit-learn
 shapely
 shotdetect_scenedetect_lgss>=0.0.4
 smplx
@@ -3,7 +3,6 @@ attrs
 datasets>=2.16.0,<2.19.0
 einops
 filelock>=3.3.0
-gast>=0.2.2
 huggingface_hub
 numpy
 oss2
requirements/hub.txt (new file, 3 lines)
@@ -0,0 +1,3 @@
requests>=2.25
tqdm>=4.64.0
urllib3>=1.26
@@ -1,4 +1,3 @@
 fastapi
-requests
 sse-starlette
 uvicorn
setup.py (9 changed lines)
@@ -5,7 +5,6 @@ import shutil
 import subprocess
 from setuptools import find_packages, setup
 
-from modelscope.utils.ast_utils import generate_ast_template
 from modelscope.utils.constant import Fields
 
 
@@ -171,6 +170,7 @@ def pack_resource():
 
 if __name__ == '__main__':
     # write_version_py()
+    from modelscope.utils.ast_utils import generate_ast_template
     generate_ast_template()
     pack_resource()
     os.chdir('package')
@@ -192,6 +192,13 @@ if __name__ == '__main__':
             filed_name = f'audio_{subfiled}'
             extra_requires[filed_name], _ = parse_requirements(
                 f'requirements/audio/{filed_name}.txt')
+    framework_requires = extra_requires['framework']
+    # add framework dependencies to every field
+    for field, requires in extra_requires.items():
+        if field not in [
+                'server', 'framework'
+        ]:  # server need install model's field dependencies before.
+            extra_requires[field] = framework_requires + extra_requires[field]
     extra_requires['all'] = all_requires
 
     setup(