mirror of https://github.com/modelscope/modelscope.git
synced 2025-12-16 08:17:45 +01:00

Commit: add inference gif and refine doc

README.md (10 changes)
@@ -37,9 +37,10 @@ The Python library offers the layered-APIs necessary for model contributors to i
Apart from harboring implementations of various models, the ModelScope library also enables the necessary interactions with ModelScope backend services, particularly with the Model-Hub and Dataset-Hub. Such interactions allow the management of various entities (models and datasets) to be performed seamlessly under the hood, including entity lookup, version control, cache management, and many others.

# Models and Online Demos

-ModelScope has open-sourced more than 600 models, covering NLP, CV, Audio, Multi-modality, and AI for Science, etc., and also contains hundreds of SOTA models. Users can enter the modelhub of ModelScope through Zero-threshold online experience, or experience the model in the way of developing a cloud environment.
-
-Here are some examples:
+Hundreds of models are made publicly available on ModelScope (600+ and counting), covering the latest developments in areas such as NLP, CV, Audio, Multi-modality, and AI for Science. Many of these models represent the SOTA in their respective fields and made their open-source debut on ModelScope. Users can visit ModelScope ([modelscope.cn](http://www.modelscope.cn)) and experience first-hand how these models perform, with just a few clicks. An immediate developer experience is also possible through the ModelScope Notebook, which is backed by a ready-to-use cloud CPU/GPU development environment and is only a click away on the ModelScope website.
+
+Some of the representative examples include:

NLP:
@@ -113,7 +114,8 @@ AI for Science:
We provide a unified interface for inference using `pipeline`, and for finetuning and evaluation using `Trainer`, across different tasks.

-For any tasks with any type of input(image, text, audio, video...), you need only 3 lines of code to load model and get the inference result as follows:
+For any given task with any type of input (image, text, audio, video...), an inference pipeline can be implemented with only a few lines of code, automatically loading the associated model and producing the inference result, as exemplified below:

```python
>>> from modelscope.pipelines import pipeline
>>> word_segmentation = pipeline('word-segmentation', model='damo/nlp_structbert_word-segmentation_chinese-base')
```
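The unified `pipeline` call above hides a task-to-implementation lookup behind one function. As a rough, self-contained sketch of that registry pattern (all names here are hypothetical; this is not ModelScope's actual implementation, and the toy pipeline merely splits on whitespace instead of running a model):

```python
# Hypothetical sketch of a task-keyed pipeline registry,
# not ModelScope's actual implementation.
PIPELINE_REGISTRY = {}

def register_pipeline(task):
    """Class decorator that registers a pipeline class under a task name."""
    def wrap(cls):
        PIPELINE_REGISTRY[task] = cls
        return cls
    return wrap

def pipeline(task, model=None):
    """Look up the pipeline class for `task` and instantiate it."""
    try:
        cls = PIPELINE_REGISTRY[task]
    except KeyError:
        raise ValueError(f"unknown task: {task}") from None
    return cls(model=model)

@register_pipeline('word-segmentation')
class WordSegmentationPipeline:
    def __init__(self, model=None):
        # In the real library, the model id would trigger a Model-Hub download.
        self.model = model

    def __call__(self, text):
        # Toy whitespace "segmentation" standing in for the real model.
        return {'output': text.split()}

ws = pipeline('word-segmentation', model='damo/nlp_structbert_word-segmentation_chinese-base')
print(ws('hello world'))  # {'output': ['hello', 'world']}
```

The point of the pattern is that callers only ever name a task; which class (and which default model) backs it is resolved by the registry.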
@@ -139,6 +141,8 @@ The output image is
For finetuning and evaluation, you need roughly ten more lines of code to construct the dataset and trainer; by calling `trainer.train()` and `trainer.evaluate()` you can finish finetuning and evaluating a given model.

+For example, we use the GPT-3 1.3B model to load the Chinese poetry dataset and finetune the model; the resulting model can be used for poetry generation.

```python
>>> from modelscope.metainfo import Trainers
>>> from modelscope.msdatasets import MsDataset
```
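The finetuning flow described above (construct a dataset, construct a trainer, call `train()` then `evaluate()`) can be sketched with toy stand-ins. Everything below is a hypothetical illustration of the call sequence, not ModelScope's `Trainer`:

```python
# Toy dataset -> trainer -> train/evaluate flow (hypothetical names,
# not ModelScope's Trainer implementation).
from dataclasses import dataclass

@dataclass
class ToyDataset:
    samples: list

@dataclass
class ToyTrainer:
    train_dataset: ToyDataset
    eval_dataset: ToyDataset
    epochs_run: int = 0

    def train(self, max_epochs=3):
        # Stand-in for max_epochs passes over train_dataset.
        for _ in range(max_epochs):
            self.epochs_run += 1

    def evaluate(self):
        # Stand-in metrics dict, as a real evaluate() would return.
        return {'samples_evaluated': len(self.eval_dataset.samples),
                'epochs_run': self.epochs_run}

trainer = ToyTrainer(ToyDataset([1, 2, 3]), ToyDataset([4, 5]))
trainer.train(max_epochs=2)
print(trainer.evaluate())  # {'samples_evaluated': 2, 'epochs_run': 2}
```

The real trainer adds model loading, optimizers, and checkpointing, but the two-call surface (`train()`, then `evaluate()`) is the same.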
11
README_zh.md
11
README_zh.md
@@ -37,7 +37,13 @@ The ModelScope Library provides model contributors with the necessary layered APIs, so that in the future
Apart from containing implementations of various models, the ModelScope Library also supports the necessary interactions with ModelScope backend services, particularly with the Model-Hub and Dataset-Hub. These interactions allow the management of models and datasets to be performed seamlessly behind the scenes, including model and dataset lookup, version control, cache management, and more.

# Selected Models and Online Demos

-ModelScope has open-sourced more than 600 models, covering natural language processing, computer vision, speech, multi-modality, scientific computing, and more, including hundreds of SOTA models. Users can enter ModelScope's model hub for a zero-threshold online experience, or try the models via Notebook.
+ModelScope has open-sourced hundreds of models (currently 600+), covering natural language processing, computer vision, speech, multi-modality, scientific computing, and more, among them hundreds of SOTA models. Users can visit the model hub on the ModelScope website ([modelscope.cn](http://www.modelscope.cn)) for a zero-threshold online experience, or try the models via Notebook.
+
+<p align="center">
+    <br>
+    <img src="https://modelscope.oss-cn-beijing.aliyuncs.com/resource/inference.gif"/>
+    <br>
+</p>

Examples are shown below:
@@ -136,8 +142,9 @@ ModelScope has open-sourced more than 600 models, covering natural language processing, computer vision,
The output image is as follows:

![image](resources/result.png)

For finetuning and evaluating a model, you need a dozen or so lines of code to construct the dataset and trainer, then simply call `trainer.train()` and `trainer.evaluate()`.

+For example, using the GPT-3 1.3B model, we load a classical poetry dataset for finetuning, which produces a model for classical poetry generation.

```python
>>> from modelscope.metainfo import Trainers
>>> from modelscope.msdatasets import MsDataset
```