add inference gif and refine doc

This commit is contained in:
wenmeng.zwm
2023-02-17 13:32:00 +08:00
parent 78e0bc896f
commit bbe8849899
2 changed files with 16 additions and 5 deletions

View File

@@ -37,9 +37,10 @@ The Python library offers the layered-APIs necessary for model contributors to i
Apart from harboring implementations of various models, the ModelScope library also enables the necessary interactions with ModelScope backend services, particularly with the Model-Hub and Dataset-Hub. Such interactions allow the management of various entities (models and datasets) to be performed seamlessly under the hood, including entity lookup, version control, cache management, and many others.
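For readers who want to see what these hub interactions look like from the library side, here is a minimal sketch (not part of this commit): `snapshot_download` and `MsDataset.load` are existing ModelScope entry points, while the revision string and dataset name used below are illustrative assumptions.
```python
from modelscope.hub.snapshot_download import snapshot_download
from modelscope.msdatasets import MsDataset

# Look up a model on the Model-Hub and download it into the local cache;
# the revision here is an assumed example of pinning a specific model version.
model_dir = snapshot_download('damo/nlp_structbert_word-segmentation_chinese-base',
                              revision='v1.0.0')
print(model_dir)  # path of the locally cached model files

# Load a dataset from the Dataset-Hub; the dataset name is likewise an
# illustrative assumption, and downloads are cached the same way.
train_split = MsDataset.load('chinese-poetry-collection', split='train')
```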
# Models and Online Demos
ModelScope has open-sourced more than 600 models covering NLP, CV, Audio, Multi-modality, AI for Science, and more, including hundreds of SOTA models. Users can enter the ModelScope model hub for a zero-threshold online experience, or try the models in a cloud development environment.
Here are some examples:
Hundreds of models are made publicly available on ModelScope (600+ and counting), covering the latest developments in areas such as NLP, CV, Audio, Multi-modality, and AI for Science. Many of these models represent the SOTA in their respective fields and made their open-source debut on ModelScope. Users can visit ModelScope ([modelscope.cn](http://www.modelscope.cn)) and experience first-hand how these models perform, with just a few clicks. An immediate developer experience is also possible through the ModelScope Notebook, which is backed by a ready-to-use cloud CPU/GPU development environment and is only a click away on the ModelScope website.
Some of the representative examples include:
NLP:
@@ -113,7 +114,8 @@ AI for Science:
We provide a unified interface for inference using `pipeline`, and for finetuning and evaluation using `Trainer`, across different tasks.
For any task with any type of input (image, text, audio, video...), you need only 3 lines of code to load the model and get the inference result, as follows:
For any given task with any type of input (image, text, audio, video...), an inference pipeline can be set up with only a few lines of code; it automatically loads the associated model and returns the inference result, as exemplified below:
```python
>>> from modelscope.pipelines import pipeline
>>> word_segmentation = pipeline('word-segmentation', model='damo/nlp_structbert_word-segmentation_chinese-base')
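>>> # Hedged continuation (not shown in this diff hunk): calling the pipeline
>>> # object on raw text runs inference directly; the sample sentence and the
>>> # exact output format are illustrative assumptions.
>>> print(word_segmentation('今天天气不错，适合出去游玩'))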
@@ -139,6 +141,8 @@ The output image is
For finetuning and evaluation, you need about ten more lines of code to construct the dataset and trainer; by calling `trainer.train()` and
`trainer.evaluate()` you can finish finetuning and evaluating a model.
For example, we load the Chinese poetry dataset and use it to finetune the gpt3 1.3B model; the resulting model can then be used for poetry generation.
```python
>>> from modelscope.metainfo import Trainers
>>> from modelscope.msdatasets import MsDataset
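>>> # Hedged sketch (not part of this diff hunk) of how the example typically
>>> # continues; the dataset name, model id and trainer key below are
>>> # illustrative assumptions, and the dataset may need extra preprocessing
>>> # (e.g. column renaming) depending on its schema.
>>> from modelscope.trainers import build_trainer
>>> train_dataset = MsDataset.load('chinese-poetry-collection', split='train')
>>> eval_dataset = MsDataset.load('chinese-poetry-collection', split='test')
>>> kwargs = dict(model='damo/nlp_gpt3_text-generation_1.3B', train_dataset=train_dataset,
...               eval_dataset=eval_dataset, max_epochs=10, work_dir='./gpt3_poetry')
>>> trainer = build_trainer(name=Trainers.gpt3_trainer, default_args=kwargs)
>>> trainer.train()
>>> trainer.evaluate()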

View File

@@ -37,7 +37,13 @@ The ModelScope Library provides model contributors with the necessary layered APIs so that
Besides housing implementations of various models, the ModelScope Library also supports the necessary interactions with ModelScope backend services, in particular with the Model-Hub and Dataset-Hub. These interactions allow the management of models and datasets to be performed seamlessly under the hood, including model and dataset lookup, version control, cache management, and more.
# Selected Models and Online Demos
ModelScope has open-sourced more than 600 models covering natural language processing, computer vision, speech, multi-modality, scientific computing, and more, including hundreds of SOTA models. Users can go to the ModelScope model hub for a zero-threshold online experience, or try the models via Notebook.
ModelScope has open-sourced hundreds of models (600+ and counting) covering natural language processing, computer vision, speech, multi-modality, scientific computing, and more, including hundreds of SOTA models. Users can go to the model hub on the ModelScope website ([modelscope.cn](http://www.modelscope.cn)) for a zero-threshold online experience, or try the models via Notebook.
<p align="center">
<br>
<img src="https://modelscope.oss-cn-beijing.aliyuncs.com/resource/inference.gif"/>
<br>
</p>
Some examples are shown below:
@@ -136,8 +142,9 @@ ModelScope has open-sourced more than 600 models, covering natural language processing, computer vision
The output image is as follows
![image](https://resouces.modelscope.cn/document/docdata/2023-2-16_20:53/dist/ModelScope%20Library%E6%95%99%E7%A8%8B/resources/1656989768092-5470f8ac-cda8-4703-ac98-dbb6fd675b34.png)
To finetune and evaluate a model, you only need a dozen or so lines of code to construct the dataset and trainer, and then call `trainer.train()` and `trainer.evaluate()`.
To finetune and evaluate a model, you only need a dozen or so lines of code to construct the dataset and trainer, and then call `trainer.train()` and `trainer.evaluate()`
For example, we load the Chinese poetry dataset and use it to finetune the gpt3 1.3B model, which produces a model for classical poetry generation.
```python
>>> from modelscope.metainfo import Trainers
>>> from modelscope.msdatasets import MsDataset