diff --git a/README.md b/README.md
index dd6d3350..17745838 100644
--- a/README.md
+++ b/README.md
@@ -18,6 +18,10 @@
+
 English |
@@ -51,35 +55,36 @@ Hundreds of models are made publicly available on [ModelScope]( https://www.mode
 Some representative examples include:
 
-NLP:
+LLM:
 
-* [ChatGLM3-6B](https://modelscope.cn/models/ZhipuAI/chatglm3-6b/summary)
+* [Yi-1.5-34B-Chat](https://modelscope.cn/models/01ai/Yi-1.5-34B-Chat/summary)
 
-* [Qwen-14B-Chat](https://modelscope.cn/models/qwen/Qwen-14B-Chat/summary)
+* [Qwen1.5-110B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-110B-Chat/summary)
 
-* [Baichuan2-13B-Chat](https://modelscope.cn/models/baichuan-inc/Baichuan2-13B-Chat/summary)
+* [DeepSeek-V2-Chat](https://modelscope.cn/models/deepseek-ai/DeepSeek-V2-Chat/summary)
 
 * [Ziya2-13B-Chat](https://modelscope.cn/models/Fengshenbang/Ziya2-13B-Chat/summary)
 
-* [Internlm-chat-20b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm-chat-20b/summary)
+* [Meta-Llama-3-8B-Instruct](https://modelscope.cn/models/LLM-Research/Meta-Llama-3-8B-Instruct/summary)
 
-* [Udever Multilingual Universal Text Representation Model 1b1](https://modelscope.cn/models/damo/udever-bloom-1b1/summary)
+* [Phi-3-mini-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-mini-128k-instruct/summary)
 
-* [CoROM Text Vector - Chinese - E-commerce Domain - Base](https://modelscope.cn/models/damo/nlp_corom_sentence-embedding_chinese-base-ecom/summary)
-
-* [MGeo Address Similarity Matching Entity Alignment - Chinese - Address Field - Base](https://modelscope.cn/models/damo/mgeo_geographic_entity_alignment_chinese_base/summary)
 
 Multi-Modal:
 
 * [Qwen-VL-Chat](https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary)
 
-* [CogVLM](https://modelscope.cn/models/ZhipuAI/CogVLM/summary)
+* [Yi-VL-6B](https://modelscope.cn/models/01ai/Yi-VL-6B/summary)
 
-* [Text-to-Video Synthesis Large Model - English - General Domain](https://modelscope.cn/models/damo/text-to-video-synthesis/summary)
+* [InternVL-Chat-V1-5](https://modelscope.cn/models/AI-ModelScope/InternVL-Chat-V1-5/summary)
 
-* [I2VGen-XL High Definition Image to Video Large Model](https://modelscope.cn/models/damo/Image-to-Video/summary)
+* [deepseek-vl-7b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-vl-7b-chat/summary)
 
-* [I2VGen-XL High Definition Video to Video Large Model](https://modelscope.cn/models/damo/Video-to-Video/summary)
+* [OpenSoraPlan](https://modelscope.cn/models/AI-ModelScope/Open-Sora-Plan-v1.0.0/summary)
+
+* [OpenSora](https://modelscope.cn/models/luchentech/OpenSora-STDiT-v1-HQ-16x512x512/summary)
+
+* [I2VGen-XL](https://modelscope.cn/models/iic/i2vgen-xl/summary)
 
 CV:
diff --git a/README_ja.md b/README_ja.md
index 4523add4..e058e231 100644
--- a/README_ja.md
+++ b/README_ja.md
@@ -18,6 +18,10 @@
+
 English |
@@ -51,33 +55,36 @@ ModelScope ライブラリは、様々なモデルの実装を保持するだけ
 代表的な例をいくつか挙げると:
 
-NLP:
+大きなモデル:
 
-* [nlp_gpt3_text-generation_2.7B](https://modelscope.cn/models/damo/nlp_gpt3_text-generation_2.7B)
+* [Yi-1.5-34B-Chat](https://modelscope.cn/models/01ai/Yi-1.5-34B-Chat/summary)
 
-* [ChatYuan-large](https://modelscope.cn/models/ClueAI/ChatYuan-large)
+* [Qwen1.5-110B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-110B-Chat/summary)
 
-* [mengzi-t5-base](https://modelscope.cn/models/langboat/mengzi-t5-base)
+* [DeepSeek-V2-Chat](https://modelscope.cn/models/deepseek-ai/DeepSeek-V2-Chat/summary)
 
-* [nlp_csanmt_translation_en2zh](https://modelscope.cn/models/damo/nlp_csanmt_translation_en2zh)
+* [Ziya2-13B-Chat](https://modelscope.cn/models/Fengshenbang/Ziya2-13B-Chat/summary)
 
-* [nlp_raner_named-entity-recognition_chinese-base-news](https://modelscope.cn/models/damo/nlp_raner_named-entity-recognition_chinese-base-news)
+* [Meta-Llama-3-8B-Instruct](https://modelscope.cn/models/LLM-Research/Meta-Llama-3-8B-Instruct/summary)
 
-* [nlp_structbert_word-segmentation_chinese-base](https://modelscope.cn/models/damo/nlp_structbert_word-segmentation_chinese-base)
+* [Phi-3-mini-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-mini-128k-instruct/summary)
 
-* [Erlangshen-RoBERTa-330M-Sentiment](https://modelscope.cn/models/fengshenbang/Erlangshen-RoBERTa-330M-Sentiment)
-
-* [nlp_convai_text2sql_pretrain_cn](https://modelscope.cn/models/damo/nlp_convai_text2sql_pretrain_cn)
 
 マルチモーダル:
 
-* [multi-modal_clip-vit-base-patch16_zh](https://modelscope.cn/models/damo/multi-modal_clip-vit-base-patch16_zh)
+* [Qwen-VL-Chat](https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary)
 
-* [ofa_pretrain_base_zh](https://modelscope.cn/models/damo/ofa_pretrain_base_zh)
+* [Yi-VL-6B](https://modelscope.cn/models/01ai/Yi-VL-6B/summary)
 
-* [Taiyi-Stable-Diffusion-1B-Chinese-v0.1](https://modelscope.cn/models/fengshenbang/Taiyi-Stable-Diffusion-1B-Chinese-v0.1)
+* [InternVL-Chat-V1-5](https://modelscope.cn/models/AI-ModelScope/InternVL-Chat-V1-5/summary)
 
-* [mplug_visual-question-answering_coco_large_en](https://modelscope.cn/models/damo/mplug_visual-question-answering_coco_large_en)
+* [deepseek-vl-7b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-vl-7b-chat/summary)
+
+* [OpenSoraPlan](https://modelscope.cn/models/AI-ModelScope/Open-Sora-Plan-v1.0.0/summary)
+
+* [OpenSora](https://modelscope.cn/models/luchentech/OpenSora-STDiT-v1-HQ-16x512x512/summary)
+
+* [I2VGen-XL](https://modelscope.cn/models/iic/i2vgen-xl/summary)
 
 CV:
diff --git a/README_zh.md b/README_zh.md
index 6d5ff426..220ed9fd 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -18,6 +18,10 @@
+
 English |
@@ -50,36 +54,36 @@ ModelScope开源了数百个(当前700+)模型,涵盖自然语言处理、计
 示例如下:
 
-自然语言处理:
+大模型:
 
-* [ChatGLM3-6B](https://modelscope.cn/models/ZhipuAI/chatglm3-6b/summary)
+* [Yi-1.5-34B-Chat](https://modelscope.cn/models/01ai/Yi-1.5-34B-Chat/summary)
 
-* [Qwen-14B-Chat](https://modelscope.cn/models/qwen/Qwen-14B-Chat/summary)
+* [Qwen1.5-110B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-110B-Chat/summary)
 
-* [Baichuan2-13B-Chat](https://modelscope.cn/models/baichuan-inc/Baichuan2-13B-Chat/summary)
+* [DeepSeek-V2-Chat](https://modelscope.cn/models/deepseek-ai/DeepSeek-V2-Chat/summary)
 
 * [Ziya2-13B-Chat](https://modelscope.cn/models/Fengshenbang/Ziya2-13B-Chat/summary)
 
-* [Internlm-chat-20b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm-chat-20b/summary)
+* [Meta-Llama-3-8B-Instruct](https://modelscope.cn/models/LLM-Research/Meta-Llama-3-8B-Instruct/summary)
 
-* [Udever-bloom-1b1](https://modelscope.cn/models/damo/udever-bloom-1b1/summary)
+* [Phi-3-mini-128k-instruct](https://modelscope.cn/models/LLM-Research/Phi-3-mini-128k-instruct/summary)
 
-* [CoROM文本向量-中文-电商领域-base](https://modelscope.cn/models/damo/nlp_corom_sentence-embedding_chinese-base-ecom/summary)
-
-* [MGeo地址相似度匹配实体对齐-中文-地址领域-base](https://modelscope.cn/models/damo/mgeo_geographic_entity_alignment_chinese_base/summary)
 
 多模态:
 
 * [Qwen-VL-Chat](https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary)
 
-* [CogVLM](https://modelscope.cn/models/ZhipuAI/CogVLM/summary)
+* [Yi-VL-6B](https://modelscope.cn/models/01ai/Yi-VL-6B/summary)
 
-* [Text-to-Video Synthesis Large Model - English - General Domain](https://modelscope.cn/models/damo/text-to-video-synthesis/summary)
+* [InternVL-Chat-V1-5](https://modelscope.cn/models/AI-ModelScope/InternVL-Chat-V1-5/summary)
 
-* [I2VGen-XL高清图片到视频大模型](https://modelscope.cn/models/damo/Image-to-Video/summary)
+* [deepseek-vl-7b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-vl-7b-chat/summary)
 
-* [I2VGen-XL高清视频到视频大模型](https://modelscope.cn/models/damo/Video-to-Video/summary)
+* [OpenSoraPlan](https://modelscope.cn/models/AI-ModelScope/Open-Sora-Plan-v1.0.0/summary)
+
+* [OpenSora](https://modelscope.cn/models/luchentech/OpenSora-STDiT-v1-HQ-16x512x512/summary)
+
+* [I2VGen-XL](https://modelscope.cn/models/iic/i2vgen-xl/summary)
 
 计算机视觉: