modelscope/modelscope/models/nlp/structbert/configuration_sbert.py
yuze.zyz 21fa71baf0 [to #42322933] add/refactor nlp models source code and finetune
1. add sbert,veco,palm,space source code
2. support sbert sequence classification, token classification finetune
3. support veco sequence classification finetune
4. support palm nlg finetune
evaluation result: https://sheet.alibaba-inc.com/#/sheet/f7fdcc7f22bd5105 sheet:Maas
5. add ut for finetunes
6. add veco's taskdataset processor
7. add a common trainer for nlp, and a specific trainer for veco
8. merge some duplicate codes of models, preprocessors, pipelines
        Link: https://code.alibaba-inc.com/Ali-MaaS/MaaS-lib/codereview/9574105

    * add basic class of hook&metrics

* pre-commit passed

* change some comments

* pre-commit passed

* 1. remove accuracy's groups 2. remove useless hooks 3. simplify priorities

* pre-commit passed

* fix a comment

* Merge branch 'master' into finetune_hooks_metrics

# Conflicts:
#	modelscope/metainfo.py

* pre-commit passed

* Merge branch 'feat/finetune' of gitlab.alibaba-inc.com:Ali-MaaS/MaaS-lib into feat/finetune

* mv hooks related to modelscope/trainers/hooks

* mv priority back

* add torch model base and test

* update hooks, trainer, import_util

* add torch epoch based trainer and dist utils

* add hooks

* fix warmup

* format code style, fix warmup, and add warmup unittest

* fix impls

* pre-commit check passed

* update hook and add EpochBasedTrainer

* add trainer unittest

* Merge branch 'feat/add_hooks' into feat/add_task

# Conflicts:
#	modelscope/models/base_torch.py
#	modelscope/trainers/hooks/hook.py
#	modelscope/trainers/trainer.py

* update unittest name

* rewrite taskdataset to trainer

* fix trainer and add unittest

* add unittest

* code: run to forward

* run through... but ugly code

* arrange some cls

* fix some errs

* revert some mistakes

* init check in

* Merge branch 'feat/add_hooks' into feat/add_task

# Conflicts:
#	modelscope/trainers/trainer.py

* test with bigger epoch and size

* add the default metrics class

* move build metrics code to a method

* merge add_task

* merge origin add_task

* add device initialization

* remove preprocessor arg for bool

* add task models

* move metric collect logic to metrics class

* pre-commit passed

* fix cr comments

* pre-commit passed

* add task models

* Merge remote-tracking branch 'origin/feat/add_task' into feat/backbone_head

* add comment

* change comment formats.

* fix comments

* fix ut bug

* fix comments

* add wrapper check

* fix comments

* pre-commit passed

* fix cr comments

* solve a loop import problem

* fix ut bug

* fix ut errors

* change dummydataset to msdataset

* pre-commit passed

* merge add task

* backbone-head is built, model is not correctly loaded

* model load states matched

* result matched

* lint

* add veco/palm_v2 code

* merge master

* merge master success running

* add repr model name level

* Merge branch 'feat/veco_palm' into feat/finetune_sbert_veco

* model test for training

* add token-classification metric, add formal ut

* fix running bug

* finetune and pipeline are working with backbone-head

* add nli

* add missing code

* finetune and pipeline are working with backbone-head

* Merge branch 'feat/backbone_head' of http://gitlab.alibaba-inc.com/Ali-MaaS/MaaS-lib into feat/backbone_head

* add a test repo for pr

* remove merge conflicted file

* remove merge conflicted file 1

* lint check

* import error

* none type bug fix

* forward input unpacking or dict bug

* move head into models, add build_backbone with registry, no base method

* merge master

* feat: 1. add interleave dataset method 2. support multiple datasets in trainer.build_dataset 3. support 3 sub-tasks in sequence_classification task

* unfinished

* update the task model structure in NLP field

* merge master

* update by comments

* keep the default model id as current on production

* unfinished

* unfinished

* veco can run

* Merge remote-tracking branch 'origin/master' into feat/backbone_head

* add taskmodel for module management

* remove forward_input_is_dict

* unfinished

* token classification started

* update base model structure

* move space to backbone

* remove 'type' in build_from_cfg method

* test update

* bug fix

* on testing, messy code

* Merge branch 'feat/backbone_head' into feat/refactor_nlp_730

# Conflicts:
#	modelscope/metrics/builder.py
#	modelscope/models/__init__.py
#	modelscope/models/nlp/__init__.py
#	modelscope/preprocessors/nlp.py
#	modelscope/trainers/trainer.py
#	requirements/multi-modal.txt

* add missing merge

* add sofa source code

* refactor

* add veco task dataset

* pre-commit passed

* fix bug of log

* add some features

* merge master

* bug fix

* refine nlp models

* fix the training error

* unfinished

* refactor pipeline

* Merge branch 'feat/backbone_head' into feat/refactor_nlp_730

# Conflicts:
#	modelscope/metrics/builder.py
#	modelscope/models/nlp/__init__.py
#	modelscope/models/nlp/backbones/structbert/modeling_sbert.py
#	modelscope/models/nlp/palm_v2/palm_for_text_generation.py
#	modelscope/preprocessors/base.py
#	modelscope/preprocessors/nlp.py
#	modelscope/trainers/trainer.py

* Merge commit 'ab04ceafc5453ce7daa9aa09e37a55f703072a10' into feat/refactor_nlp_730

# Conflicts:
#	modelscope/metainfo.py
#	modelscope/metrics/builder.py
#	modelscope/models/__init__.py
#	modelscope/models/base/base_torch_model.py
#	modelscope/models/nlp/__init__.py
#	modelscope/models/nlp/backbones/space/model/intent_unified_transformer.py
#	modelscope/models/nlp/backbones/space/model/model_base.py
#	modelscope/models/nlp/palm_v2/palm_for_text_generation.py
#	modelscope/models/nlp/sbert_for_sequence_classification.py
#	modelscope/models/nlp/sequence_classification.py
#	modelscope/models/nlp/space/__init__.py
#	modelscope/models/nlp/space_for_dialog_intent_prediction.py
#	modelscope/models/nlp/space_for_dialog_modeling.py
#	modelscope/models/nlp/space_for_dialog_state_tracking.py
#	modelscope/models/nlp/task_model.py
#	modelscope/pipelines/nlp/sentiment_classification_pipeline.py
#	modelscope/preprocessors/base.py
#	modelscope/preprocessors/nlp.py
#	modelscope/trainers/trainer.py

* revert changes

* unify sentence classification postprocess

* revert some changes, move some model files

* pipeline first case run through

* ws pipeline passed

* Merge branch 'feat/refactor_nlp_730' into feat/finetune_sbert_veco

* finetune

* revert code

* revert some code

* ws finetune started, only the accuracy is weird

* Merge branch 'feat/veco_taskdataset' into feat/finetune_sbert_veco

# Conflicts:
#	modelscope/task_datasets/veco_dataset.py
#	tests/taskdataset/test_veco_dataset.py

* veco+nli finetune started

* Merge branch 'master' into feat/finetune_sbert_veco

# Conflicts:
#	modelscope/models/nlp/sbert_for_sequence_classification.py
#	modelscope/models/nlp/sbert_for_token_classification.py
#	modelscope/models/nlp/sbert_for_zero_shot_classification.py
#	modelscope/models/nlp/space/space_for_dialog_intent_prediction.py
#	modelscope/models/nlp/space/space_for_dialog_modeling.py
#	modelscope/trainers/trainer.py

* add trainer for nlp

* trainer: dataset params passed into preprocessor

* test passed by nlptrainer

* fix some bugs

* fix some bugs

* add backbone/head subclass

* fix regression bugs

* fix bug in token-cls finetune

* support cfg modification

* fix bug

* fix bug

* update requirements

* add some comments and fix some t

* add some comments and revert an argument

* split to two test files

* revert code

* fix bug in preprocessor

(cherry picked from commit 7a648d096ef8500c694d3255dabe29e6f4bfc3e5)

* fix ut bug

* support sbert models

* unfinished

* Merge branch 'feat/finetune_sbert_veco' into sly_tmp_veco_finetune

# Conflicts:
#	tests/trainers/test_finetune_sequence_classification.py

* fix bug in veco

* fix bug

* fix bug

* correct running params

* remove useless files

* add palm finetuning with cnn_dailymail dataset

* copy space model from sofa

* Merge branch 'feat/finetune_sbert_veco' of gitlab.alibaba-inc.com:Ali-MaaS/MaaS-lib into feat/finetune_sbert_veco

* Merge branch 'master' into feat/finetune_sbert_veco

# Conflicts:
#	modelscope/metrics/__init__.py
#	modelscope/models/__init__.py
#	modelscope/models/nlp/__init__.py
#	modelscope/models/nlp/backbones/__init__.py
#	modelscope/models/nlp/backbones/structbert/modeling_sbert.py
#	modelscope/models/nlp/heads/__init__.py
#	modelscope/models/nlp/masked_language.py
#	modelscope/models/nlp/palm_v2/palm_for_text_generation.py
#	modelscope/models/nlp/sbert_for_nli.py
#	modelscope/models/nlp/sbert_for_sentence_similarity.py
#	modelscope/models/nlp/sbert_for_sentiment_classification.py
#	modelscope/models/nlp/sbert_for_sequence_classification.py
#	modelscope/models/nlp/sbert_for_token_classification.py
#	modelscope/models/nlp/sbert_for_zero_shot_classification.py
#	modelscope/models/nlp/sequence_classification.py
#	modelscope/models/nlp/space/space_for_dialog_intent_prediction.py
#	modelscope/models/nlp/space/space_for_dialog_modeling.py
#	modelscope/models/nlp/space/space_for_dialog_state_tracking.py
#	modelscope/models/nlp/structbert/adv_utils.py
#	modelscope/models/nlp/structbert/configuration_sbert.py
#	modelscope/models/nlp/task_models/task_model.py
#	modelscope/pipelines/__init__.py
#	modelscope/pipelines/nlp/__init__.py
#	modelscope/pipelines/nlp/fill_mask_pipeline.py
#	modelscope/pipelines/nlp/named_entity_recognition_pipeline.py
#	modelscope/pipelines/nlp/nli_pipeline.py
#	modelscope/pipelines/nlp/sentence_similarity_pipeline.py
#	modelscope/pipelines/nlp/sentiment_classification_pipeline.py
#	modelscope/pipelines/nlp/text_generation_pipeline.py
#	modelscope/pipelines/nlp/word_segmentation_pipeline.py
#	modelscope/pipelines/nlp/zero_shot_classification_pipeline.py
#	modelscope/preprocessors/nlp.py
#	modelscope/task_datasets/__init__.py
#	modelscope/trainers/trainer.py
#	modelscope/trainers/utils/inference.py
#	modelscope/utils/file_utils.py
#	requirements/nlp.txt
#	tests/pipelines/test_nli.py
#	tests/pipelines/test_sentence_similarity.py
#	tests/pipelines/test_sentiment_classification.py

* fix imports

* mark backbone in their own modeling

* pre-commit check passed

* pre-commit passed, remove roberta model

* fix a bug in ast import

* skip all finetune uts

* fix bugs

* pre-commit passed

* bug fixed

* bug fixed

* bug fixed

* bug fixed

* fix ut bug

* fix bug

* fix ut bug

* fix bug

* fix bug

* fix bugs

* fix bug

* revert veco

* revert veco because of core dump

* fix palm bug

* revert veco

* revert mistaken code

* add a test print

* pre-commit check

* test exception

* add test code

* for test

* fix bug and test

* remove test code

* remove useless file

* 1. fix some bugs 2. add backbone ut

* Merge branch 'master' into feat/finetune_refactor_730

# Conflicts:
#	modelscope/metainfo.py
#	modelscope/metrics/sequence_classification_metric.py
#	modelscope/models/nlp/__init__.py
#	modelscope/models/nlp/task_models/task_model.py
#	modelscope/preprocessors/__init__.py
#	modelscope/preprocessors/nlp.py
#	modelscope/trainers/trainer.py
#	modelscope/trainers/utils/inference.py
#	modelscope/utils/file_utils.py
#	tests/trainers/test_trainer_with_nlp.py

* pre-commit passed

* revert files

* increase test level

* unregister models

* fix bugs

* fix cr comments

* fix bug in backbone-head

* add sbert backbone

* fix bug

* add test for token-cls-metric

* pre-commit passed

* fix ut comments

* revert normal tokenizer to fast tokenizer

* Merge branch 'master' into feat/finetune_refactor_730

# Conflicts:
#	modelscope/models/nlp/__init__.py
#	modelscope/models/nlp/backbones/__init__.py
#	modelscope/models/nlp/backbones/structbert/__init__.py
#	modelscope/models/nlp/masked_language.py
#	modelscope/models/nlp/palm_v2/palm_for_text_generation.py
#	modelscope/models/nlp/sbert_for_sequence_classification.py
#	modelscope/models/nlp/sbert_for_token_classification.py
#	modelscope/models/nlp/sbert_for_zero_shot_classification.py
#	modelscope/pipelines/nlp/text_generation_pipeline.py
#	modelscope/preprocessors/nlp.py
#	modelscope/trainers/trainer.py
#	modelscope/trainers/utils/inference.py

* fix merge bugs

* pre-commit passed

* fix bug

* fix bug

* fix bug

* fix bug from master

* add print

* fix ut bug

* fix bug

* Merge branch 'master' into feat/finetune_refactor_730

* skip task model test
2022-08-03 18:38:41 +08:00


# Copyright 2021-2022 The Alibaba DAMO NLP Team Authors.
# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" SBERT model configuration, mainly copied from :class:`~transformers.BertConfig` """
from transformers import PretrainedConfig
from modelscope.utils import logger as logging
logger = logging.get_logger(__name__)
class SbertConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration
of a :class:`~modelscope.models.nlp.structbert.SbertModel`.
It is used to instantiate a SBERT model according to the specified arguments.
Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model
outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information.
Args:
vocab_size (:obj:`int`, `optional`, defaults to 30522):
Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the
:obj:`inputs_ids` passed when calling :class:`~transformers.BertModel` or
:class:`~transformers.TFBertModel`.
hidden_size (:obj:`int`, `optional`, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (:obj:`int`, `optional`, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (:obj:`int`, `optional`, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (:obj:`int`, `optional`, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (:obj:`str` or :obj:`Callable`, `optional`, defaults to :obj:`"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string,
:obj:`"gelu"`, :obj:`"relu"`, :obj:`"silu"` and :obj:`"gelu_new"` are supported.
hidden_dropout_prob (:obj:`float`, `optional`, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (:obj:`float`, `optional`, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (:obj:`int`, `optional`, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (:obj:`int`, `optional`, defaults to 2):
The vocabulary size of the :obj:`token_type_ids` passed when calling :class:`~transformers.BertModel` or
:class:`~transformers.TFBertModel`.
initializer_range (:obj:`float`, `optional`, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (:obj:`float`, `optional`, defaults to 1e-12):
The epsilon used by the layer normalization layers.
position_embedding_type (:obj:`str`, `optional`, defaults to :obj:`"absolute"`):
Type of position embedding. Choose one of :obj:`"absolute"`, :obj:`"relative_key"`,
:obj:`"relative_key_query"`. For positional embeddings use :obj:`"absolute"`. For more information on
:obj:`"relative_key"`, please refer to `Self-Attention with Relative Position Representations (Shaw et al.)
<https://arxiv.org/abs/1803.02155>`__. For more information on :obj:`"relative_key_query"`, please refer to
`Method 4` in `Improve Transformer Models with Better Relative Position Embeddings (Huang et al.)
<https://arxiv.org/abs/2009.13658>`__.
use_cache (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if ``config.is_decoder=True``.
classifier_dropout (:obj:`float`, `optional`):
The dropout ratio for the classification head.
        adv_grad_factor (:obj:`float`, `optional`):
            This factor will be multiplied by the KL loss gradient, and the result will be added to the original
            embedding. For more details please check: https://arxiv.org/abs/1908.04577
            The value of this factor typically lies in the range 1e-3 to 1e-7.
        adv_bound (:obj:`float`, `optional`):
            adv_bound is used to clip the upper and lower bounds of the produced embedding.
            If not provided, 2 * sigma will be used as the adv_bound factor.
        sigma (:obj:`float`, `optional`):
            The std factor used to produce a zero-mean normal distribution.
            If adv_bound is not provided, 2 * sigma will be used as the adv_bound factor.
    """
    model_type = 'sbert'

    def __init__(self,
                 vocab_size=30522,
                 hidden_size=768,
                 num_hidden_layers=12,
                 num_attention_heads=12,
                 intermediate_size=3072,
                 hidden_act='gelu',
                 hidden_dropout_prob=0.1,
                 attention_probs_dropout_prob=0.1,
                 max_position_embeddings=512,
                 type_vocab_size=2,
                 initializer_range=0.02,
                 layer_norm_eps=1e-12,
                 pad_token_id=0,
                 position_embedding_type='absolute',
                 use_cache=True,
                 classifier_dropout=None,
                 **kwargs):
        super().__init__(pad_token_id=pad_token_id, **kwargs)

        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.hidden_act = hidden_act
        self.intermediate_size = intermediate_size
        self.hidden_dropout_prob = hidden_dropout_prob
        self.attention_probs_dropout_prob = attention_probs_dropout_prob
        self.max_position_embeddings = max_position_embeddings
        self.type_vocab_size = type_vocab_size
        self.initializer_range = initializer_range
        self.layer_norm_eps = layer_norm_eps
        self.position_embedding_type = position_embedding_type
        self.use_cache = use_cache
        self.classifier_dropout = classifier_dropout
        # adv_grad_factor, used in the adv loss.
        # Users can check adv_utils.py for details.
        # If adv_grad_factor is set to None, no adv loss will be applied to the model.
        self.adv_grad_factor = kwargs.get('adv_grad_factor', 5e-5)
        # sigma value, used in the adv loss.
        self.sigma = kwargs.get('sigma', 5e-6)
        # adv_bound value, used in the adv loss.
        self.adv_bound = kwargs.get('adv_bound', 2 * self.sigma)
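
For reference, a minimal usage sketch, assuming a modelscope checkout in which this module is importable at the path below; the specific values are illustrative, not recommended settings:

from modelscope.models.nlp.structbert.configuration_sbert import SbertConfig

# Defaults mirror bert-base: 12 layers, 12 attention heads, 768 hidden units.
config = SbertConfig()

# Extra kwargs are forwarded to PretrainedConfig and also drive the
# adversarial-loss attributes documented above.
config = SbertConfig(
    num_hidden_layers=6,
    adv_grad_factor=1e-4,  # scales the KL-loss gradient added to the embedding
    sigma=1e-5)  # std of the zero-mean normal distribution

assert config.adv_bound == 2 * config.sigma  # default when adv_bound is not given

# Per the comment in __init__, passing adv_grad_factor=None disables the adv loss.
config = SbertConfig(adv_grad_factor=None)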