modelscope/modelscope/trainers/hooks/lr_scheduler_hook.py
yuze.zyz 21fa71baf0 [to #42322933] add/refactor nlp models source code and finetune
1. add sbert,veco,palm,space source code
2. support sbert sequence classification, token classification finetune
3. support veco sequence classification finetune
4. support palm nlg finetune
evaluation result: https://sheet.alibaba-inc.com/#/sheet/f7fdcc7f22bd5105 sheet:Maas
5. add ut for finetunes
6. add veco's taskdataset processor
7. add a common trainer for nlp, and a specific trainer for veco
8. merge duplicated code across models, preprocessors and pipelines
        Link: https://code.alibaba-inc.com/Ali-MaaS/MaaS-lib/codereview/9574105

    * add basic class of hook&metrics

* pre-commit passed

* change some comments

* pre-commit passed

* 1. remove accuracy's groups 2. remove useless hooks 3. simplify priorities

* pre-commit passed

* fix a comment

* Merge branch 'master' into finetune_hooks_metrics

# Conflicts:
#	modelscope/metainfo.py

* pre-commit passed

* Merge branch 'feat/finetune' of gitlab.alibaba-inc.com:Ali-MaaS/MaaS-lib into feat/finetune

* mv hooks related to modelscope/trainers/hooks

* mv priority back

* add torch model base and test

* update hooks, trainer, import_util

* add torch epoch based trainer and dis utils

* add hooks

* fix warmup

* format code style, fix warmup and add warmup unittest

* fix impls

* pre-commit check passed

* update hook and add EpochBasedTrainer

* add trainer unittest

* Merge branch 'feat/add_hooks' into feat/add_task

# Conflicts:
#	modelscope/models/base_torch.py
#	modelscope/trainers/hooks/hook.py
#	modelscope/trainers/trainer.py

* update unittest name

* rewrite taskdataset to trainer

* fix trainer and add unittest

* add unittest

* code: run to forward

* run through... but ugly code

* arrange some cls

* fix some errs

* revert some mistakes

* init check in

* Merge branch 'feat/add_hooks' into feat/add_task

# Conflicts:
#	modelscope/trainers/trainer.py

* test with bigger epoch and size

* add the default metrics class

* move build metrics code to a method

* merge add_task

* merge origin add_task

* add device initialization

* remove preprocessor arg for bool

* add task models

* move metric collect logic to metrics class

* pre-commit passed

* fix cr comments

* pre-commit passed

* add task models

* Merge remote-tracking branch 'origin/feat/add_task' into feat/backbone_head

* add comment

* change comment formats.

* fix comments

* fix ut bug

* fix comments

* add wrapper check

* fix comments

* pre-commit passed

* fix cr comments

* solve a loop import problem

* fix ut bug

* fix ut errors

* change dummydataset to msdataset

* pre-commit passed

* merge add task

* backbone-head is built, but the model is not correctly loaded

* model load states matched

* result matched

* lint

* add veco/palm_v2 code

* merge master

* merge master success running

* add repr model name level

* Merge branch 'feat/veco_palm' into feat/finetune_sbert_veco

* model test for training

* add token-classification metric add formal ut

* fix running bug

* finetune and pipeline are working with backbone-head

* add nli

* add missing code

* finetune and pipeline are working with backbone-head

* Merge branch 'feat/backbone_head' of http://gitlab.alibaba-inc.com/Ali-MaaS/MaaS-lib into feat/backbone_head

* add a test repo for pr

* remove merge conflicted file

* remove merge conflicted file 1

* lint check

* import error

* none type bug fix

* forward input unpacking or dict bug

* move head into models, add build_backbone with registry, no base method

* merge master

* feat: 1. add interleave dataset method 2. support multiple datasets in trainer.build_dataset 3. support 3 sub-tasks in the sequence_classification task

* unfinished

* update the task model structure in NLP field

* merge master

* update by comments

* keep the default model id as current on production

* unfinished

* unfinished

* veco can run

* Merge remote-tracking branch 'origin/master' into feat/backbone_head

* add taskmodel for module management

* remove forward_input_is_dict

* unfinished

* token classification started

* update base model structure

* move space to backbone

* remove 'type' in build_from_cfg method

* test update

* bug fix

* on testing, messy code

* Merge branch 'feat/backbone_head' into feat/refactor_nlp_730

# Conflicts:
#	modelscope/metrics/builder.py
#	modelscope/models/__init__.py
#	modelscope/models/nlp/__init__.py
#	modelscope/preprocessors/nlp.py
#	modelscope/trainers/trainer.py
#	requirements/multi-modal.txt

* add missing merge

* add sofa source code

* refactor

* add veco task dataset

* add veco task dataset

* pre-commit passed

* fix bug of log

* add some features

* merge master

* bug fix

* refine nlp models

* fix the training error

* unfinished

* refactor pipeline

* Merge branch 'feat/backbone_head' into feat/refactor_nlp_730

# Conflicts:
#	modelscope/metrics/builder.py
#	modelscope/models/nlp/__init__.py
#	modelscope/models/nlp/backbones/structbert/modeling_sbert.py
#	modelscope/models/nlp/palm_v2/palm_for_text_generation.py
#	modelscope/preprocessors/base.py
#	modelscope/preprocessors/nlp.py
#	modelscope/trainers/trainer.py

* Merge commit 'ab04ceafc5453ce7daa9aa09e37a55f703072a10' into feat/refactor_nlp_730

# Conflicts:
#	modelscope/metainfo.py
#	modelscope/metrics/builder.py
#	modelscope/models/__init__.py
#	modelscope/models/base/base_torch_model.py
#	modelscope/models/nlp/__init__.py
#	modelscope/models/nlp/backbones/space/model/intent_unified_transformer.py
#	modelscope/models/nlp/backbones/space/model/model_base.py
#	modelscope/models/nlp/palm_v2/palm_for_text_generation.py
#	modelscope/models/nlp/sbert_for_sequence_classification.py
#	modelscope/models/nlp/sequence_classification.py
#	modelscope/models/nlp/space/__init__.py
#	modelscope/models/nlp/space_for_dialog_intent_prediction.py
#	modelscope/models/nlp/space_for_dialog_modeling.py
#	modelscope/models/nlp/space_for_dialog_state_tracking.py
#	modelscope/models/nlp/task_model.py
#	modelscope/pipelines/nlp/sentiment_classification_pipeline.py
#	modelscope/preprocessors/base.py
#	modelscope/preprocessors/nlp.py
#	modelscope/trainers/trainer.py

* revert changes

* unify sentence classification postprocess

* revert some changes, move some model files

* pipeline first case run through

* ws pipeline passed

* Merge branch 'feat/refactor_nlp_730' into feat/finetune_sbert_veco

* finetune

* revert code

* revert some code

* ws finetune started, only the accuracy is weird

* Merge branch 'feat/veco_taskdataset' into feat/finetune_sbert_veco

# Conflicts:
#	modelscope/task_datasets/veco_dataset.py
#	tests/taskdataset/test_veco_dataset.py

* veco+nli finetune started

* Merge branch 'master' into feat/finetune_sbert_veco

# Conflicts:
#	modelscope/models/nlp/sbert_for_sequence_classification.py
#	modelscope/models/nlp/sbert_for_token_classification.py
#	modelscope/models/nlp/sbert_for_zero_shot_classification.py
#	modelscope/models/nlp/space/space_for_dialog_intent_prediction.py
#	modelscope/models/nlp/space/space_for_dialog_modeling.py
#	modelscope/trainers/trainer.py

* add trainer for nlp

* trainer: dataset params passed into preprocessor

* test passed by nlptrainer

* fix some bugs

* fix some bugs

* add backbone/head subclass

* fix regression bugs

* fix bug in token-cls finetune

* support cfg modification

* fix bug

* fix bug

* update requirements

* add some comments and fix some t

* add some comments and revert a argument

* split to two test files

* revert code

* fix bug in preprocessor

(cherry picked from commit 7a648d096ef8500c694d3255dabe29e6f4bfc3e5)

* fix ut bug

* support sbert models

* unfinished

* Merge branch 'feat/finetune_sbert_veco' into sly_tmp_veco_finetune

# Conflicts:
#	tests/trainers/test_finetune_sequence_classification.py

* fix bug in veco

* fix bug

* fix bug

* correct running params

* remove useless files

* add palm finetuning with cnn_dailymail dataset

* copy space model from sofa

* Merge branch 'feat/finetune_sbert_veco' of gitlab.alibaba-inc.com:Ali-MaaS/MaaS-lib into feat/finetune_sbert_veco

* Merge branch 'master' into feat/finetune_sbert_veco

# Conflicts:
#	modelscope/metrics/__init__.py
#	modelscope/models/__init__.py
#	modelscope/models/nlp/__init__.py
#	modelscope/models/nlp/backbones/__init__.py
#	modelscope/models/nlp/backbones/structbert/modeling_sbert.py
#	modelscope/models/nlp/heads/__init__.py
#	modelscope/models/nlp/masked_language.py
#	modelscope/models/nlp/palm_v2/palm_for_text_generation.py
#	modelscope/models/nlp/sbert_for_nli.py
#	modelscope/models/nlp/sbert_for_sentence_similarity.py
#	modelscope/models/nlp/sbert_for_sentiment_classification.py
#	modelscope/models/nlp/sbert_for_sequence_classification.py
#	modelscope/models/nlp/sbert_for_token_classification.py
#	modelscope/models/nlp/sbert_for_zero_shot_classification.py
#	modelscope/models/nlp/sequence_classification.py
#	modelscope/models/nlp/space/space_for_dialog_intent_prediction.py
#	modelscope/models/nlp/space/space_for_dialog_modeling.py
#	modelscope/models/nlp/space/space_for_dialog_state_tracking.py
#	modelscope/models/nlp/structbert/adv_utils.py
#	modelscope/models/nlp/structbert/configuration_sbert.py
#	modelscope/models/nlp/task_models/task_model.py
#	modelscope/pipelines/__init__.py
#	modelscope/pipelines/nlp/__init__.py
#	modelscope/pipelines/nlp/fill_mask_pipeline.py
#	modelscope/pipelines/nlp/named_entity_recognition_pipeline.py
#	modelscope/pipelines/nlp/nli_pipeline.py
#	modelscope/pipelines/nlp/sentence_similarity_pipeline.py
#	modelscope/pipelines/nlp/sentiment_classification_pipeline.py
#	modelscope/pipelines/nlp/text_generation_pipeline.py
#	modelscope/pipelines/nlp/word_segmentation_pipeline.py
#	modelscope/pipelines/nlp/zero_shot_classification_pipeline.py
#	modelscope/preprocessors/nlp.py
#	modelscope/task_datasets/__init__.py
#	modelscope/trainers/trainer.py
#	modelscope/trainers/utils/inference.py
#	modelscope/utils/file_utils.py
#	requirements/nlp.txt
#	tests/pipelines/test_nli.py
#	tests/pipelines/test_sentence_similarity.py
#	tests/pipelines/test_sentiment_classification.py

* fix imports

* mark backbone in their own modeling

* pre-commit check passed

* pre-commit passed, remove roberta model

* fix a bug in ast import

* skip all finetune uts

* fix bugs

* pre-commit passed

* bug fixed

* bug fixed

* bug fixed

* bug fixed

* fix ut bug

* fix bug

* fix ut bug

* fix bug

* fix bug

* fix bugs

* fix bug

* revert veco

* revert veco because of core dump

* fix palm bug

* revert veco

* revert mistaken code

* add a test print

* pre-commit check

* test exception

* add test code

* for test

* fix bug and test

* remove test code

* remove useless file

* 1. fix some bugs 2. add backbone ut

* Merge branch 'master' into feat/finetune_refactor_730

# Conflicts:
#	modelscope/metainfo.py
#	modelscope/metrics/sequence_classification_metric.py
#	modelscope/models/nlp/__init__.py
#	modelscope/models/nlp/task_models/task_model.py
#	modelscope/preprocessors/__init__.py
#	modelscope/preprocessors/nlp.py
#	modelscope/trainers/trainer.py
#	modelscope/trainers/utils/inference.py
#	modelscope/utils/file_utils.py
#	tests/trainers/test_trainer_with_nlp.py

* pre-commit passed

* revert files

* increase test level

* unregister models

* fix bugs

* fix cr comments

* fix bug in backbone-head

* add sbert backbone

* fix bug

* add test for token-cls-metric

* pre-commit passed

* fix ut comments

* revert normal tokenizer to fast tokenizer

* Merge branch 'master' into feat/finetune_refactor_730

# Conflicts:
#	modelscope/models/nlp/__init__.py
#	modelscope/models/nlp/backbones/__init__.py
#	modelscope/models/nlp/backbones/structbert/__init__.py
#	modelscope/models/nlp/masked_language.py
#	modelscope/models/nlp/palm_v2/palm_for_text_generation.py
#	modelscope/models/nlp/sbert_for_sequence_classification.py
#	modelscope/models/nlp/sbert_for_token_classification.py
#	modelscope/models/nlp/sbert_for_zero_shot_classification.py
#	modelscope/pipelines/nlp/text_generation_pipeline.py
#	modelscope/preprocessors/nlp.py
#	modelscope/trainers/trainer.py
#	modelscope/trainers/utils/inference.py

* fix merge bugs

* pre-commit passed

* fix bug

* fix bug

* fix bug

* fix bug from master

* add print

* fix ut bug

* fix bug

* Merge branch 'master' into feat/finetune_refactor_730

* skip task model test
2022-08-03 18:38:41 +08:00

135 lines · 4.4 KiB · Python

# Copyright (c) Alibaba, Inc. and its affiliates.
from modelscope.trainers.lrscheduler.builder import build_lr_scheduler
from modelscope.utils.constant import LogKeys
from modelscope.utils.logger import get_logger
from modelscope.utils.torch_utils import is_master
from .builder import HOOKS
from .hook import Hook
from .priority import Priority


@HOOKS.register_module()
class LrSchedulerHook(Hook):
    """Lr scheduler hook which steps the lr scheduler during training.

    Args:
        by_epoch (bool): Whether the lr scheduler steps by epoch; if False,
            it steps by iteration.
        warmup (dict): Warmup config. Must contain a 'type' key; it is built
            into a warmup scheduler wrapping the trainer's base scheduler.
    """
    PRIORITY = Priority.VERY_HIGH

    def __init__(self, by_epoch=True, warmup=None) -> None:
        super().__init__()
        self.by_epoch = by_epoch
        self.warmup = warmup
        self.warmup_lr_scheduler = None

    def before_run(self, trainer):
        if self.warmup is not None:
            assert isinstance(self.warmup, dict) and 'type' in self.warmup
            self.warmup_lr_scheduler = build_lr_scheduler(
                cfg=self.warmup,
                default_args={'base_scheduler': trainer.lr_scheduler})

    def get_current_lr(self, trainer):
        # local import: torch is only needed when querying the optimizer
        import torch

        if isinstance(trainer.optimizer, torch.optim.Optimizer):
            lr = [group['lr'] for group in trainer.optimizer.param_groups]
        elif isinstance(trainer.optimizer, dict):
            lr = dict()
            for name, optim in trainer.optimizer.items():
                lr[name] = [group['lr'] for group in optim.param_groups]
        else:
            raise RuntimeError(
                'lr is not applicable because optimizer does not exist.')
        return lr

    def before_train_iter(self, trainer):
        if not self.by_epoch:
            if self.warmup_lr_scheduler is not None:
                self.warmup_lr_scheduler.step()
            else:
                trainer.lr_scheduler.step()
        trainer.log_buffer.output[LogKeys.LR] = self._get_log_lr(trainer)

    def before_train_epoch(self, trainer):
        trainer.log_buffer.output[LogKeys.LR] = self._get_log_lr(trainer)

    def after_train_epoch(self, trainer):
        if self.by_epoch:
            if self.warmup_lr_scheduler is not None:
                self.warmup_lr_scheduler.step()
            else:
                trainer.lr_scheduler.step()

    def _get_log_lr(self, trainer):
        cur_lr = self.get_current_lr(trainer)
        # only record the lr of the first param group
        if isinstance(cur_lr, list):
            lr = cur_lr[0]
        else:
            assert isinstance(cur_lr, dict)
            lr = {}
            for k, lr_ in cur_lr.items():
                assert isinstance(lr_, list)
                lr[k] = lr_[0]
        return lr
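
A minimal usage sketch, constructing the hook directly instead of through a trainer config. Only the 'type' key is required by the assertion in before_run; the 'LinearWarmup' name and 'warmup_iters' argument below are assumed for illustration and are not defined in this file:

    from modelscope.trainers.hooks.lr_scheduler_hook import LrSchedulerHook

    # Step the scheduler per iteration, warming the lr up first.
    # 'LinearWarmup' and 'warmup_iters' are assumed names for a warmup
    # scheduler known to build_lr_scheduler, for illustration only.
    hook = LrSchedulerHook(
        by_epoch=False,
        warmup={'type': 'LinearWarmup', 'warmup_iters': 500})
    # hook.before_run(trainer) would then wrap trainer.lr_scheduler in the
    # warmup scheduler via build_lr_scheduler.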


@HOOKS.register_module()
class PlateauLrSchedulerHook(LrSchedulerHook):
    """Lr scheduler hook for `ReduceLROnPlateau`.

    Args:
        metric_key (str): Metric key returned from `trainer.metric_values`;
            the value under this key is passed to `ReduceLROnPlateau.step`.
        by_epoch (bool): Whether the lr scheduler steps by epoch.
        warmup (dict): Warmup config.
    """
    PRIORITY = Priority.LOW  # should run after EvaluationHook

    def __init__(self, metric_key, by_epoch=True, warmup=None) -> None:
        super().__init__(by_epoch=by_epoch, warmup=warmup)
        self.metric_key = metric_key

    def before_run(self, trainer):
        super().before_run(trainer)
        if not hasattr(trainer, 'logger'):
            self.logger = get_logger(__name__)
        else:
            self.logger = trainer.logger

    def after_train_epoch(self, trainer):
        # The evaluation interval may be greater than 1, in which case the
        # current epoch has no metric values and the step is skipped.
        if trainer.metric_values is None:
            if is_master():
                self.logger.warning(
                    f'Current epoch {trainer.epoch} has no evaluation metric '
                    'values, skipping lr_scheduler.step()!')
            return
        metrics = trainer.metric_values[self.metric_key]
        if self.by_epoch:
            if self.warmup_lr_scheduler is not None:
                self.warmup_lr_scheduler.step(metrics=metrics)
            else:
                trainer.lr_scheduler.step(metrics=metrics)
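
This hook presumes the trainer's scheduler accepts a `metrics` argument, as `torch.optim.lr_scheduler.ReduceLROnPlateau` does, and `metric_key` selects the entry of `trainer.metric_values` to pass. A sketch of the expected setup; the 'accuracy' key is an assumed example:

    import torch

    from modelscope.trainers.hooks.lr_scheduler_hook import \
        PlateauLrSchedulerHook

    # A plateau scheduler whose step() consumes the monitored metric.
    params = [torch.zeros(1, requires_grad=True)]
    optimizer = torch.optim.SGD(params, lr=0.1)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode='max')

    hook = PlateauLrSchedulerHook(metric_key='accuracy')
    # After an epoch that produced evaluation results, the hook in effect
    # runs: scheduler.step(metrics=trainer.metric_values['accuracy'])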


@HOOKS.register_module()
class NoneLrSchedulerHook(LrSchedulerHook):
    """A no-op variant: `before_run` builds no warmup scheduler and
    `after_train_epoch` performs no step, so with the default
    ``by_epoch=True`` this hook never steps the lr scheduler."""

    PRIORITY = Priority.LOW  # should run after EvaluationHook

    def __init__(self, by_epoch=True, warmup=None) -> None:
        super().__init__(by_epoch=by_epoch, warmup=warmup)

    def before_run(self, trainer):
        return

    def after_train_epoch(self, trainer):
        return
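
NoneLrSchedulerHook overrides only `before_run` and `after_train_epoch`, so the hook pipeline (including the lr logging inherited from `LrSchedulerHook`) stays intact while the scheduler itself is left untouched; presumably this is the escape hatch for setups where the learning rate is managed outside the trainer.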