The original backbone-head abstraction was not articulated deeply enough: the input and output parameters of the backbone and head were passed as **kwargs, which was implicit and potentially confusing, and many features could not be extended. Therefore, the following adjustments were made:
1. Divide the base models by structure into: encoder-only models, decoder-only models, single-stage models, two-stage models, etc. The encoder-only model is now complete, while the others are under design.
2. Derive the structured task-models from the base model structures above: a structured task-model is mainly used to parse the backbone/head cfg in order to apply the correct backbone or head components; some models may also adjust the forward method inherited from the base model (a configuration sketch follows this list).
3. Add explicit initialization, input, and output parameters to the head and backbone classes, to reduce the cost of understanding them.
4. Remove the original nncrf class and change it to the backbone-head form, with an LSTM backbone and a CRF head.
5. Support `model = Model.from_pretrained('bert-based-fill-mask', task='text-classification')`: this method correctly loads the backbone even when the requested task differs from the one in the original configuration (see the loading sketch after this list).
6. Support loading models through transformers' AutoModel, so a backbone model can be integrated quickly without extra coding.
7. Unify the original task classes in each NLP model with the structured task-model classes; the structured task-models greatly reduce the redundant code in the original task classes. Still under refactoring.
8. Support loading the model configuration from a Hugging Face transformers config.json when the model's own configuration is missing. Only supports NLP models.
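To make the backbone-head split concrete, here is a minimal sketch of the kind of configuration a structured task-model parses. Every key and type name below is a hypothetical illustration, not the exact schema:

```python
# Hypothetical configuration fragment; real keys and type names in
# configuration.json may differ.
cfg = {
    'model': {
        'type': 'token-classification',  # a structured task-model
        'backbone': {
            'type': 'lstm',        # e.g. the LSTM backbone from item 4
            'hidden_size': 256,
        },
        'head': {
            'type': 'crf',         # e.g. the CRF head from item 4
            'num_labels': 9,
        },
    }
}
# The structured task-model reads cfg['model']['backbone'] and
# cfg['model']['head'] and instantiates the matching registered components.
```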
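And the loading API from item 5 in use; a minimal sketch assuming the usual `modelscope.models` import path:

```python
from modelscope.models import Model

# Load a fill-mask checkpoint, but wire it up for text classification.
# The backbone weights load correctly even though the requested task
# differs from the task recorded in the checkpoint's configuration.
model = Model.from_pretrained('bert-based-fill-mask',
                              task='text-classification')
```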
1. Support exporting csanmt to the SavedModel format.
2. Create a new base class for text-ranking preprocessors, and move some parameters of mgeo_ranking_preprocessor to the `__init__` method.
3. Decouple the Model and Preprocessor classes from PyTorch.
4. Regression tests now support comparing only the model output.
5. Support exporting zero-shot models to ONNX and TorchScript (see the sketch below).
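A sketch of how the new export paths might be driven. The `Exporter.from_model` factory and the method names follow ModelScope's exporter interface as we understand it; the model id is a placeholder, and the exact signatures should be checked against the installed version:

```python
from modelscope.exporters import Exporter
from modelscope.models import Model

model = Model.from_pretrained('some-zero-shot-model-id')  # placeholder id
exporter = Exporter.from_model(model)

# Export the same model to both deployment formats.
exporter.export_onnx(output_dir='/tmp/zero_shot_onnx')
exporter.export_torch_script(output_dir='/tmp/zero_shot_ts')
```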
Link: https://code.alibaba-inc.com/Ali-MaaS/MaaS-lib/codereview/11522461
1. Exporting: support text-classification for BERT and TensorFlow 2.0 models; test cases have been added.
2. Preprocessor.from_pretrained now skips downloading large files that it does not need, filtered by file extension (see the sketch after this list).
3. Move sentence-piece-preprocessor to be a subclass of text-generation-preprocessor, keeping the original name for compatibility.
4. Remove some unused code in nlp-trainer and trainer.
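Item 2's download filtering, sketched; the model id is a placeholder:

```python
from modelscope.preprocessors import Preprocessor

# Only the configuration/tokenizer files the preprocessor needs are
# fetched; large weight files are skipped based on their extensions.
preprocessor = Preprocessor.from_pretrained('some-model-id')  # placeholder
```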
Link: https://code.alibaba-inc.com/Ali-MaaS/MaaS-lib/codereview/11206922
Features:
1. Refactor the directory structure of the NLP models. All model files are now placed in either the model folder or the task_model folder.
2. Refactor all comments to Google style.
3. Add detailed comments to important tasks and NLP models, describing each model and its preprocessor & trainer.
4. Model exporting now supports calling TorchModelExporter directly (no need to derive from it); see the first sketch after this list.
5. Refactor the model save_pretrained method so it can run directly (independent of the trainer).
6. Remove the Model type check in the pipeline base class, so externally registered models can run in our pipelines.
7. The NLP trainer now has an NLPTrainingArguments class: users can pass arguments into the dataclass and use it as a normal cfg_modify_fn, simplifying config modification (see the second sketch after this list).
8. Merge the BACKBONES registry into MODELS, so users can get a backbone with a Model.from_pretrained call.
9. Model.from_pretrained now supports a task argument, so users can load a backbone with a specific task class.
10. Support Preprocessor.from_pretrained method
11. Add standard return classes to important NLP tasks, decoupling some pipelines from their models: model return values are now always tensors, and the pipelines take care of converting them to numpy and the subsequent post-processing.
12. Split the NLP preprocessor files to make the directory structure clearer.
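For feature 4, a minimal sketch of the direct call; the constructor signature and output directory are assumptions to verify against the exporter module:

```python
from modelscope.exporters import TorchModelExporter
from modelscope.models import Model

model = Model.from_pretrained('some-text-classification-id')  # placeholder

# No custom Exporter subclass required; use the base exporter directly.
TorchModelExporter(model).export_onnx(output_dir='/tmp/export')
```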
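For feature 7, a hypothetical sketch of the dataclass doubling as a cfg_modify_fn; the import path and field names are assumptions, not the exact API:

```python
from modelscope.trainers import build_trainer
from modelscope.trainers import NLPTrainingArguments  # import path assumed

# Field names are illustrative; check the dataclass for the real ones.
args = NLPTrainingArguments(max_epochs=3, learning_rate=2e-5)

# The instance is passed wherever a cfg_modify_fn is expected: it patches
# the loaded configuration with the fields set above.
trainer = build_trainer(default_args=dict(
    model='some-model-id',  # placeholder
    cfg_modify_fn=args,
))
```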
Bug Fixes:
1. Fix a bug where lr_scheduler could be called before the optimizer's step (see the ordering sketch after this list).
2. Fix a bug where calling a Pipeline directly (not via pipeline(xxx)) throws an error.
3. Fix a bug where the trainer would not call the correct TaskDataset class.
4. Fix a bug where internal dataset loading throws an error in the trainer class.
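On bug 1: PyTorch (1.1+) expects `optimizer.step()` to run before `lr_scheduler.step()` within each iteration; reversing the order skips the initial learning rate and triggers a warning. A generic sketch of the corrected ordering (not the trainer's actual code):

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

for _ in range(100):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).sum()
    loss.backward()
    optimizer.step()   # must run first ...
    scheduler.step()   # ... then the scheduler
```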
Link: https://code.alibaba-inc.com/Ali-MaaS/MaaS-lib/codereview/10490585
1. Add exporter module
2. Move collate_fn out of the base pipeline class for reuse.
3. Add a dummy-inputs method to the NLP tokenization preprocessor base class.
4. Support Mapping types when numpifying and detaching tensors (see the helper sketch below).
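Item 4's behavior, sketched as a standalone helper (not the library's actual function): recursively detach tensors and convert them to numpy, now handling Mapping containers such as dict:

```python
from collections.abc import Mapping

import torch


def numpify_and_detach(data):
    """Recursively detach tensors and convert them to numpy arrays."""
    if isinstance(data, torch.Tensor):
        return data.detach().cpu().numpy()
    if isinstance(data, Mapping):  # the newly supported case
        return {k: numpify_and_detach(v) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        return type(data)(numpify_and_detach(v) for v in data)
    return data
```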
Link: https://code.alibaba-inc.com/Ali-MaaS/MaaS-lib/codereview/10037704