[TOC]

## Models

**Loading a model**

**In most cases you should use `AutoModel` to load a model:**

```
# bad
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-cased")

# good
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")
```

**Saving a model**

```
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")
model.save_pretrained("./models/bert-base-cased/")
```

This creates two files under the save path:

* *config.json*: the model configuration file, which stores the model's structural parameters, e.g. the number of Transformer layers, the hidden dimension, etc.;
* *pytorch_model.bin*: also known as the state dictionary, which stores the model's weights.

## Tokenizers

1. A tokenizer splits text into tokens by word, subword, or character;
2. Every token is then mapped to its token ID.

### Tokenization strategies

**Word-based**

![](https://img.kancloud.cn/15/3d/153dfe018416cee920de6e99a44e4c4d_2663x546.png)

```
tokenized_text = "Jim Henson was a puppeteer".split()
print(tokenized_text)
```

**Character-based**

![](https://img.kancloud.cn/d0/52/d0526c466614d1ee21cc0f7346b05b62_2631x177.png)

**Subword-based**

![](https://img.kancloud.cn/6e/c6/6ec6c6462e97346617979df0aaf9c68c_2626x158.png)

### Loading and saving a tokenizer

Here, too, use the `Auto` class, i.e. `AutoTokenizer`:

```
# bad
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
tokenizer.save_pretrained("./models/bert-base-cased/")

# good
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenizer.save_pretrained("./models/bert-base-cased/")
```

Calling `Tokenizer.save_pretrained()` creates three files under the save path:

* *special\_tokens\_map.json*: a mapping file containing special tokens such as the unknown token;
* *tokenizer\_config.json*: the tokenizer configuration file, which stores the parameters needed to build the tokenizer;
* *vocab.txt*: the vocabulary, one token per line; the line number is the token ID (starting from 0).

### Encoding and decoding text

Example: using the BERT tokenizer

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

sequence = "Using a Transformer network is simple"
tokens = tokenizer.tokenize(sequence)
print(tokens)
# ['Using', 'a', 'Trans', '##former', 'network', 'is', 'simple']

ids = tokenizer.convert_tokens_to_ids(tokens)
print(ids)
# [7993, 170, 13809, 23763, 2443, 1110, 3014]
```

> As you can see, the BERT tokenizer uses a subword strategy.
> `convert_tokens_to_ids()` converts the tokens into the corresponding token IDs.

The two steps can also be merged with `encode()`, which additionally inserts the special tokens the model expects; for example, the BERT tokenizer adds `[CLS]` and `[SEP]` at the beginning and end of the sequence:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

sequence = "Using a Transformer network is simple"
sequence_ids = tokenizer.encode(sequence)
print(sequence_ids)
# [101, 7993, 170, 13809, 23763, 2443, 1110, 3014, 102]
```

Here 101 and 102 are the token IDs of `[CLS]` and `[SEP]` respectively.

**In practice, the most common approach is to call the tokenizer directly on the text**, which returns not only the token IDs but also the other inputs the model needs. For example, the BERT tokenizer additionally returns `token_type_ids` and `attention_mask`:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

tokenized_text = tokenizer("Using a Transformer network is simple")
print(tokenized_text)
# {'input_ids': [101, 7993, 170, 13809, 23763, 2443, 1110, 3014, 102],
#  'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0],
#  'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

**Decoding**

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

decoded_string = tokenizer.decode([7993, 170, 11303, 1200, 2443, 1110, 3014])
print(decoded_string)
# Using a transformer network is simple

decoded_string = tokenizer.decode([101, 7993, 170, 13809, 23763, 2443, 1110, 3014, 102])
print(decoded_string)
# [CLS] Using a Transformer network is simple [SEP]
```

### Handling multiple sequences

Models expect a batch of inputs, so even a single list of token IDs must be wrapped in an extra batch dimension before being passed to the model:

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

sequence = "I've been waiting for a HuggingFace course my whole life."

tokens = tokenizer.tokenize(sequence)
ids = tokenizer.convert_tokens_to_ids(tokens)
# input_ids = torch.tensor(ids)  # this would fail: the model expects a batch dimension
input_ids = torch.tensor([ids])
print("Input IDs:\n", input_ids)

output = model(input_ids)
print("Logits:\n", output.logits)
```

Output:

```
Input IDs:
 tensor([[ 1045,  1005,  2310,  2042,  3403,  2005,  1037, 17662, 12172,  2607,
           2026,  2878,  2166,  1012]])
Logits:
 tensor([[-2.7276,  2.8789]], grad_fn=<AddmmBackward0>)
```
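When several sequences of different lengths are encoded together, they must be padded to a common length so they fit into one tensor; calling the tokenizer directly with `padding=True` does this and produces the matching `attention_mask`. Below is a minimal sketch continuing the example above (the second, shorter sentence is made up for illustration):

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

sequences = [
    "I've been waiting for a HuggingFace course my whole life.",
    "So have I!",  # shorter sentence, will be padded
]

# padding=True pads every sequence to the length of the longest one;
# the attention_mask marks padding positions with 0 so the model ignores them
batch = tokenizer(sequences, padding=True, return_tensors="pt")
print(batch["attention_mask"])

with torch.no_grad():
    output = model(**batch)
print(output.logits)  # one row of logits per input sequence
```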
"distilbert-base-uncased-finetuned-sst-2-english" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSequenceClassification.from_pretrained(checkpoint) sequence = "I've been waiting for a HuggingFace course my whole life." tokens = tokenizer.tokenize(sequence) ids = tokenizer.convert_tokens_to_ids(tokens) # input_ids = torch.tensor(ids), This line will fail. input_ids = torch.tensor([ids]) print("Input IDs:\n", input_ids) output = model(input_ids) print("Logits:\n", output.logits) ``` 输出 ``` Input IDs: tensor([[ 1045, 1005, 2310, 2042, 3403, 2005, 1037, 17662, 12172, 2607, 2026, 2878, 2166, 1012]]) Logits: tensor([[-2.7276, 2.8789]], grad_fn=<AddmmBackward0>) ``` ### 编码句子对 除了对单段文本进行编码以外(batch 只是并行地编码多个单段文本),对于 BERT 等包含“句子对”预训练任务的模型,它们的分词器都支持对“句子对”进行编码,例 ``` from transformers import AutoTokenizer checkpoint = "bert-base-uncased" tokenizer = AutoTokenizer.from_pretrained(checkpoint) inputs = tokenizer("This is the first sentence.", "This is the second one.") print(inputs) tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"]) print(tokens) ``` ### 添加 Token 在实际当中经常会遇到输入中需要包含特殊标记符的情况,如`[ENT_START]` 和 `[ENT_END]` 由于这些自定义 token 并不在预训练模型原来的词表中,因此直接运用分词器处理就会出现问题。 此外,一些领域的专业词汇,例如使用多个词语的缩写拼接而成的医学术语,同样也不在模型的词表中,因此也会出现上面的问题。此时我们就需要将这些新 token 添加到模型的词表中,让分词器与模型可以识别并处理这些 token **添加新 token** * Transformers 库提供了两种方式来添加新 token,分别是: 1. add_tokens() ``` checkpoint = "bert-base-uncased" tokenizer = AutoTokenizer.from_pretrained(checkpoint) num_added_toks = tokenizer.add_tokens(["new_token1", "my_new-token2"]) print("We have added", num_added_toks, "tokens") // We have added 2 tokens ``` 过滤重复 ``` new_tokens = ["new_token1", "my_new-token2"] new_tokens = set(new_tokens) - set(tokenizer.vocab.keys()) tokenizer.add_tokens(list(new_tokens)) ``` 2. 
### Adding tokens

In practice, the input often needs to contain special marker tokens such as `[ENT_START]` and `[ENT_END]`. Since these custom tokens are not in the pretrained model's vocabulary, feeding them straight to the tokenizer causes problems.

Likewise, domain-specific vocabulary, e.g. medical terms formed by concatenating abbreviations of several words, is also missing from the vocabulary and leads to the same problem. In such cases we need to add these new tokens to the vocabulary so that the tokenizer and the model can recognize and process them.

**Adding new tokens**

The Transformers library provides two ways to add new tokens:

1. `add_tokens()`

```
from transformers import AutoTokenizer

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

num_added_toks = tokenizer.add_tokens(["new_token1", "my_new-token2"])
print("We have added", num_added_toks, "tokens")
# We have added 2 tokens
```

To filter out tokens that are already in the vocabulary:

```
new_tokens = ["new_token1", "my_new-token2"]
new_tokens = set(new_tokens) - set(tokenizer.vocab.keys())
tokenizer.add_tokens(list(new_tokens))
```

2. `add_special_tokens()`

Adds special tokens. The argument is a dictionary of special tokens whose keys must be chosen from `bos_token`, `eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`, and `additional_special_tokens`:

```
from transformers import AutoTokenizer

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

special_tokens_dict = {"cls_token": "[MY_CLS]"}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print("We have added", num_added_toks, "tokens")
# We have added 1 tokens

assert tokenizer.cls_token == "[MY_CLS]"
```

Special tokens can also be added with `add_tokens()` by passing `special_tokens=True`:

```
from transformers import AutoTokenizer

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

num_added_toks = tokenizer.add_tokens(["[NEW_tok1]", "[NEW_tok2]"])
num_added_toks = tokenizer.add_tokens(["[NEW_tok3]", "[NEW_tok4]"], special_tokens=True)

print("We have added", num_added_toks, "tokens")
print(tokenizer.tokenize('[NEW_tok1] Hello [NEW_tok2] [NEW_tok3] World [NEW_tok4]!'))
# We have added 2 tokens
# ['[new_tok1]', 'hello', '[new_tok2]', '[NEW_tok3]', 'world', '[NEW_tok4]', '!']
```

Note in the output that tokens added as special tokens keep their casing under the uncased tokenizer, while ordinary added tokens are lowercased.

## Resizing the embedding matrix

>[warning] After adding new tokens to the vocabulary, you must resize the model's embedding matrix, i.e. append embeddings for the new tokens; only then can the model map the new tokens to embeddings and work correctly.

The embedding matrix is resized with `resize_token_embeddings()`:

```
from transformers import AutoTokenizer, AutoModel

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

print('vocabulary size:', len(tokenizer))
num_added_toks = tokenizer.add_tokens(['[ENT_START]', '[ENT_END]'], special_tokens=True)
print("After we add", num_added_toks, "tokens")
print('vocabulary size:', len(tokenizer))

model.resize_token_embeddings(len(tokenizer))
print(model.embeddings.word_embeddings.weight.size())

# the rows appended for the new tokens are randomly initialized
print(model.embeddings.word_embeddings.weight[-2:, :])
```
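The appended rows are randomly initialized. A common heuristic (not covered in the text above, shown here only as a hedged sketch) is to overwrite them with something more informative, for example the mean of the pretrained embeddings, which tends to be a less disruptive starting point for fine-tuning:

```
import torch
from transformers import AutoTokenizer, AutoModel

checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

num_added_toks = tokenizer.add_tokens(['[ENT_START]', '[ENT_END]'], special_tokens=True)
model.resize_token_embeddings(len(tokenizer))

with torch.no_grad():
    embeddings = model.embeddings.word_embeddings.weight
    # replace the random initialization of the new rows with the
    # mean of all pretrained embeddings (one illustrative strategy)
    mean_embedding = embeddings[:-num_added_toks].mean(dim=0)
    embeddings[-num_added_toks:] = mean_embedding

print(model.embeddings.word_embeddings.weight[-2:, :])
```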