[TOC]

## Using GPT-2

```
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the GPT-2 pretrained model and tokenizer
model_name = "gpt2"
model = GPT2LMHeadModel.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)

# Input text
input_text = "Transformers is a powerful tool for natural language processing. It can"

# Encode the input text into a tensor of token ids
input_ids = tokenizer.encode(input_text, return_tensors="pt", add_special_tokens=True)

# Generate text. do_sample=True is required for top_k/top_p/temperature to
# take effect, and for num_return_sequences > 1 without beam search;
# pad_token_id=eos_token_id silences the "no pad token" warning for GPT-2.
output = model.generate(
    input_ids,
    max_length=100,
    num_return_sequences=3,
    no_repeat_ngram_size=2,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode and print each generated sequence
for i, sample_output in enumerate(output):
    generated_text = tokenizer.decode(sample_output, skip_special_tokens=True)
    print(f"Generated Text {i + 1}: {generated_text}")
```
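To make the `top_k`/`top_p` arguments above less opaque, here is a minimal pure-Python sketch of the filtering idea behind them: keep only the `top_k` highest logits, then keep the smallest set of those tokens whose cumulative probability reaches `top_p`, and mask everything else before sampling. The function name `top_k_top_p_filter` and the toy logits are illustrative assumptions, not part of the transformers API (which does this internally, on tensors).

```
import math

def top_k_top_p_filter(logits, top_k=0, top_p=1.0):
    """Sketch of top-k then nucleus (top-p) filtering over a logits list.

    Surviving positions keep their logit; all others become -inf,
    so they get zero probability after a softmax.
    """
    # Pair each logit with its index and sort by logit, descending.
    indexed = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)
    if top_k > 0:
        indexed = indexed[:top_k]  # keep only the top_k highest logits
    # Softmax over the surviving logits (max-shifted for stability).
    m = max(v for _, v in indexed)
    exps = [(i, math.exp(v - m)) for i, v in indexed]
    z = sum(e for _, e in exps)
    probs = [(i, e / z) for i, e in exps]
    # Keep the smallest prefix whose cumulative probability reaches top_p.
    kept, cum = set(), 0.0
    for i, p in probs:
        kept.add(i)
        cum += p
        if cum >= top_p:
            break
    return [v if i in kept else float("-inf") for i, v in enumerate(logits)]

# Toy vocabulary of 4 tokens; top_k=3 drops the weakest token outright,
# then top_p=0.8 trims the tail of the remaining distribution.
logits = [2.0, 1.0, 0.5, -1.0]
filtered = top_k_top_p_filter(logits, top_k=3, top_p=0.8)
print(filtered)  # → [2.0, 1.0, -inf, -inf]
```

With these filtered logits, sampling can only ever pick tokens 0 or 1; `temperature` would additionally divide the logits before the softmax, flattening or sharpening this distribution.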