[TOC]

## Question Answering

### The roberta-base-chinese-extractive-qa model

```
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model = AutoModelForQuestionAnswering.from_pretrained('uer/roberta-base-chinese-extractive-qa')
tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-chinese-extractive-qa')
QA = pipeline('question-answering', model=model, tokenizer=tokenizer)

QA_input = {
    'question': "著名诗歌《假如生活欺骗了你》的作者是",
    'context': "普希金从那里学习人民的语言,吸取了许多有益的养料,这一切对普希金后来的创作产生了很大的影响。这两年里,普希金创作了不少优秀的作品,如《囚徒》、《致大海》、《致凯恩》和《假如生活欺骗了你》等几十首抒情诗,叙事诗《努林伯爵》,历史剧《鲍里斯·戈都诺夫》,以及《叶甫盖尼·奥涅金》前六章。"
}
# The QA pipeline uses max_answer_len (not max_length) to cap the answer span
qa = QA(QA_input, max_answer_len=100)
print(qa)
# {'score': 0.9766427278518677, 'start': 0, 'end': 3, 'answer': '普希金'}
```

### The luhua/chinese_pretrain_mrc_roberta_wwm_ext_large model

```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

model_name = "chinese_pretrain_mrc_roberta_wwm_ext_large"  # or "chinese_pretrain_mrc_macbert_large"

# Use in Transformers
tokenizer = AutoTokenizer.from_pretrained(f"luhua/{model_name}")
model = AutoModelForQuestionAnswering.from_pretrained(f"luhua/{model_name}")
QA = pipeline('question-answering', model=model, tokenizer=tokenizer)

QA_input = {
    'question': "钱钟书是谁",
    'context': "普希金从那里学习人民的语言,吸取了许多有益的养料,这一切对普希金后来的创作产生了很大的影响。这两年里,普希金创作了不少优秀的作品,如《囚徒》、《致大海》、《致凯恩》和《假如生活欺骗了你》等几十首抒情诗,叙事诗《努林伯爵》,历史剧《鲍里斯·戈都诺夫》,以及《叶甫盖尼·奥涅金》前六章。"
}
qa = QA(QA_input, max_answer_len=100)
print(qa)
# {'score': 0.0037305462174117565, 'start': 66, 'end': 76, 'answer': '《囚徒》、《致大海》'}
# The near-zero score signals that the context does not actually answer the question.
```
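Both models above are extractive: they do not generate text, but score every context token as a possible answer start and end, and the pipeline returns the span with the best combined score (which is why the second example returns a nonsense span with a near-zero score). A minimal, self-contained sketch of that span-selection step, using made-up toy logits rather than real model outputs:

```
import math

def best_span(start_logits, end_logits, max_answer_len=10):
    """Pick the (start, end) token pair maximizing start_logit + end_logit,
    subject to start <= end and a maximum answer length."""
    best = (0, 0)
    best_score = -math.inf
    for s, s_logit in enumerate(start_logits):
        # Only consider ends within max_answer_len tokens of the start
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best, best_score

# Toy logits for a 6-token context: the "model" is confident
# the answer spans tokens 2..3.
start_logits = [0.1, 0.2, 5.0, 0.3, 0.1, 0.0]
end_logits   = [0.0, 0.1, 0.4, 4.8, 0.2, 0.1]

span, score = best_span(start_logits, end_logits)
print(span)  # (2, 3)
```

When every logit is small and flat, the winning score is low, which is exactly the situation the `score` field in the pipeline output reports; thresholding on it is a common way to reject unanswerable questions.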