Preface
With the development of multimodal large models, they are no longer limited to text: they can also recognize and understand images, video, and audio. In medicine, doctors routinely work with many kinds of medical images to support diagnosis and treatment. Combining a multimodal large model with image-based diagnosis could greatly improve diagnostic efficiency.
Project goal
Train a medical multimodal large model for image-based diagnosis.
As it happens, my father has had mild headaches recently; a brain CT at the hospital led to a diagnosis of a pituitary tumor, so I will try to use a multimodal large model to take a further look at the scan.
Implementation
1. Dataset preparation
Training the model requires a large amount of medical image data. Searching around, we found the following training data:
Dataset name: MedTrinity-25M
Dataset address: https://github.com/UCSC-VLAA/MedTrinity-25M
Overview: MedTrinity-25M is a large-scale dataset for medical image analysis and computer vision research.
Source: the dataset is provided by the University of California, Santa Cruz (UCSC) to promote research in medical image processing and analysis.
Size: MedTrinity-25M contains roughly 25 million medical image records, covering multiple imaging modalities such as CT, MRI, and ultrasound.
Contents:
The dataset is released in two configurations, 25M_demo and 25M_full. The 25M_full configuration contains roughly 24,800,000 records.
2. Downloading the data
2.1 Install Hugging Face's Datasets library
pip install datasets
2.2 Download the dataset
from datasets import load_dataset
# Load the dataset
ds = load_dataset("UCSC-VLAA/MedTrinity-25M", "25M_demo", cache_dir="cache")
Notes:
- The code above downloads the dataset through Hugging Face's Datasets library; the files are cached in the cache folder under the directory the script is run from.
- Downloading from Hugging Face requires access to https://huggingface.co/ and, because the dataset is gated, you must first request read access to it on the site (an authentication sketch follows below).
- If you cannot access Hugging Face, follow the author's WeChat official account and reply "MedTrinity" to get a Baidu Netdisk download link.
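As a minimal sketch of the authentication step, assuming your account has been granted read access to the gated dataset: either run huggingface-cli login once on the command line, or pass an access token straight to load_dataset (the token value below is a placeholder; older datasets releases call this argument use_auth_token):

from datasets import load_dataset

# Pass a Hugging Face access token explicitly (replace the placeholder with your own)
ds = load_dataset(
    "UCSC-VLAA/MedTrinity-25M",
    "25M_demo",
    cache_dir="cache",
    token="hf_xxx",
)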
2.3 Preview the dataset
# Print the first sample of the training split
print(ds['train'][:1])
Output:
{
'image': [<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512 at 0x15DD6D06530>],
'id': ['8031efe0-1b5c-11ef-8929-000066532cad'],
'caption': ['The image is a non-contrasted computed tomography (CT) scan of the brain, showing the cerebral structures without any medical devices present. The region of interest, located centrally and in the middle of the image, exhibits an area of altered density, which is indicative of a brain hemorrhage. This area is distinct from the surrounding brain tissue, suggesting a possible hematoma or bleeding within the brain parenchyma. The location and characteristics of this abnormality may suggest a relationship with the surrounding brain tissue, potentially causing a mass effect or contributing to increased intracranial pressure.'
]
}
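Each record therefore carries three fields: image, id, and caption. If you want to confirm the schema and the number of records before converting anything, the Datasets API exposes both directly (a quick, optional check):

# Inspect the schema and size of the training split
print(ds['train'].features)
print(ds['train'].num_rows)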
Use the following code to visualize the image field of a sample:
# Visualize the image field
from PIL import Image
import matplotlib.pyplot as plt
image = ds['train'][0]['image']  # get the image of the first sample
plt.imshow(image)
plt.axis('off')  # hide the axes
plt.show()
3. Data preprocessing
Since we will fine-tune the multimodal model with LLaMA Factory later, the dataset above has to be converted into the format LLaMA Factory expects.
3.1 LLaMA Factory data format
LLaMA Factory's multimodal data format requirement looks like this:
[
  {
    "messages": [
      {
        "content": "<image>Who are they?",
        "role": "user"
      },
      {
        "content": "They're Kane and Goretzka from Bayern Munich.",
        "role": "assistant"
      },
      {
        "content": "What are they doing?",
        "role": "user"
      },
      {
        "content": "They are celebrating on the soccer field.",
        "role": "assistant"
      }
    ],
    "images": [
      "mllm_demo_data/1.jpg"
    ]
  }
]
3.2 The format conversion script
from datasets import load_dataset
import os
import json
from PIL import Image


def save_images_and_json(ds, output_dir="mllm_data"):
    """
    Save the images and the matching JSON records of the dataset to a directory.

    Args:
        ds: the dataset object containing images and captions.
        output_dir: output directory, defaults to "mllm_data".
    """
    # Create the output directory
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    # Collect all message/image records here
    all_data = []

    # Iterate over every item in the dataset
    for item in ds:
        img_path = f"{output_dir}/{item['id']}.jpg"  # where the image is saved
        image = item["image"]  # a PIL image object

        # Save the image object to a file
        image.save(img_path)  # PIL's save method

        # Append the messages and the image path for this record
        all_data.append(
            {
                "messages": [
                    {
                        # User prompt, kept in Chinese as used for training:
                        # "What diagnosis does the image show?"
                        "content": "<image>图片中的诊断结果是怎样?",
                        "role": "user",
                    },
                    {
                        "content": item["caption"],  # caption taken from the dataset
                        "role": "assistant",
                    },
                ],
                "images": [img_path],  # image file path
            }
        )

    # Write the JSON file
    json_file_path = f"{output_dir}/mllm_data.json"
    with open(json_file_path, "w", encoding="utf-8") as f:
        json.dump(all_data, f, ensure_ascii=False)  # keep non-ASCII characters readable


if __name__ == "__main__":
    # Load the dataset
    ds = load_dataset("UCSC-VLAA/MedTrinity-25M", "25M_demo", cache_dir="cache")
    # Save the images and JSON records of the training split
    save_images_and_json(ds["train"])
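Before LLaMA Factory can see the converted file, it has to be registered in LLaMA Factory's data/dataset_info.json. The entry below is a sketch following the sharegpt multimodal format; the key medtrinity_demo is my own naming, and file_name assumes mllm_data.json has been copied into LLaMA Factory's data directory:

"medtrinity_demo": {
  "file_name": "mllm_data.json",
  "formatting": "sharegpt",
  "columns": {
    "messages": "messages",
    "images": "images"
  },
  "tags": {
    "role_tag": "role",
    "content_tag": "content",
    "user_tag": "user",
    "assistant_tag": "assistant"
  }
}

Once registered, the dataset can be selected under this name when configuring the SFT run in the LLaMA Factory web UI.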
4. Downloading the model
For this fine-tuning run we use Qwen2-VL-2B-Instruct, the multimodal model recently released by Alibaba, as the base model.
Model page: https://modelscope.cn/models/Qwen/Qwen2-VL-2B-Instruct
Download the model with the following commands:
git lfs install
# Download the model
git clone https://www.modelscope.cn/Qwen/Qwen2-VL-2B-Instruct.git
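If git lfs is inconvenient, the same weights can also be fetched with the modelscope Python package; this is a minimal sketch, assuming a recent modelscope release is installed (pip install modelscope) and that the files should land under ./models:

from modelscope import snapshot_download

# Download Qwen2-VL-2B-Instruct from ModelScope into ./models
model_dir = snapshot_download("Qwen/Qwen2-VL-2B-Instruct", cache_dir="./models")
print(model_dir)  # local path to point LLaMA Factory at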
5. Environment setup
5.1 Machine environment
Hardware:
- GPU: RTX 4080 Super
- VRAM: 16GB
Software:
- OS: Ubuntu 20.04 LTS
- Python: 3.10
- PyTorch: 2.1.2 + CUDA 12.1
5.2 Set up a virtual environment
# Create a Python 3.10 virtual environment
conda create --name train_env python=3.10
# Activate the environment
conda activate train_env
# Install dependencies
pip install streamlit torch torchvision
# Install the transformers version recommended for Qwen2-VL
pip install git+https://github.com/huggingface/transformers
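The fine-tuning itself is driven by LLaMA Factory, which the commands above do not install. A minimal sketch of installing it from source and launching the web UI used for the SFT run, following the project's README (the optional extras in brackets may differ between versions):

# Install LLaMA Factory from source
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"

# Launch the web UI for fine-tuning
llamafactory-cli webui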
I followed the guide exactly, but the chat output of the fine-tuned, exported model is wrong: The image is the brain MRI scan, and I can see a, for you. The region of interest is located in the other? I, the I, , and the region of interest is located in the other? I, the . The region of interest is in the with a. The region’s in the right. The left, I, and the right, I, are in the left-center of the right. The region of interest’s region is in the lower part of the right. The unusuals are in the brain, and the smell is on. The brain’s on the left, and the right, I, is in the lower part of the right, and the brain’s on the left. The region of interest’s in the lower part of the right, and the brain’s in the lower part of the right. The region of interest’s is in the lower part of the right, and the brain’s is in the lower part of the right. The region of interest’s is in the lower part of the right, and the brain’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower part of the right. The region of interest’s is in the lower
Is this a problem with the choice of base model?
The model I used here is Qwen2-VL-2B-Instruct, which does have chat ability; if you choose Qwen2-VL-2B instead, its chat ability will be insufficient.
The model I chose is Qwen2-VL-7B-Instruct.
I also used Qwen2-VL-7B-Instruct, with my own dataset, but after fine-tuning the model actually became very rigid: it repeats itself and its outputs are very short.
That sounds like overfitting. In that case you should mix some general-knowledge data into the SFT training set to keep the model from overfitting.
For example, you can add the Chinese medical dialogue dataset https://modelscope.cn/datasets/xiaofengalg/Chinese-medical-dialogue (see the sketch after this reply for how multiple datasets are combined).
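As a rough sketch of combining several datasets in LLaMA Factory, assuming both are already registered in data/dataset_info.json under the names used here (medtrinity_demo and chinese_medical_dialogue are my own naming), the dataset field of the training config, or the Dataset box in the web UI, simply lists them comma-separated:

dataset: medtrinity_demo,chinese_medical_dialogue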
I followed the stage-2 supervised fine-tuning steps from "day24 (part 1): the three-stage training method for large models (LLaMA Factory)", but when training starts it reports an error saying there is no image input and then hangs. How do I fix this? The model is Qwen2-VL-7B-Instruct and the dataset is Chinese-medical-dialogue/data/train_0001_of_0001.json.
Preprocess the data into LLaMA Factory's data format first.
Hi, I'd like to ask: why, after I finished training the model as described above, does it still answer in English when I ask it to reply in Chinese?
Because the SFT training data is almost entirely English; this is mentioned in the article under "Limitations". Ideally you would add Chinese-English bilingual data to the training corpus.
You mean, for example, translating the data into Chinese and then training on it?
Yes; the Q&A pairs should be question in Chinese, answer in Chinese.
export GRADIO_SERVER_PORT=7860 GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
This port-mapping approach only works on ModelScope; the AutoDL platform handles port mapping differently. AutoDL's website has detailed instructions and videos, please check their documentation.
You can refer to this recently published article:
https://17aitech.com/?p=36251
Thanks.