
Ultrasound Radiomics for Lymph Node Lesion Diagnosis and Primary Source Prediction [Code and Data Included]


张小明




(1) A hierarchical diagnostic model for cervical lymph node lesions based on multimodal ultrasound

The etiology of cervical lymph node lesions is complex and varied, spanning reactive hyperplasia, tuberculous lymphadenitis, lymphoma, and metastases from malignant tumors. Because treatment strategy and prognosis differ markedly across these etiologies, accurate preoperative diagnosis is essential for planning appropriate treatment. Conventional ultrasound diagnosis relies heavily on the examining physician's subjective experience, which limits accuracy and introduces considerable inter-observer variability.

This study proposes a deep-learning-based hierarchical diagnostic model for cervical lymph node lesions. The model mimics the clinician's diagnostic reasoning, applying a hierarchical classification strategy that progressively narrows the diagnosis until a specific etiology is determined. It takes dual-modality input: B-mode ultrasound images, which provide the node's morphological features (size, shape, margin, internal echogenicity), and color Doppler (CDFI) images, which provide blood-flow perfusion features (flow distribution pattern, degree of vascularity). The two modalities are complementary and together give a more complete imaging description of the lesion.

The hierarchical model consists of three deep convolutional neural network submodels. Submodel 1 performs the primary diagnosis, distinguishing benign from malignant lesions; submodel 2 performs the secondary diagnosis for benign lesions, distinguishing reactive hyperplasia from tuberculous lymphadenitis; submodel 3 performs the secondary diagnosis for malignant lesions, distinguishing lymphoma from metastatic disease. The three submodels share the same architecture but are trained independently; each contains a dual-branch feature extraction network that processes the two modalities separately, fuses the extracted features in fully connected layers, and outputs the classification. Experiments on the training set and multiple test sets show that each submodel achieves a high area under the ROC curve on its task; the integrated hierarchical model accurately identifies the four common etiologies, outperforms sonographers of varying seniority, and effectively raises the diagnostic performance of junior and mid-level readers.
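
Before the full implementation given later in this post, here is a minimal sketch of how the paired dual-modality input could be organized as a PyTorch Dataset. The class name PairedLymphNodeDataset, the record layout, and the 0-3 label coding (0/1 benign subtypes, 2/3 malignant subtypes) are illustrative assumptions chosen to be consistent with the training code below, not details from the original study.

# Minimal sketch: paired B-mode + CDFI dataset. Each case contributes one
# B-mode image, one color Doppler image, and an etiology label coded 0-3
# (0/1 benign subtypes, 2/3 malignant subtypes). Record layout and
# transforms are illustrative assumptions.
import torch
from torch.utils.data import Dataset
from torchvision import transforms
from PIL import Image

class PairedLymphNodeDataset(Dataset):
    def __init__(self, records, image_size=224):
        # records: list of (bmode_path, cdfi_path, label) tuples
        self.records = records
        self.transform = transforms.Compose([
            transforms.Resize((image_size, image_size)),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        bmode_path, cdfi_path, label = self.records[idx]
        # Both modalities are loaded as RGB so they match the pretrained backbones
        bmode = self.transform(Image.open(bmode_path).convert('RGB'))
        cdfi = self.transform(Image.open(cdfi_path).convert('RGB'))
        return bmode, cdfi, torch.tensor(label, dtype=torch.long)

A DataLoader over this dataset yields (bmode, cdfi, labels) batches, which is exactly the unpacking the training loop in the full code expects.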

(2) A multi-step modality-fusion model for diagnosing the pathological subtype of metastatic lymph nodes

The pathological subtypes of metastatic cervical lymph node lesions fall mainly into two categories, squamous cell carcinoma and adenocarcinoma. The two subtypes correspond to different primary tumor origins and treatment plans, so accurate preoperative subtype determination has significant clinical value in guiding subsequent workup and therapy. However, the two subtypes are hard to distinguish reliably from the visual features of ultrasound images alone, and even experienced sonographers achieve only limited accuracy.

This study builds a deep-learning-based multi-step modality-fusion model that integrates four ultrasound imaging modalities with clinical information to diagnose the subtype with high accuracy. The four modalities are B-mode ultrasound, color Doppler, ultrasound elastography, and contrast-enhanced ultrasound (CEUS). B-mode and color Doppler are static images providing morphology and blood-flow features; elastography is a static image reflecting the stiffness distribution of the node tissue; CEUS is a dynamic video showing the node's microcirculatory perfusion and enhancement pattern.

The multi-step fusion strategy first extracts and fuses features from the three static modalities, using a three-branch convolutional neural network to process each modality and concatenating the outputs at the feature level. It then extracts temporal features from the CEUS video with a 3D convolutional network or a recurrent neural network, capturing the dynamic pattern of contrast-agent perfusion. Finally it combines the static-image features, the dynamic video features, and a screened set of key clinical variables, and outputs the subtype classification probabilities. The clinical variables are screened by univariable analysis and include patient sex, age, lymph node site, neck level, and longitudinal and transverse diameters, all significantly associated with subtype. Experiments show that fusing all four ultrasound modalities outperforms single modalities and partial combinations, and that the complete model with clinical variables achieves the highest diagnostic accuracy, significantly exceeding senior sonographers.
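
The univariable screening step described above sits outside the network itself and is not included in the code later in this post. The sketch below shows one plausible way to do it, assuming a pandas DataFrame with one row per node, a binary subtype column (0 = squamous, 1 = adeno), and illustrative column names; chi-square tests for categorical variables and Mann-Whitney U tests for continuous ones stand in for whatever tests the original analysis used.

# Minimal sketch of univariable clinical-variable screening.
# Column names, the target coding, and the 0.05 threshold are assumptions.
import pandas as pd
from scipy.stats import chi2_contingency, mannwhitneyu

def screen_clinical_variables(df, target='subtype',
                              categorical=('sex', 'site', 'neck_level'),
                              continuous=('age', 'long_diameter', 'short_diameter'),
                              alpha=0.05):
    selected = []
    for col in categorical:
        # Chi-square test of independence for categorical variables
        table = pd.crosstab(df[col], df[target])
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            selected.append((col, p))
    for col in continuous:
        # Mann-Whitney U test for continuous variables (no normality assumption)
        group0 = df.loc[df[target] == 0, col]
        group1 = df.loc[df[target] == 1, col]
        _, p = mannwhitneyu(group0, group1)
        if p < alpha:
            selected.append((col, p))
    # Variables passing the threshold feed the ClinicalFeatureEncoder below
    return selected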

(3) A comprehensive prediction model for the primary tumor source of metastatic lymph nodes

Identifying the primary tumor source of metastatic cervical lymph nodes is critical for planning targeted treatment, but the search for the primary lesion often requires extensive imaging and endoscopic examinations, which are time-consuming and burdensome for patients. Predicting the likely primary source from the ultrasound features of the metastatic node can substantially narrow the examination scope and accelerate diagnosis. Building on the subtype-diagnosis model, this study constructs a comprehensive model that predicts the primary tumor source as a four-class problem covering the four common origins: head-and-neck squamous cell carcinoma, thyroid cancer, lung cancer, and gastrointestinal cancer. The architecture follows the same multi-step modality-fusion design, integrating conventional ultrasound, ultrasound elastography, CEUS, and clinical information. Univariable analysis first identifies the clinical variables significantly associated with the primary source, namely patient age, sex, and the neck level of the node, from which a baseline prediction model using clinical information alone is built.
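
The post does not specify what the clinical-information baseline model is; as a hedged sketch, a multinomial logistic regression over the three screened variables is a reasonable stand-in. The column order and preprocessing choices here are assumptions.

# Minimal sketch of a clinical baseline for the four-class primary-source
# prediction, assuming inputs ordered as [age, sex, neck_level] with age
# continuous and the other two categorical. Logistic regression stands in
# for the unspecified baseline classifier.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def build_clinical_baseline():
    preprocess = ColumnTransformer([
        ('num', StandardScaler(), [0]),                           # age
        ('cat', OneHotEncoder(handle_unknown='ignore'), [1, 2]),  # sex, neck level
    ])
    return Pipeline([
        ('prep', preprocess),
        ('clf', LogisticRegression(max_iter=1000)),  # multinomial for >2 classes
    ])

The complete PyTorch implementation of all three deep models in this post follows.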

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
from torchvision import models
from sklearn.metrics import roc_auc_score, accuracy_score


class DualModalityEncoder(nn.Module):
    # Dual-branch encoder: one ResNet-50 backbone per modality (B-mode, CDFI),
    # features concatenated after global average pooling.
    def __init__(self, backbone='resnet50'):
        super(DualModalityEncoder, self).__init__()
        if backbone == 'resnet50':
            resnet = models.resnet50(pretrained=True)
            self.bmode_encoder = nn.Sequential(*list(resnet.children())[:-2])
            resnet2 = models.resnet50(pretrained=True)
            self.cdfi_encoder = nn.Sequential(*list(resnet2.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.feature_dim = 2048 * 2

    def forward(self, bmode_img, cdfi_img):
        bmode_features = self.pool(self.bmode_encoder(bmode_img)).flatten(1)
        cdfi_features = self.pool(self.cdfi_encoder(cdfi_img)).flatten(1)
        fused_features = torch.cat([bmode_features, cdfi_features], dim=1)
        return fused_features


class SubModel(nn.Module):
    # One binary classifier of the hierarchy: dual-modality encoder + MLP head.
    def __init__(self, num_classes=2):
        super(SubModel, self).__init__()
        self.encoder = DualModalityEncoder()
        self.classifier = nn.Sequential(
            nn.Linear(self.encoder.feature_dim, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes)
        )

    def forward(self, bmode_img, cdfi_img):
        features = self.encoder(bmode_img, cdfi_img)
        logits = self.classifier(features)
        return logits


class HierarchicalDiagnosticModel(nn.Module):
    # Three submodels: 1 = benign vs malignant, 2 = reactive hyperplasia vs
    # tuberculous lymphadenitis (benign branch), 3 = lymphoma vs metastasis
    # (malignant branch). Final class order: 0 reactive hyperplasia,
    # 1 tuberculous lymphadenitis, 2 lymphoma, 3 metastasis.
    def __init__(self):
        super(HierarchicalDiagnosticModel, self).__init__()
        self.submodel1 = SubModel(num_classes=2)
        self.submodel2 = SubModel(num_classes=2)
        self.submodel3 = SubModel(num_classes=2)

    def forward(self, bmode_img, cdfi_img, return_all=False):
        logits1 = self.submodel1(bmode_img, cdfi_img)
        prob1 = F.softmax(logits1, dim=1)
        is_malignant = prob1[:, 1] > 0.5  # hard gating at the 0.5 threshold
        logits2 = self.submodel2(bmode_img, cdfi_img)
        logits3 = self.submodel3(bmode_img, cdfi_img)
        batch_size = bmode_img.size(0)
        final_probs = torch.zeros(batch_size, 4).to(bmode_img.device)
        prob2 = F.softmax(logits2, dim=1)
        prob3 = F.softmax(logits3, dim=1)
        # Chain P(branch) with the conditional submodel probabilities;
        # only the gated branch of each sample is populated.
        for i in range(batch_size):
            if is_malignant[i]:
                final_probs[i, 2] = prob1[i, 1] * prob3[i, 0]
                final_probs[i, 3] = prob1[i, 1] * prob3[i, 1]
            else:
                final_probs[i, 0] = prob1[i, 0] * prob2[i, 0]
                final_probs[i, 1] = prob1[i, 0] * prob2[i, 1]
        if return_all:
            return final_probs, logits1, logits2, logits3
        return final_probs


class TripleModalityEncoder(nn.Module):
    # Three-branch encoder for the static modalities: B-mode, CDFI, elastography.
    def __init__(self):
        super(TripleModalityEncoder, self).__init__()
        self.bmode_encoder = self._create_encoder()
        self.cdfi_encoder = self._create_encoder()
        self.ue_encoder = self._create_encoder()
        self.feature_dim = 512 * 3

    def _create_encoder(self):
        resnet = models.resnet34(pretrained=True)
        return nn.Sequential(*list(resnet.children())[:-1])

    def forward(self, bmode_img, cdfi_img, ue_img):
        bmode_feat = self.bmode_encoder(bmode_img).flatten(1)
        cdfi_feat = self.cdfi_encoder(cdfi_img).flatten(1)
        ue_feat = self.ue_encoder(ue_img).flatten(1)
        return torch.cat([bmode_feat, cdfi_feat, ue_feat], dim=1)


class CEUSVideoEncoder(nn.Module):
    # Per-frame ResNet-18 features + bidirectional LSTM over the frame sequence
    # to capture the dynamic perfusion pattern of the CEUS video.
    def __init__(self, num_frames=16, hidden_dim=256):
        super(CEUSVideoEncoder, self).__init__()
        self.frame_encoder = models.resnet18(pretrained=True)
        self.frame_encoder.fc = nn.Identity()
        self.temporal_encoder = nn.LSTM(512, hidden_dim, num_layers=2,
                                        batch_first=True, bidirectional=True)
        self.feature_dim = hidden_dim * 2

    def forward(self, video):
        batch_size, num_frames, C, H, W = video.shape
        frames = video.view(batch_size * num_frames, C, H, W)
        frame_features = self.frame_encoder(frames)
        frame_features = frame_features.view(batch_size, num_frames, -1)
        lstm_out, (h_n, c_n) = self.temporal_encoder(frame_features)
        # Concatenate the last layer's forward and backward hidden states
        video_features = torch.cat([h_n[-2], h_n[-1]], dim=1)
        return video_features


class ClinicalFeatureEncoder(nn.Module):
    # Small MLP embedding of the screened clinical variables.
    def __init__(self, num_clinical_features, hidden_dim=64):
        super(ClinicalFeatureEncoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_clinical_features, 128),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(128, hidden_dim)
        )
        self.feature_dim = hidden_dim

    def forward(self, clinical_features):
        return self.encoder(clinical_features)


class MultiStepModalityFusionModel(nn.Module):
    # Step 1: fuse the three static modalities. Step 2: fuse in the CEUS video
    # features. Step 3: fuse in the clinical features and classify.
    def __init__(self, num_clinical_features=6, num_classes=2):
        super(MultiStepModalityFusionModel, self).__init__()
        self.static_encoder = TripleModalityEncoder()
        self.video_encoder = CEUSVideoEncoder()
        self.clinical_encoder = ClinicalFeatureEncoder(num_clinical_features)
        step1_dim = self.static_encoder.feature_dim
        self.step1_fusion = nn.Sequential(
            nn.Linear(step1_dim, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.4)
        )
        step2_dim = 512 + self.video_encoder.feature_dim
        self.step2_fusion = nn.Sequential(
            nn.Linear(step2_dim, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3)
        )
        final_dim = 256 + self.clinical_encoder.feature_dim
        self.classifier = nn.Sequential(
            nn.Linear(final_dim, 128),
            nn.ReLU(inplace=True),
            nn.Dropout(0.2),
            nn.Linear(128, num_classes)
        )

    def forward(self, bmode_img, cdfi_img, ue_img, ceus_video, clinical_features):
        static_features = self.static_encoder(bmode_img, cdfi_img, ue_img)
        step1_features = self.step1_fusion(static_features)
        video_features = self.video_encoder(ceus_video)
        step2_input = torch.cat([step1_features, video_features], dim=1)
        step2_features = self.step2_fusion(step2_input)
        clinical_features = self.clinical_encoder(clinical_features)
        final_features = torch.cat([step2_features, clinical_features], dim=1)
        logits = self.classifier(final_features)
        return logits


class PrimaryTumorPredictionModel(nn.Module):
    # Four-class primary-source prediction reusing the multi-step fusion design
    # with the three screened clinical variables (age, sex, neck level).
    def __init__(self, num_clinical_features=3, num_classes=4):
        super(PrimaryTumorPredictionModel, self).__init__()
        self.msmfm = MultiStepModalityFusionModel(num_clinical_features, num_classes)

    def forward(self, bmode_img, cdfi_img, ue_img, ceus_video, clinical_features):
        return self.msmfm(bmode_img, cdfi_img, ue_img, ceus_video, clinical_features)


class AttentionFusion(nn.Module):
    # Optional alternative to plain concatenation: learn a softmax weight per
    # modality feature block and rescale each block before concatenating.
    def __init__(self, feature_dims):
        super(AttentionFusion, self).__init__()
        total_dim = sum(feature_dims)
        self.attention = nn.Sequential(
            nn.Linear(total_dim, 128),
            nn.Tanh(),
            nn.Linear(128, len(feature_dims)),
            nn.Softmax(dim=1)
        )

    def forward(self, features_list):
        concat_features = torch.cat(features_list, dim=1)
        attention_weights = self.attention(concat_features)
        weighted_features = []
        for i, feat in enumerate(features_list):
            weight = attention_weights[:, i:i+1]
            weighted_features.append(feat * weight)
        return torch.cat(weighted_features, dim=1)


def train_hierarchical_model(model, train_loader, val_loader, epochs, learning_rate):
    # Labels are coded 0-3: 0/1 benign subtypes, 2/3 malignant subtypes.
    # Each submodel is supervised only on the samples of its branch.
    optimizer = optim.Adam(model.parameters(), lr=learning_rate, weight_decay=1e-4)
    criterion = nn.CrossEntropyLoss()
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', patience=5)
    for epoch in range(epochs):
        model.train()
        for batch in train_loader:
            bmode, cdfi, labels = batch
            optimizer.zero_grad()
            probs, logits1, logits2, logits3 = model(bmode, cdfi, return_all=True)
            binary_labels = (labels >= 2).long()
            loss1 = criterion(logits1, binary_labels)
            benign_mask = labels < 2
            if benign_mask.sum() > 0:
                loss2 = criterion(logits2[benign_mask], labels[benign_mask])
            else:
                loss2 = 0
            malignant_mask = labels >= 2
            if malignant_mask.sum() > 0:
                loss3 = criterion(logits3[malignant_mask], labels[malignant_mask] - 2)
            else:
                loss3 = 0
            total_loss = loss1 + loss2 + loss3
            total_loss.backward()
            optimizer.step()
        # Validation accuracy drives the learning-rate scheduler
        model.eval()
        val_preds, val_labels = [], []
        with torch.no_grad():
            for batch in val_loader:
                bmode, cdfi, labels = batch
                probs = model(bmode, cdfi)
                preds = probs.argmax(dim=1)
                val_preds.extend(preds.cpu().numpy())
                val_labels.extend(labels.cpu().numpy())
        val_acc = accuracy_score(val_labels, val_preds)
        scheduler.step(val_acc)
    return model


def train_msmfm(model, train_loader, val_loader, epochs, learning_rate):
    # Standard cross-entropy training for the multi-step fusion model.
    optimizer = optim.Adam(model.parameters(), lr=learning_rate, weight_decay=1e-4)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        model.train()
        for batch in train_loader:
            bmode, cdfi, ue, ceus, clinical, labels = batch
            optimizer.zero_grad()
            logits = model(bmode, cdfi, ue, ceus, clinical)
            loss = criterion(logits, labels)
            loss.backward()
            optimizer.step()
    return model


def calculate_metrics(predictions, labels, num_classes):
    # Overall accuracy plus macro-averaged one-vs-rest AUC.
    accuracy = accuracy_score(labels, predictions.argmax(axis=1))
    auc_scores = []
    for i in range(num_classes):
        binary_labels = (np.array(labels) == i).astype(int)
        if len(np.unique(binary_labels)) > 1:
            auc = roc_auc_score(binary_labels, predictions[:, i])
            auc_scores.append(auc)
    return accuracy, np.mean(auc_scores)


if __name__ == "__main__":
    # Smoke test: build all three models and run random tensors through them.
    hdm = HierarchicalDiagnosticModel()
    msmfm = MultiStepModalityFusionModel(num_clinical_features=6, num_classes=2)
    primary_model = PrimaryTumorPredictionModel(num_clinical_features=3, num_classes=4)
    dummy_bmode = torch.randn(4, 3, 224, 224)
    dummy_cdfi = torch.randn(4, 3, 224, 224)
    dummy_ue = torch.randn(4, 3, 224, 224)
    dummy_ceus = torch.randn(4, 16, 3, 112, 112)
    dummy_clinical = torch.randn(4, 6)
    hdm_output = hdm(dummy_bmode, dummy_cdfi)
    msmfm_output = msmfm(dummy_bmode, dummy_cdfi, dummy_ue, dummy_ceus, dummy_clinical)

