news 2026/5/13 8:36:27

Image Classification: From Traditional Methods to Deep Learning


张小明

Front-end Development Engineer


1. Technical Analysis

1.1 Evolution of Image Classification Techniques

Image classification has evolved from traditional hand-crafted pipelines to deep learning:

Technology roadmap:

- Traditional methods: SIFT/SURF features + SVM
- Deep learning: AlexNet → ResNet → ViT

1.2 Comparison of Classification Methods

| Method | Feature extraction | Model type | Accuracy | Data scale |
| --- | --- | --- | --- | --- |
| SIFT + SVM | Hand-crafted features | Traditional model | — | Small |
| AlexNet | CNN | Deep learning | — | Medium |
| ResNet | Residual CNN | Deep learning | Very high | Large |
| ViT | Transformer | Pretrained | Highest | Large |

1.3 Image Classification Metrics

Common evaluation metrics:

- Top-1 accuracy: fraction of samples whose most probable predicted class is correct
- Top-5 accuracy: fraction of samples whose true class appears among the top 5 predictions
- Confusion matrix: per-class breakdown of predictions versus ground truth
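As a minimal sketch of the first two metrics (assuming PyTorch and a logits tensor of shape (N, C); the function name `topk_accuracy` is illustrative, not from any library):

```python
import torch

def topk_accuracy(logits: torch.Tensor, labels: torch.Tensor, k: int = 1) -> float:
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    # Indices of the k largest logits per row: shape (N, k)
    topk = logits.topk(k, dim=1).indices
    # A sample counts as correct if its label appears anywhere in those k indices
    hits = (topk == labels.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()

# Tiny example: 3 samples, 6 classes
logits = torch.tensor([[0.1, 0.9, 0.0, 0.0, 0.0, 0.0],   # predicts class 1
                       [0.8, 0.1, 0.0, 0.0, 0.0, 0.1],   # predicts class 0
                       [0.0, 0.0, 0.2, 0.3, 0.4, 0.1]])  # predicts class 4
labels = torch.tensor([1, 5, 3])
top1 = topk_accuracy(logits, labels, k=1)  # only sample 0 is right -> 1/3
top5 = topk_accuracy(logits, labels, k=5)
```

For Top-5, a prediction counts as correct as soon as the true class is anywhere in the five highest-scoring entries, which is why Top-5 is always at least as high as Top-1.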

2. Core Implementation

2.1 Traditional Image Classification

import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler


class SIFTClassifier:
    def __init__(self):
        self.sift = cv2.SIFT_create()
        self.svm = SVC()
        self.scaler = StandardScaler()

    def extract_features(self, image):
        # Average the 128-dim SIFT descriptors into one global feature vector
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        keypoints, descriptors = self.sift.detectAndCompute(gray, None)
        if descriptors is not None:
            return descriptors.mean(axis=0)
        return np.zeros(128)  # no keypoints detected

    def train(self, images, labels):
        features = np.array([self.extract_features(img) for img in images])
        features = self.scaler.fit_transform(features)
        self.svm.fit(features, labels)

    def predict(self, image):
        features = self.scaler.transform([self.extract_features(image)])
        return self.svm.predict(features)[0]


class HOGClassifier:
    def __init__(self):
        # Note: the default HOGDescriptor expects 64x128 windows;
        # resize inputs accordingly or pass a custom winSize.
        self.hog = cv2.HOGDescriptor()
        self.svm = SVC()

    def extract_features(self, image):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        return self.hog.compute(gray).flatten()

    def train(self, images, labels):
        features = np.array([self.extract_features(img) for img in images])
        self.svm.fit(features, labels)

    def predict(self, image):
        return self.svm.predict([self.extract_features(image)])[0]

2.2 CNN Image Classification

import torch
import torch.nn as nn


class SimpleCNN(nn.Module):
    """Three conv blocks + two FC layers; sized for 32x32 inputs (e.g. CIFAR-10)."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),   # 16x16 -> 8x8
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),   # 8x8 -> 4x4
        )
        self.fc_layers = nn.Sequential(
            nn.Linear(128 * 4 * 4, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):
        x = self.conv_layers(x)
        x = x.view(-1, 128 * 4 * 4)
        return self.fc_layers(x)


class AlexNet(nn.Module):
    """Classic AlexNet; expects 224x224 inputs (features output is 256x6x6)."""

    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(-1, 256 * 6 * 6)
        return self.classifier(x)

2.3 Vision Transformer Implementation

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        self.img_size = img_size
        self.patch_size = patch_size
        self.num_patches = (img_size // patch_size) ** 2
        # A strided conv splits the image into patches and projects each one
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x)                  # (B, E, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)  # (B, num_patches, E)
        return x


class TransformerBlock(nn.Module):
    def __init__(self, embed_dim, num_heads, mlp_ratio=4.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(embed_dim)
        # batch_first=True: inputs are (B, N, E), matching PatchEmbedding's output
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(embed_dim)
        mlp_dim = int(embed_dim * mlp_ratio)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, mlp_dim),
            nn.GELU(),
            nn.Linear(mlp_dim, embed_dim),
        )

    def forward(self, x):
        # Pre-norm residual blocks, as in the original ViT
        y = self.norm1(x)
        x = x + self.attn(y, y, y)[0]
        x = x + self.mlp(self.norm2(x))
        return x


class ViT(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768,
                 num_heads=12, num_layers=12, num_classes=1000):
        super().__init__()
        self.patch_embed = PatchEmbedding(img_size, patch_size, in_channels, embed_dim)
        self.cls_token = nn.Parameter(torch.randn(1, 1, embed_dim))
        num_patches = self.patch_embed.num_patches
        self.pos_embed = nn.Parameter(torch.randn(1, num_patches + 1, embed_dim))
        self.blocks = nn.Sequential(*[
            TransformerBlock(embed_dim, num_heads) for _ in range(num_layers)
        ])
        self.norm = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        x = self.patch_embed(x)
        cls_tokens = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls_tokens, x], dim=1)  # prepend the [CLS] token
        x = x + self.pos_embed                 # learned positional embeddings
        x = self.blocks(x)
        x = self.norm(x)
        return self.head(x[:, 0])              # classify from the [CLS] token
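To make the patch-embedding step concrete: a strided convolution chops a 224×224 image into 14×14 = 196 non-overlapping 16×16 patches and projects each one to a 768-dim token. A self-contained shape check (same operation as the `proj` layer above):

```python
import torch
import torch.nn as nn

# Kernel and stride equal to the patch size, so each output position
# sees exactly one 16x16 patch of the input image.
proj = nn.Conv2d(3, 768, kernel_size=16, stride=16)
x = torch.randn(2, 3, 224, 224)              # batch of 2 RGB images
tokens = proj(x).flatten(2).transpose(1, 2)  # (B, 768, 14, 14) -> (B, 196, 768)
print(tokens.shape)  # torch.Size([2, 196, 768])
```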

3. Performance Comparison

3.1 Method Comparison

| Method | Top-1 | Top-5 | Model size | Inference speed |
| --- | --- | --- | --- | --- |
| SIFT + SVM | 60% | 80% | — | — |
| AlexNet | ~63% | ~85% | 240MB | — |
| ResNet-50 | 76% | 93% | 98MB | — |
| ViT-Base | 85% | 98% | 340MB | — |
| ViT-Large | 87% | 99% | 1.2GB | — |

3.2 Performance Across Datasets

| Dataset | SIFT+SVM | AlexNet | ResNet-50 | ViT |
| --- | --- | --- | --- | --- |
| CIFAR-10 | 75% | 92% | 95% | 97% |
| ImageNet | 60% | ~63% | 76% | 85% |
| MNIST | 98% | 99% | 99.7% | 99.8% |

3.3 Effect of Data Augmentation

| Augmentation | Accuracy gain | Compute overhead |
| --- | --- | --- |
| Random crop | +2% | — |
| Random flip | +1% | — |
| Color jitter | +1% | — |
| MixUp | +2% | — |
| CutMix | +2% | — |
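The MixUp row above refers to blending pairs of training examples and their labels. A minimal sketch (the `mixup` helper and its defaults are illustrative assumptions, not a tuned implementation):

```python
import torch
import torch.nn.functional as F

def mixup(images: torch.Tensor, labels: torch.Tensor,
          num_classes: int, alpha: float = 0.2):
    """Blend each image (and its one-hot label) with a randomly paired one."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))  # random partner for each sample
    mixed = lam * images + (1 - lam) * images[perm]
    # Targets become a convex combination of the two one-hot labels
    y1 = F.one_hot(labels, num_classes).float()
    y2 = F.one_hot(labels[perm], num_classes).float()
    return mixed, lam * y1 + (1 - lam) * y2
```

Crops, flips, and color jitter are usually composed per-image before batching; MixUp and CutMix, by contrast, operate on whole batches after collation.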

4. Best Practices

4.1 Model Selection

def select_classifier(task_type, data_size):
    # Heuristic: hand-crafted features for tiny datasets, a CNN for medium
    # ones, a ViT when there is enough data to train or fine-tune one.
    if data_size < 1000:
        return SIFTClassifier()
    if data_size < 10000:
        return SimpleCNN(num_classes=10)
    return ViT(num_classes=10)


class ClassifierFactory:
    @staticmethod
    def create(config):
        if config['type'] == 'traditional':
            return SIFTClassifier()
        if config['type'] == 'cnn':
            return SimpleCNN(**config['params'])
        if config['type'] == 'vit':
            return ViT(**config['params'])
        raise ValueError(f"unknown classifier type: {config['type']}")

4.2 Training Pipeline

class ImageClassificationTrainer:
    def __init__(self, model, optimizer, scheduler, loss_fn, device='cuda'):
        self.model = model.to(device)
        self.optimizer = optimizer
        self.scheduler = scheduler
        self.loss_fn = loss_fn
        self.device = device

    def train_step(self, images, labels):
        self.model.train()  # re-enable dropout etc. after evaluate()
        self.optimizer.zero_grad()
        images = images.to(self.device)
        labels = labels.to(self.device)
        loss = self.loss_fn(self.model(images), labels)
        loss.backward()
        self.optimizer.step()
        self.scheduler.step()  # per-batch scheduler (e.g. cosine/warmup)
        return loss.item()

    def evaluate(self, dataloader):
        self.model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in dataloader:
                images = images.to(self.device)
                labels = labels.to(self.device)
                predictions = torch.argmax(self.model(images), dim=1)
                correct += (predictions == labels).sum().item()
                total += labels.size(0)
        return correct / total
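The trainer above calls `scheduler.step()` once per batch, so it expects a per-step scheduler. One common pairing, as a sketch with placeholder hyperparameters (the tiny `nn.Linear` model stands in for any classifier):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for any of the classifiers above
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.05)
# Cosine decay stepped once per batch, matching the trainer's scheduler.step()
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)
loss_fn = nn.CrossEntropyLoss()

# One simulated training step
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
scheduler.step()
```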

5. Summary

Image classification is a foundational computer-vision task:

  1. Traditional methods: simple and fast; suited to small datasets
  2. CNNs: the mainstream deep-learning approach, with strong results
  3. ViT: Transformers applied to vision; the best accuracy at scale
  4. Data augmentation: improves generalization

Key takeaways from the comparisons above:

  • ViT performs best on large-scale data
  • CNNs offer the best cost/accuracy trade-off at medium scale
  • Data augmentation can add roughly 5-10% accuracy
  • Fine-tuning a pretrained model is recommended over training from scratch