We create a sequential network with specific pooling layers from the values contained in the dataset. The same process also applies well to the image recognition module.
The following steps are used to create a sequence-processing model with convolutional neural networks (ConvNets) in PyTorch:
1. Import the modules
Import the modules required to perform sequence processing with a convolutional neural network (ConvNet).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
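The torchvision imports above are needed for the dataset and transform calls in the next step. A quick, optional environment check (an addition, not part of the original steps) can confirm the installation:

import torchvision

print(torch.__version__)          # installed PyTorch version
print(torchvision.__version__)    # must come from a build matching torch
print(torch.cuda.is_available())  # True if training can run on a CUDA GPU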
2. Transform and load the data
Use the code below to perform the necessary operations and create the model in the respective sequence:
# Hyperparameters
batch_size = 128
num_classes = 10
epochs = 12
img_rows, img_cols = 28, 28

# Data transformation and loading
transform = transforms.Compose([
    # Convert to (C, H, W) tensors in the range [0, 1]
    transforms.ToTensor(),
    # Normalize with the MNIST mean and standard deviation
    transforms.Normalize((0.1307,), (0.3081,))
])

train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=2)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False, num_workers=2)

# Print dataset information
print(f'x_train shape: {train_dataset.data.shape} (raw tensor shape)')
print(f'{len(train_dataset)} train samples')
print(f'{len(test_dataset)} test samples')
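Before moving on, it can help to verify that the loaders yield batches of the expected shape; a minimal sanity check (an addition, not from the original tutorial):

# Fetch one batch and inspect its shape
images, labels = next(iter(train_loader))
print(images.shape)  # expected: torch.Size([128, 1, 28, 28])
print(labels.shape)  # expected: torch.Size([128])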
3. Train the model
1) Model
This defines a two-dimensional convolutional neural network (CNN) for image classification. The model contains two convolutional layers for feature extraction and two fully connected layers for classification, with ReLU activations, max pooling, and Dropout in between for feature compression and regularization. It is suited to grayscale images such as MNIST and outputs scores for 10 classes; the flatten-size arithmetic is verified in the sketch after the class definition.
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)  # 1 input channel (grayscale)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout(0.25)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        return x

model = CNN()
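The in_features=9216 of fc1 follows from the feature-map arithmetic: two unpadded 3×3 convolutions shrink 28×28 to 26×26 and then 24×24, and 2×2 max pooling halves that to 12×12, so flattening gives 64 × 12 × 12 = 9216. A small sketch (an addition to the tutorial) verifies this with a dummy input:

# Pass a fake MNIST image through the convolutional part only
with torch.no_grad():
    dummy = torch.zeros(1, 1, 28, 28)
    feats = F.max_pool2d(F.relu(model.conv2(F.relu(model.conv1(dummy)))), 2)
    print(feats.shape)                    # torch.Size([1, 64, 12, 12])
    print(torch.flatten(feats, 1).shape)  # torch.Size([1, 9216])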
2) Load and train
Load the training and test datasets through DataLoader, move the model to the GPU or CPU, and train it for several epochs with CrossEntropyLoss and the Adadelta optimizer. Each epoch computes the training loss and accuracy; afterwards the model is evaluated on the test set, and the test loss and accuracy are printed.
# (Re)create the data loaders; equivalent loaders were already built in step 2
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size)

# Move the model defined above (a classifier) to GPU if available, else CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Loss function and optimizer
# CrossEntropyLoss expects integer class indices (not one-hot labels);
# the MNIST targets are already integer-encoded
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adadelta(model.parameters())

# Train the model
for epoch in range(epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, targets in train_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * inputs.size(0)
        _, predicted = torch.max(outputs, 1)
        correct += (predicted == targets).sum().item()
        total += targets.size(0)
    train_loss = running_loss / total
    train_acc = correct / total
    print(f"Epoch {epoch + 1}/{epochs} - Loss: {train_loss:.4f} - Accuracy: {train_acc:.4f}")
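optim.Adadelta defaults to lr=1.0, mirroring the Keras example this walkthrough follows. If you want the learning rate to decay across epochs, one option is to pair Adadelta with a StepLR scheduler, as PyTorch's official MNIST example does; the gamma value below is an assumption, not something from this tutorial:

from torch.optim.lr_scheduler import StepLR

# Multiply the learning rate by 0.7 after every epoch (assumed decay factor)
scheduler = StepLR(optimizer, step_size=1, gamma=0.7)
# Inside the training loop, call scheduler.step() once at the end of each epoch.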
3) Test the model
# Test the model
model.eval()
test_loss = 0.0
correct = 0
total = 0
with torch.no_grad():
    for inputs, targets in test_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        test_loss += loss.item() * inputs.size(0)
        _, predicted = torch.max(outputs, 1)
        correct += (predicted == targets).sum().item()
        total += targets.size(0)

test_loss /= total
test_acc = correct / total
print("Test loss:", test_loss)
print("Test accuracy:", test_acc)
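After evaluation, the trained model can also classify individual images; a minimal inference sketch (an addition, not part of the original steps):

# Predict the class of the first test image
model.eval()
image, label = test_dataset[0]  # a (1, 28, 28) tensor and its integer label
with torch.no_grad():
    logits = model(image.unsqueeze(0).to(device))  # add a batch dimension
    pred = logits.argmax(dim=1).item()
print(f"predicted: {pred}, actual: {label}")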