TensorFlow Learning Notes

mtain · April 26, 2022

1. Development Environment

Conda + PyCharm + TensorFlow 2

Official documentation: https://tensorflow.google.cn/

2. Environment Setup

Create the virtual environment
conda create -n tensorflow python=3.9.10

Activate the environment
conda activate tensorflow

Install TensorFlow (Apple Silicon / Metal packages)
conda install -c apple tensorflow-deps==2.6.0

python -m pip install tensorflow-macos

python -m pip install tensorflow-metal
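
After installing, it is worth a quick sanity check that TensorFlow imports and that the Metal GPU device is visible (run inside the activated environment; the exact output depends on your versions):

python -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))"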


# Install Jupyter
pip install jupyter
# Start the notebook server
jupyter notebook

3. Getting-Started Examples

3.1 TensorFlow2.0-Examples

Github: https://github.com/YunYang1994/TensorFlow2.0-Examples

  1. Introduction
    Hello World: a very simple example showing how to print "hello world" with TensorFlow
    Variable: learn to use variables in TensorFlow
    Basic Operations: a simple example covering basic TensorFlow operations
    Activations: a first look at some activation functions in TensorFlow
    GradientTape: introduces the key technique behind automatic differentiation (a minimal sketch follows this list)

  2. Basic Models
    Linear Regression: implement linear regression with TensorFlow
    Logistic Regression: implement logistic regression with TensorFlow
    Multilayer Perceptron: implement a multilayer perceptron model with TensorFlow
    CNN: implement a CNN model with TensorFlow

  3. Neural Network Architecture
    VGG16: Very Deep Convolutional Networks for Large-Scale Image Recognition
    ResNet: Deep Residual Learning for Image Recognition
    AutoEncoder: reduce the dimensionality of data with a neural network
    FPN: Feature Pyramid Networks for Object Detection

  4. Object Detection
    RPN: a Region Proposal Network
    MTCNN: Joint Face Detection and Alignment Using Multi-task Cascaded Convolutional Networks (face detection and alignment)
    YOLOv3: An Incremental Improvement
    SSD: Single Shot MultiBox Detector [TODO]
    Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks [TODO]

  5. Image Segmentation
    FCN: Fully Convolutional Networks for Semantic Segmentation
    U-Net: Convolutional Networks for Biomedical Image Segmentation

  6. Generative Adversarial Networks
    DCGAN: Deep Convolutional Generative Adversarial Networks
    Pix2Pix: Image-to-Image Translation with Conditional Adversarial Networks

  7. Utils
    Multiple GPU Training: train a model with multiple GPUs
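
As referenced in the Introduction item above, most of these examples build on tf.GradientTape. A minimal, self-contained sketch of automatic differentiation (the values here are illustrative, not taken from the repository):

import tensorflow as tf

# Differentiate y = x^2 + 3x at x = 2; the analytic derivative is 2x + 3 = 7
x = tf.Variable(2.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 3 * x
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # 7.0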

3.2 MNIST Handwritten Digit Recognition

MNIST dataset: http://yann.lecun.com/exdb/mnist/

Official example: https://tensorflow.google.cn/tutorials/quickstart/advanced
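
A minimal sketch in the spirit of the official quickstart, using the built-in tf.keras.datasets.mnist loader (the layer sizes and epoch count are illustrative; the advanced tutorial linked above instead uses a subclassed model with GradientTape, much like the Fashion-MNIST code in section 3.3):

import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)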

3.3 fashion_mnist Clothing Classification

Github: https://github.com/zalandoresearch/fashion-mnist (the repository includes the dataset files)

TensorFlow implementation:

import tensorflow as tf

import numpy as np
import os
import gzip
import datetime

# Data-loading helper: data_folder is the directory containing the four .gz files:
# 'train-labels-idx1-ubyte.gz', 'train-images-idx3-ubyte.gz',
# 't10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz'

def load_data(data_folder):
  files = [
      'train-labels-idx1-ubyte.gz', 'train-images-idx3-ubyte.gz',
      't10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz'
  ]

  paths = []
  for fname in files:
    paths.append(os.path.join(data_folder, fname))

  with gzip.open(paths[0], 'rb') as lbpath:
    y_train = np.frombuffer(lbpath.read(), np.uint8, offset=8)

  with gzip.open(paths[1], 'rb') as imgpath:
    x_train = np.frombuffer(
        imgpath.read(), np.uint8, offset=16).reshape(len(y_train), 28, 28)

  with gzip.open(paths[2], 'rb') as lbpath:
    y_test = np.frombuffer(lbpath.read(), np.uint8, offset=8)

  with gzip.open(paths[3], 'rb') as imgpath:
    x_test = np.frombuffer(
        imgpath.read(), np.uint8, offset=16).reshape(len(y_test), 28, 28)

  return (x_train, y_train), (x_test, y_test)


starttime = datetime.datetime.now()


(train_x, train_y), (test_x, test_y) = load_data('MNIST_data/')


# Alternatively, download the training data over the network;
# by default it is cached under ~/.keras/datasets/fashion-mnist/t10k-labels-idx1-ubyte.gz
# (train_x, train_y), (test_x, test_y) = tf.keras.datasets.fashion_mnist.load_data()


def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.0
    y = tf.cast(y, dtype=tf.int32)
    return x, y

train_db = tf.data.Dataset.from_tensor_slices((train_x, train_y))
train_db = train_db.map(preprocess).shuffle(1000).batch(32)
test_db = tf.data.Dataset.from_tensor_slices((test_x, test_y))
test_db = test_db.map(preprocess).batch(32)


class Mymodel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(filters=32, kernel_size=[3, 3], padding='same', activation=tf.nn.relu)
        self.pool1 = tf.keras.layers.MaxPool2D(pool_size=[2, 2], strides=[2, 2])
        self.conv2 = tf.keras.layers.Conv2D(filters=64, kernel_size=[3, 3], padding='same', activation=tf.nn.relu)
        self.pool2 = tf.keras.layers.MaxPool2D(pool_size=[2, 2], strides=[2, 2])
        self.flatten = tf.keras.layers.Flatten()
        self.fc1 = tf.keras.layers.Dense(64, activation=tf.nn.relu)
        self.fc2 = tf.keras.layers.Dense(10, activation=tf.nn.softmax)

    def call(self, inputs):
        x = self.conv1(inputs)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.pool2(x)
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.fc2(x)
        return x


model = Mymodel()
model.build(input_shape=(None, 28, 28, 1))
model.summary()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

for epoch in range(10):
    train_loss = 0
    train_num = 0
    for x, y in train_db:
        x = tf.reshape(x, [-1, 28, 28, 1])
        with tf.GradientTape() as tape:
            pred = model(x)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=pred)
            loss = tf.reduce_mean(loss)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_loss += float(loss)   # accumulate the per-batch mean loss
        train_num += x.shape[0]     # count the samples seen
    # Note: train_loss sums per-batch mean losses, so dividing by the sample count
    # gives roughly (mean loss / batch size); this matches the small values printed below.
    loss = train_loss / train_num

    total_correct = 0
    total_num = 0
    for x, y in test_db:
        x = tf.reshape(x, [-1, 28, 28, 1])
        pred = model(x)
        pred = tf.argmax(pred, axis=1)
        pred = tf.cast(pred, dtype=tf.int32)
        correct = tf.equal(pred, y)
        correct = tf.reduce_sum(tf.cast(correct, dtype=tf.int32))
        total_correct += correct
        total_num += x.shape[0]
    accuracy = float(total_correct / total_num)
    print(epoch, 'loss:', loss, 'accuracy:', accuracy)

print('.....................Prediction.............................')
for x, y in test_db:
    img = x
    label = y
    break

x = tf.reshape(x, [-1, 28, 28, 1])
logits = model(x)
logits = tf.argmax(logits, axis=1)
logits = tf.cast(logits, dtype=tf.int32)

print('logits:', logits)
print('label:', label)

print('Do the predictions match the labels?', tf.equal(logits, y))

endtime = datetime.datetime.now()
print('Elapsed time:', endtime - starttime)

Results on an Apple M1 chip:

0 loss: 0.013465284936937194 accuracy: 0.8749
1 loss: 0.008801757772266865 accuracy: 0.8936
2 loss: 0.007410778609818468 accuracy: 0.9043
3 loss: 0.0063808930972901485 accuracy: 0.9113
4 loss: 0.005589933775334309 accuracy: 0.9129
5 loss: 0.004844056898301157 accuracy: 0.9089
6 loss: 0.00420004242938788 accuracy: 0.9125
7 loss: 0.0036903093538402268 accuracy: 0.9121
8 loss: 0.003222900984052103 accuracy: 0.9186
9 loss: 0.0027694650786501975 accuracy: 0.9168
.....................Prediction.............................
logits: tf.Tensor([9 2 1 1 6 1 4 6 5 7 4 5 7 3 4 1 2 4 8 0 2 5 7 5 1 2 6 0 9 3 8 8], shape=(32,), dtype=int32)
label: tf.Tensor([9 2 1 1 6 1 4 6 5 7 4 5 7 3 4 1 2 4 8 0 2 5 7 9 1 4 6 0 9 3 8 8], shape=(32,), dtype=int32)
Do the predictions match the labels? tf.Tensor(
[ True  True  True  True  True  True  True  True  True  True  True  True
  True  True  True  True  True  True  True  True  True  True  True False
  True False  True  True  True  True  True  True], shape=(32,), dtype=bool)
Elapsed time: 0:03:10.046904