When we studied the VGG paper, we saw that convolutional neural networks can indeed be built quite deep; VGG16 and VGG19 are proof of that. But can convolutional networks go deeper still? Of course they can. Deep neural networks extract image features at every level of abstraction, pushing image-recognition accuracy higher and higher. Around 2014 and 2015, however, making a convolutional network deeper while still training it well was not an easy task.

The first major problems deep convolutional networks ran into were vanishing and exploding gradients. What are they? Vanishing gradients occur when, during the training of a deep network, the computed gradients become smaller and smaller, so the weights stop being updated and the algorithm effectively stops learning. Exploding gradients are the opposite: the gradients grow larger and larger during training, the weights are updated wildly, the algorithm fails to converge, and the model breaks down. These problems were largely addressed by measures such as ReLU activations and normalization layers. Yet when we push the network to even more layers, training accuracy starts to drop. This drop in accuracy on the training data, which is not caused by overfitting, is called degradation.

From the figure above we can see that the 56-layer plain convolutional network has higher error than the 20-layer one on both the training and test sets: a typical case of degradation.

If this degradation problem is not solved, deep learning cannot go deeper. So Kaiming He and colleagues invented the subject of the paper we are studying today: the residual network, ResNet.

Residual Blocks and Residual Networks

To understand residual networks, you must first understand the residual block, because residual blocks are the basic building units of a residual network. Recall the convolutional architectures we studied earlier (LeNet-5/AlexNet/VGG): the typical structure is convolution and pooling followed by more convolution and pooling, possibly repeated over many layers. In the paper, Kaiming He calls this kind of architecture a plain network, and argues that plain networks cannot solve the degradation problem; an architectural innovation is needed.

The innovation He proposes is to add shortcuts, also called skip connections, between layers. This lets the layers spanned by a shortcut learn an identity function, so that making the network deeper does not, at the very least, make training worse. The basic structure of a residual block is shown below:

The residual block above is a two-layer structure: the input X passes through two weighted layers with activations to produce the output F(X), which is a typical plain convolutional path. The difference is that the residual block adds a shortcut from the input X to the output unit of the two layers, giving the input node's information a direct line of communication to the output node. The output just before the final ReLU activation is then no longer F(X) but F(X)+X. When many blocks with this structure are stacked together, a residual network is formed. Residual networks can successfully train very deep convolutional networks and largely solve the degradation problem.

You might ask: why does adding a shortcut from the input to the output prevent degradation and let us train deeper convolutional networks? Put differently, why do residual networks work? Write the input and output of the two layers in the residual block above as a^[l] and a^[l+2]. For the plain path we then have:

z^[l+1] = W^[l+1] a^[l] + b^[l+1],   a^[l+1] = g(z^[l+1])
z^[l+2] = W^[l+2] a^[l+1] + b^[l+2],   a^[l+2] = g(z^[l+2])

After adding the skip connection that carries a^[l], this becomes:

a^[l+2] = g(z^[l+2] + a^[l]) = g(W^[l+2] a^[l+1] + b^[l+2] + a^[l])

When L2 regularization (weight decay) is applied to the network, or in similar situations, the weights W^[l+2] of layer l+2 easily decay towards zero; assuming the bias is also zero, we get a^[l+2] = g(a^[l]) = a^[l], since a^[l] is itself the output of a ReLU and applying ReLU again leaves it unchanged. Deep learning experiments show that learning this identity mapping is not difficult. This means that a plain network equipped with skip connections performs no worse after a few extra layers are added than it did before. Of course, the goal is not merely to avoid degradation but to improve performance: whenever the hidden layers learn something useful on top of the identity, the residual network gets better. So residual networks are effective because they can learn this identity mapping easily, whereas plain networks struggle even with the identity; in that comparison the residual network naturally wins.
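To make the argument concrete, here is a minimal numpy sketch (an illustration added here, not part of the original derivation) of a two-layer residual block whose second-layer weights and bias have decayed to zero; the block output then collapses to its input:

import numpy as np

def relu(x):
    return np.maximum(x, 0)

# a^[l]: activation entering the residual block (non-negative, since it follows a ReLU)
a_l = relu(np.random.randn(4))

# first layer keeps some arbitrary small weights
W1 = np.random.randn(4, 4) * 0.1
b1 = np.zeros(4)
a_l1 = relu(W1 @ a_l + b1)

# weight decay drives the second layer's weights and bias to zero
W2 = np.zeros((4, 4))
b2 = np.zeros(4)
z_l2 = W2 @ a_l1 + b2            # = 0

# with the skip connection, the block reduces to the identity mapping
a_l2 = relu(z_l2 + a_l)          # = relu(a^[l]) = a^[l]

print(np.allclose(a_l2, a_l))    # True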

A residual network composed of many such residual blocks is shown on the right of the figure below:

Keras Implementation of the Residual Block

The key to implementing a residual block is implementing the skip connection. In practice, skip connections come in two flavors depending on whether the block's input and output have the same shape. One is the identity block, used when the input and output shapes match; the other is the convolutional block, used when they do not: as the name suggests, its skip connection contains a convolution that makes the input and output shapes match. Let's look at the Keras implementation of both.

The identity block is illustrated below:

The implementation code is as follows:

from keras.layers import Conv2D, BatchNormalization, Activation, Add
from keras.initializers import glorot_uniform

def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 3

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """
    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters
    # Save the input value. You'll need this later to add back to the main path.
    X_shortcut = X
    # First component of main path
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)
    # Second component of main path
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)
    # Third component of main path
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
    # Final step: Add shortcut value to main path, and pass it through a RELU activation
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    return X

As you can see, the only special part of implementing a residual block is adding the skip connection.
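As a quick sanity check, here is a hedged usage sketch (assuming a Keras/TensorFlow environment and the identity_block function defined above; the toy 4x4x256 shape is made up for illustration): the output of an identity block has exactly the same shape as its input, which is why it can only be used when the input and output shapes match.

from keras.layers import Input
from keras.models import Model

# hypothetical toy input: 4x4 feature maps with 256 channels
X_in = Input(shape=(4, 4, 256))
# the last filter count (256) must equal the input channel count so the Add() works
X_out = identity_block(X_in, f=3, filters=[64, 64, 256], stage=1, block='a')

print(Model(inputs=X_in, outputs=X_out).output_shape)   # (None, 4, 4, 256), same as the input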

The convolutional block is illustrated below:

The implementation code is as follows:

def convolutional_block(X, f, filters, stage, block, s=2):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used

    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """
    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters
    # Save the input value
    X_shortcut = X

    ##### MAIN PATH #####
    # First component of main path
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)
    # Second component of main path
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)
    # Third component of main path
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    ##### SHORTCUT PATH #####
    # The shortcut also passes through a CONV + BatchNorm so its shape matches the main path
    X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)
    # Final step: Add shortcut value to main path, and pass it through a RELU activation
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)
    return X
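A similar sketch (same assumptions as the identity-block example above) shows how the convolutional block changes both the spatial size, through the stride s, and the channel depth, which is exactly the case the identity block cannot handle:

from keras.layers import Input
from keras.models import Model

# hypothetical toy input: 8x8 feature maps with 128 channels
X_in = Input(shape=(8, 8, 128))
# stride s=2 halves the spatial size; the shortcut convolution raises the channels to 256
X_out = convolutional_block(X_in, f=3, filters=[64, 64, 256], stage=1, block='a', s=2)

print(Model(inputs=X_in, outputs=X_out).output_shape)   # (None, 4, 4, 256)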

Keras Implementation of the ResNet50 Residual Network

Once the residual blocks are built, the next step is to define the overall network structure and assemble the blocks into a residual network. Below we build a ResNet50 network, whose basic structure is:

CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK2 -> CONVBLOCK -> IDBLOCK3 -> CONVBLOCK -> IDBLOCK5 -> CONVBLOCK -> IDBLOCK2 -> AVGPOOL -> TOPLAYER

The implementation code is as follows:

from keras.layers import Input, ZeroPadding2D, MaxPooling2D, AveragePooling2D, Flatten, Dense
from keras.models import Model

def ResNet50(input_shape=(64, 64, 3), classes=6):

    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)
    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)
    # Stage 1
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)
    # Stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
    # Stage 3
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')
    X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')
    # Stage 4
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
    X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')
    # Stage 5
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
    X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
    X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')
    # AVGPOOL
    X = AveragePooling2D((2, 2), strides=(2, 2))(X)
    # output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer=glorot_uniform(seed=0))(X)
    # Create model
    model = Model(inputs=X_input, outputs=X, name='ResNet50')

    return model

With that, a ResNet50 residual network is built. The key is still building the residual blocks; once they are ready, we only need to follow the network structure and compose them into the full network. Along the way you can also appreciate how convenient Keras is as a deep learning framework.
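To close, a brief usage sketch (the 64x64x3 input and 6 classes are just the defaults above; X_train and Y_train are placeholder names, so the training call is only indicative):

model = ResNet50(input_shape=(64, 64, 3), classes=6)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

# hypothetical training call, assuming X_train / Y_train have been prepared elsewhere
# model.fit(X_train, Y_train, epochs=2, batch_size=32)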
