The model is resnet18 from PyTorch. Two training runs used the same dataset and the same hyperparameters, yet after 100 epochs the validation accuracy differed by 20 percentage points. This seems very strange. Can anyone help explain it?

1. The dataset was prepared ahead of time; the training and validation splits were made in advance and stored in separate files.

2. The experiment uses a knowledge-distillation loss to make a resnet18 model (the student), trained on a small amount of data, approximate another resnet18 model (the teacher) trained on a large amount of data. Both models classify samples into 100 classes, and the teacher's accuracy is around 97%. (A minimal sketch of such a loss follows the hyperparameter list below.)

3. The hyperparameters are as follows:

dataset mean = [0.5199, 0.4116, 0.361]

dataset std = [0.2604, 0.2297, 0.2169]

Train Batch size = 32

Val Batch size = 2

Number of epochs = 100

lr = 0.1

momentum = 0.9

weight_decay = 0.0001

lr_decay = 0.1

patience = 10

number of classes = 100

Training-set size = 2000

Validation-set size = 1000

T = 5
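For context, a minimal sketch of the temperature-T distillation loss described in point 2 (my own illustration; the exact loss function used here is not shown). T=5.0 matches the T listed above:

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=5.0):
    # KL divergence between temperature-softened class distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across temperatures
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction='batchmean') * T * T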

4. Below are the results of the two training runs. For the complete results see the answer I wrote, since question supplements have a 3000-character limit.

One training run: accuracy after 100 epochs was 78.6%

Training...

001/100 | 0:00:16 | Train : loss = 4.6046 | Val : acc@1 = 1.7%

002/100 | 0:00:30 | Train : loss = 4.5759 | Val : acc@1 = 3.1%

...

071/100 | 0:16:46 | Train : loss = 3.8599 | Val : acc@1 = 60.5%

072/100 | 0:17:00 | Train : loss = 3.8757 | Val : acc@1 = 62.3%

...

077/100 | 0:18:13 | Train : loss = 3.8174 | Val : acc@1 = 74.4%

078/100 | 0:18:27 | Train : loss = 3.8089 | Val : acc@1 = 75.6%

...

098/100 | 0:23:11 | Train : loss = 3.7849 | Val : acc@1 = 78.8%

099/100 | 0:23:25 | Train : loss = 3.7785 | Val : acc@1 = 78.2%

100/100 | 0:23:39 | Train : loss = 3.7726 | Val : acc@1 = 78.6%

Most subsequent runs ended around 62% after 100 epochs, for example this one, which reached 62.1%:

001/100 | 0:00:15 | Train : loss = 4.6036 | Val : acc@1 = 1.8%

002/100 | 0:00:28 | Train : loss = 4.5687 | Val : acc@1 = 3.1%

...

049/100 | 0:11:03 | Train : loss = 3.9535 | Val : acc@1 = 57.2%

050/100 | 0:11:17 | Train : loss = 3.9412 | Val : acc@1 = 57.2%

051/100 | 0:11:31 | Train : loss = 3.9342 | Val : acc@1 = 58.2%

052/100 | 0:11:44 | Train : loss = 3.9200 | Val : acc@1 = 58.6%

...

064/100 | 0:14:26 | Train : loss = 3.9201 | Val : acc@1 = 60.6%

065/100 | 0:14:40 | Train : loss = 3.8980 | Val : acc@1 = 61.5%

066/100 | 0:14:53 | Train : loss = 3.9023 | Val : acc@1 = 61.3%

067/100 | 0:15:07 | Train : loss = 3.9021 | Val : acc@1 = 61.6%

...

098/100 | 0:22:02 | Train : loss = 3.9131 | Val : acc@1 = 61.7%

099/100 | 0:22:16 | Train : loss = 3.9012 | Val : acc@1 = 62.5%

100/100 | 0:22:30 | Train : loss = 3.9169 | Val : acc@1 = 62.1%


The root cause is the random seed. Randomness enters in many places, and a different seed in any of them produces a different result. Concretely, adjust as follows:

1. Set all of the random seeds:

import random
import numpy as np
import torch

random.seed(config.seed)
np.random.seed(config.seed)
torch.manual_seed(config.seed)            # seeds the CPU RNG (and the current GPU)
torch.cuda.manual_seed_all(config.seed)   # seeds the RNGs of all GPUs

2. Turn on the deterministic settings:

torch.backends.cudnn.benchmark = False     # no autotuning of convolution algorithms
torch.backends.cudnn.deterministic = True  # use deterministic cuDNN kernels
torch.backends.cudnn.enabled = True

After these two steps, running the same code twice will, for the most part, produce the same final result.
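One caveat worth adding (my assumption, prompted by Number of workers = 2 in the supplement below): DataLoader worker processes keep their own RNG state, so fully reproducible data loading also needs the loader seeded. A minimal sketch following the standard PyTorch reproducibility recipe, where train_set stands for the pre-split training dataset:

import random
import numpy as np
import torch
from torch.utils.data import DataLoader

def seed_worker(worker_id):
    # derive a per-worker seed from the base seed PyTorch assigns this worker
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(config.seed)  # same config object as above

train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=2,
                          worker_init_fn=seed_worker, generator=g)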


The earlier answer mentioned the random seed, and I also think that is the cause, but not because it changes the initialization: it changes the outcome of every training epoch, so tiny early differences compound into a large final gap.

First, you are presumably using ReduceLROnPlateau with patience set to 10. But your log is missing one very important item: the learning rate at each epoch. In my experience with a Plateau-style schedule, because the order in which training data is drawn each epoch is random, the validation loss fluctuates somewhat from epoch to epoch. So in the first run the LR may have been cut by a factor of 0.1 at, say, epochs 30, 50, 61, 72, 83, and 94, while in the second run, through bad luck, the cuts may have started early, somewhere in the twenties, so that by around epoch 50 the LR had already been reduced 3 or 4 times and become extremely small. From then on the validation loss essentially stops decreasing, and the accuracy naturally stops improving.
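A minimal sketch of printing the learning rate each epoch alongside a ReduceLROnPlateau schedule (my own illustration, not the asker's code; model, train_loader, and val_loader are assumed to be defined elsewhere):

import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=0.0001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=10)

for epoch in range(100):
    train_loss = train_one_epoch(model, train_loader, optimizer)  # hypothetical helper
    val_loss, val_acc = validate(model, val_loader)               # hypothetical helper
    scheduler.step(val_loss)  # may cut the LR when val_loss plateaus
    lr = optimizer.param_groups[0]['lr']
    print(f'{epoch + 1:03d}/100 | lr = {lr:.6f} | val acc@1 = {val_acc:.1f}%')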

This is a weakness of the Plateau strategy: the validation loss makes the timing of LR drops uncertain, and once you choose it you have to expect that two runs of the same code can end differently. If you really want to reproduce the first result, I suggest supervising the training yourself: if the validation loss has stopped improving past the patience window but you do not want the LR reduced that early, pause training, restore the learning rate, load the saved checkpoint, and continue. Or record the epochs at which the LR dropped in the first run and drop it at those same epochs in the second run, instead of using the Plateau strategy. A sketch of that replay follows.
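Replaying the first run's drop epochs can be done with MultiStepLR; a sketch reusing the optimizer and helpers from the previous snippet, with illustrative milestone epochs:

scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30, 50, 61, 72, 83, 94], gamma=0.1)
for epoch in range(100):
    train_one_epoch(model, train_loader, optimizer)
    scheduler.step()  # steps by epoch count, independent of the validation loss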

Alternatively, I suggest a cosine learning-rate schedule. In my experience cosine is the least fuss; the only part that takes a few tries is choosing the number of training epochs in each cosine cycle.
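For reference, CosineAnnealingLR with a single cycle spanning all 100 epochs (T_max is the per-cycle epoch count that takes a few tries to pick), again reusing the names from the sketches above:

scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
for epoch in range(100):
    train_one_epoch(model, train_loader, optimizer)
    scheduler.step()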

Second, if it is not the learning-rate schedule, it may be nondeterminism in your data augmentation, or you changed something in the code that you assumed was harmless. That said, data augmentation alone rarely causes a run-to-run gap this large.


Question supplement; please help me take a look.

1. The dataset was prepared ahead of time; the training and validation splits were made in advance and stored in separate files.

2. The experiment uses a knowledge-distillation loss to make a resnet18 model (the student), trained on a small amount of data, approximate another resnet18 model (the teacher) trained on a large amount of data. Both models classify samples into 100 classes, and the teacher's accuracy is around 97%.

3. The hyperparameters are as follows:

dataset mean = [0.5199, 0.4116, 0.361]

dataset std = [0.2604, 0.2297, 0.2169]

Number of workers = 2

Train Batch size = 32

Val Batch size = 2

Number of epochs = 100

lr = 0.1

momentum = 0.9

weight_decay = 0.0001

lr_decay = 0.1

patience = 10

number of classes = 100

Training-set size = 2000

Validation-set size = 1000

T = 5

4. Below are the results of the two training runs

One training run: accuracy after 100 epochs was 78.6%

Training...

001/100 | 0:00:16 | Train : loss = 4.6046 | Val : acc@1 = 1.7% ; acc@5 = 9.7%

002/100 | 0:00:30 | Train : loss = 4.5759 | Val : acc@1 = 3.1% ; acc@5 = 11.1%

003/100 | 0:00:44 | Train : loss = 4.5373 | Val : acc@1 = 1.6% ; acc@5 = 8.9%

004/100 | 0:00:58 | Train : loss = 4.5146 | Val : acc@1 = 3.6% ; acc@5 = 15.4%

005/100 | 0:01:12 | Train : loss = 4.4948 | Val : acc@1 = 4.4% ; acc@5 = 21.4%

006/100 | 0:01:26 | Train : loss = 4.4736 | Val : acc@1 = 5.6% ; acc@5 = 24.1%

007/100 | 0:01:41 | Train : loss = 4.4546 | Val : acc@1 = 6.4% ; acc@5 = 26.7%

008/100 | 0:01:55 | Train : loss = 4.4456 | Val : acc@1 = 8.3% ; acc@5 = 28.7%

009/100 | 0:02:09 | Train : loss = 4.4228 | Val : acc@1 = 7.6% ; acc@5 = 30.3%

010/100 | 0:02:23 | Train : loss = 4.4117 | Val : acc@1 = 5.1% ; acc@5 = 19.8%

011/100 | 0:02:37 | Train : loss = 4.4025 | Val : acc@1 = 8.5% ; acc@5 = 26.8%

012/100 | 0:02:51 | Train : loss = 4.3947 | Val : acc@1 = 9.7% ; acc@5 = 32.6%

013/100 | 0:03:06 | Train : loss = 4.3571 | Val : acc@1 = 11.5% ; acc@5 = 34.5%

014/100 | 0:03:20 | Train : loss = 4.3636 | Val : acc@1 = 9.9% ; acc@5 = 34.4%

015/100 | 0:03:34 | Train : loss = 4.3589 | Val : acc@1 = 12.6% ; acc@5 = 39.7%

016/100 | 0:03:48 | Train : loss = 4.3231 | Val : acc@1 = 11.8% ; acc@5 = 33.1%

017/100 | 0:04:02 | Train : loss = 4.3212 | Val : acc@1 = 13.5% ; acc@5 = 38.4%

018/100 | 0:04:16 | Train : loss = 4.3067 | Val : acc@1 = 15.5% ; acc@5 = 43.3%

019/100 | 0:04:30 | Train : loss = 4.2919 | Val : acc@1 = 14.8% ; acc@5 = 40.0%

020/100 | 0:04:44 | Train : loss = 4.2681 | Val : acc@1 = 18.2% ; acc@5 = 46.0%

021/100 | 0:04:59 | Train : loss = 4.2581 | Val : acc@1 = 21.2% ; acc@5 = 50.8%

022/100 | 0:05:13 | Train : loss = 4.2518 | Val : acc@1 = 21.6% ; acc@5 = 51.5%

023/100 | 0:05:27 | Train : loss = 4.2404 | Val : acc@1 = 20.7% ; acc@5 = 46.1%

024/100 | 0:05:41 | Train : loss = 4.2226 | Val : acc@1 = 22.2% ; acc@5 = 53.2%

025/100 | 0:05:55 | Train : loss = 4.2069 | Val : acc@1 = 19.0% ; acc@5 = 52.3%

026/100 | 0:06:09 | Train : loss = 4.1834 | Val : acc@1 = 26.7% ; acc@5 = 57.7%

027/100 | 0:06:24 | Train : loss = 4.1941 | Val : acc@1 = 26.1% ; acc@5 = 52.6%

028/100 | 0:06:38 | Train : loss = 4.1675 | Val : acc@1 = 24.1% ; acc@5 = 54.5%

029/100 | 0:06:52 | Train : loss = 4.1796 | Val : acc@1 = 22.9% ; acc@5 = 52.9%

030/100 | 0:07:06 | Train : loss = 4.1345 | Val : acc@1 = 22.8% ; acc@5 = 51.5%

031/100 | 0:07:20 | Train : loss = 4.1511 | Val : acc@1 = 20.2% ; acc@5 = 46.3%

032/100 | 0:07:34 | Train : loss = 4.1124 | Val : acc@1 = 30.9% ; acc@5 = 66.7%

033/100 | 0:07:49 | Train : loss = 4.0982 | Val : acc@1 = 17.6% ; acc@5 = 47.9%

034/100 | 0:08:03 | Train : loss = 4.0977 | Val : acc@1 = 33.2% ; acc@5 = 67.1%

035/100 | 0:08:17 | Train : loss = 4.0997 | Val : acc@1 = 31.8% ; acc@5 = 63.6%

036/100 | 0:08:31 | Train : loss = 4.0760 | Val : acc@1 = 34.5% ; acc@5 = 71.9%

037/100 | 0:08:45 | Train : loss = 4.0684 | Val : acc@1 = 29.7% ; acc@5 = 59.9%

038/100 | 0:08:59 | Train : loss = 4.0725 | Val : acc@1 = 25.6% ; acc@5 = 61.1%

039/100 | 0:09:13 | Train : loss = 4.0523 | Val : acc@1 = 35.8% ; acc@5 = 72.3%

040/100 | 0:09:27 | Train : loss = 4.0483 | Val : acc@1 = 38.6% ; acc@5 = 71.1%

041/100 | 0:09:41 | Train : loss = 4.0260 | Val : acc@1 = 37.4% ; acc@5 = 70.4%

042/100 | 0:09:56 | Train : loss = 4.0225 | Val : acc@1 = 39.9% ; acc@5 = 74.1%

043/100 | 0:10:10 | Train : loss = 4.0265 | Val : acc@1 = 39.8% ; acc@5 = 72.0%

044/100 | 0:10:24 | Train : loss = 4.0203 | Val : acc@1 = 42.2% ; acc@5 = 75.4%

045/100 | 0:10:38 | Train : loss = 4.0043 | Val : acc@1 = 38.5% ; acc@5 = 71.0%

046/100 | 0:10:52 | Train : loss = 3.9939 | Val : acc@1 = 44.4% ; acc@5 = 76.9%

047/100 | 0:11:07 | Train : loss = 3.9959 | Val : acc@1 = 37.6% ; acc@5 = 69.7%

048/100 | 0:11:21 | Train : loss = 3.9825 | Val : acc@1 = 45.2% ; acc@5 = 79.7%

049/100 | 0:11:35 | Train : loss = 3.9745 | Val : acc@1 = 43.7% ; acc@5 = 78.0%

050/100 | 0:11:49 | Train : loss = 3.9590 | Val : acc@1 = 46.0% ; acc@5 = 77.2%

051/100 | 0:12:03 | Train : loss = 3.9569 | Val : acc@1 = 34.4% ; acc@5 = 71.4%

052/100 | 0:12:17 | Train : loss = 3.9616 | Val : acc@1 = 45.0% ; acc@5 = 76.1%

053/100 | 0:12:32 | Train : loss = 3.9437 | Val : acc@1 = 44.4% ; acc@5 = 77.6%

054/100 | 0:12:46 | Train : loss = 3.9285 | Val : acc@1 = 49.8% ; acc@5 = 81.7%

055/100 | 0:13:00 | Train : loss = 3.9386 | Val : acc@1 = 42.4% ; acc@5 = 74.3%

056/100 | 0:13:14 | Train : loss = 3.9333 | Val : acc@1 = 53.5% ; acc@5 = 83.9%

057/100 | 0:13:28 | Train : loss = 3.9202 | Val : acc@1 = 46.6% ; acc@5 = 79.3%

058/100 | 0:13:43 | Train : loss = 3.9143 | Val : acc@1 = 48.1% ; acc@5 = 80.8%

059/100 | 0:13:58 | Train : loss = 3.9062 | Val : acc@1 = 52.5% ; acc@5 = 83.6%

060/100 | 0:14:12 | Train : loss = 3.9085 | Val : acc@1 = 51.8% ; acc@5 = 83.7%

061/100 | 0:14:26 | Train : loss = 3.9161 | Val : acc@1 = 54.3% ; acc@5 = 83.5%

062/100 | 0:14:40 | Train : loss = 3.9055 | Val : acc@1 = 43.2% ; acc@5 = 76.4%

063/100 | 0:14:54 | Train : loss = 3.9153 | Val : acc@1 = 51.6% ; acc@5 = 83.7%

064/100 | 0:15:08 | Train : loss = 3.9030 | Val : acc@1 = 50.7% ; acc@5 = 84.6%

065/100 | 0:15:22 | Train : loss = 3.8756 | Val : acc@1 = 58.3% ; acc@5 = 86.5%

066/100 | 0:15:36 | Train : loss = 3.8870 | Val : acc@1 = 58.0% ; acc@5 = 86.4%

067/100 | 0:15:49 | Train : loss = 3.8840 | Val : acc@1 = 60.5% ; acc@5 = 88.3%

068/100 | 0:16:03 | Train : loss = 3.8636 | Val : acc@1 = 54.3% ; acc@5 = 86.6%

069/100 | 0:16:17 | Train : loss = 3.8765 | Val : acc@1 = 56.4% ; acc@5 = 85.8%

070/100 | 0:16:31 | Train : loss = 3.8624 | Val : acc@1 = 58.2% ; acc@5 = 85.7%

071/100 | 0:16:46 | Train : loss = 3.8599 | Val : acc@1 = 60.5% ; acc@5 = 87.5%

072/100 | 0:17:00 | Train : loss = 3.8757 | Val : acc@1 = 62.3% ; acc@5 = 88.5%

073/100 | 0:17:15 | Train : loss = 3.8559 | Val : acc@1 = 58.3% ; acc@5 = 86.5%

074/100 | 0:17:30 | Train : loss = 3.8317 | Val : acc@1 = 57.5% ; acc@5 = 86.5%

075/100 | 0:17:44 | Train : loss = 3.8413 | Val : acc@1 = 59.3% ; acc@5 = 85.5%

076/100 | 0:17:58 | Train : loss = 3.8557 | Val : acc@1 = 59.5% ; acc@5 = 87.6%

077/100 | 0:18:13 | Train : loss = 3.8174 | Val : acc@1 = 74.4% ; acc@5 = 94.0%

078/100 | 0:18:27 | Train : loss = 3.8089 | Val : acc@1 = 75.6% ; acc@5 = 94.3%

079/100 | 0:18:41 | Train : loss = 3.7826 | Val : acc@1 = 76.0% ; acc@5 = 93.7%

080/100 | 0:18:55 | Train : loss = 3.8054 | Val : acc@1 = 75.3% ; acc@5 = 94.6%

081/100 | 0:19:10 | Train : loss = 3.7963 | Val : acc@1 = 76.7% ; acc@5 = 94.8%

082/100 | 0:19:24 | Train : loss = 3.7776 | Val : acc@1 = 76.7% ; acc@5 = 94.9%

083/100 | 0:19:38 | Train : loss = 3.7802 | Val : acc@1 = 77.3% ; acc@5 = 94.6%

084/100 | 0:19:52 | Train : loss = 3.7819 | Val : acc@1 = 76.9% ; acc@5 = 94.8%

085/100 | 0:20:06 | Train : loss = 3.7879 | Val : acc@1 = 77.7% ; acc@5 = 94.4%

086/100 | 0:20:21 | Train : loss = 3.8019 | Val : acc@1 = 76.4% ; acc@5 = 94.7%

087/100 | 0:20:35 | Train : loss = 3.8010 | Val : acc@1 = 77.0% ; acc@5 = 94.9%

088/100 | 0:20:49 | Train : loss = 3.8025 | Val : acc@1 = 78.0% ; acc@5 = 95.3%

089/100 | 0:21:03 | Train : loss = 3.7841 | Val : acc@1 = 77.8% ; acc@5 = 95.1%

090/100 | 0:21:17 | Train : loss = 3.7745 | Val : acc@1 = 78.5% ; acc@5 = 94.9%

091/100 | 0:21:31 | Train : loss = 3.7813 | Val : acc@1 = 76.9% ; acc@5 = 95.1%

092/100 | 0:21:46 | Train : loss = 3.7765 | Val : acc@1 = 77.5% ; acc@5 = 95.5%

093/100 | 0:22:00 | Train : loss = 3.7822 | Val : acc@1 = 78.3% ; acc@5 = 94.8%

094/100 | 0:22:14 | Train : loss = 3.7639 | Val : acc@1 = 77.7% ; acc@5 = 95.1%

095/100 | 0:22:28 | Train : loss = 3.7833 | Val : acc@1 = 78.4% ; acc@5 = 95.5%

096/100 | 0:22:42 | Train : loss = 3.7877 | Val : acc@1 = 77.7% ; acc@5 = 95.1%

097/100 | 0:22:56 | Train : loss = 3.7814 | Val : acc@1 = 79.1% ; acc@5 = 94.8%

098/100 | 0:23:11 | Train : loss = 3.7849 | Val : acc@1 = 78.8% ; acc@5 = 94.9%

099/100 | 0:23:25 | Train : loss = 3.7785 | Val : acc@1 = 78.2% ; acc@5 = 94.9%

100/100 | 0:23:39 | Train : loss = 3.7726 | Val : acc@1 = 78.6% ; acc@5 = 95.3%

Most subsequent runs ended around 62% after 100 epochs, for example this one, which reached 62.1%:

001/100 | 0:00:15 | Train : loss = 4.6036 | Val : acc@1 = 1.8% ; acc@5 = 9.3%

002/100 | 0:00:28 | Train : loss = 4.5687 | Val : acc@1 = 3.1% ; acc@5 = 11.4%

003/100 | 0:00:42 | Train : loss = 4.5390 | Val : acc@1 = 3.3% ; acc@5 = 16.7%

004/100 | 0:00:55 | Train : loss = 4.5125 | Val : acc@1 = 4.6% ; acc@5 = 17.6%

005/100 | 0:01:09 | Train : loss = 4.4902 | Val : acc@1 = 4.9% ; acc@5 = 22.1%

006/100 | 0:01:22 | Train : loss = 4.4691 | Val : acc@1 = 3.8% ; acc@5 = 18.6%

007/100 | 0:01:35 | Train : loss = 4.4569 | Val : acc@1 = 6.5% ; acc@5 = 24.7%

008/100 | 0:01:49 | Train : loss = 4.4356 | Val : acc@1 = 8.1% ; acc@5 = 27.4%

009/100 | 0:02:02 | Train : loss = 4.4153 | Val : acc@1 = 6.6% ; acc@5 = 26.6%

010/100 | 0:02:15 | Train : loss = 4.3992 | Val : acc@1 = 7.3% ; acc@5 = 27.1%

011/100 | 0:02:29 | Train : loss = 4.3811 | Val : acc@1 = 10.0% ; acc@5 = 31.5%

012/100 | 0:02:42 | Train : loss = 4.3767 | Val : acc@1 = 11.7% ; acc@5 = 36.4%

013/100 | 0:02:56 | Train : loss = 4.3591 | Val : acc@1 = 10.5% ; acc@5 = 32.9%

014/100 | 0:03:09 | Train : loss = 4.3445 | Val : acc@1 = 9.7% ; acc@5 = 30.9%

015/100 | 0:03:22 | Train : loss = 4.3324 | Val : acc@1 = 13.5% ; acc@5 = 39.8%

016/100 | 0:03:36 | Train : loss = 4.3161 | Val : acc@1 = 11.7% ; acc@5 = 38.7%

017/100 | 0:03:49 | Train : loss = 4.3051 | Val : acc@1 = 14.5% ; acc@5 = 42.3%

018/100 | 0:04:03 | Train : loss = 4.3012 | Val : acc@1 = 14.8% ; acc@5 = 42.7%

019/100 | 0:04:16 | Train : loss = 4.2720 | Val : acc@1 = 13.8% ; acc@5 = 40.5%

020/100 | 0:04:30 | Train : loss = 4.2606 | Val : acc@1 = 16.5% ; acc@5 = 42.3%

021/100 | 0:04:43 | Train : loss = 4.2606 | Val : acc@1 = 18.7% ; acc@5 = 49.0%

022/100 | 0:04:57 | Train : loss = 4.2344 | Val : acc@1 = 18.5% ; acc@5 = 47.0%

023/100 | 0:05:10 | Train : loss = 4.2112 | Val : acc@1 = 20.2% ; acc@5 = 46.6%

024/100 | 0:05:24 | Train : loss = 4.2077 | Val : acc@1 = 23.4% ; acc@5 = 51.9%

025/100 | 0:05:38 | Train : loss = 4.2003 | Val : acc@1 = 23.2% ; acc@5 = 56.2%

026/100 | 0:05:51 | Train : loss = 4.1869 | Val : acc@1 = 24.9% ; acc@5 = 57.7%

027/100 | 0:06:04 | Train : loss = 4.1713 | Val : acc@1 = 26.4% ; acc@5 = 59.6%

028/100 | 0:06:18 | Train : loss = 4.1684 | Val : acc@1 = 19.5% ; acc@5 = 46.6%

029/100 | 0:06:31 | Train : loss = 4.1667 | Val : acc@1 = 25.5% ; acc@5 = 53.4%

030/100 | 0:06:45 | Train : loss = 4.1404 | Val : acc@1 = 20.5% ; acc@5 = 53.6%

031/100 | 0:06:58 | Train : loss = 4.1281 | Val : acc@1 = 26.0% ; acc@5 = 58.0%

032/100 | 0:07:12 | Train : loss = 4.1211 | Val : acc@1 = 31.3% ; acc@5 = 66.7%

033/100 | 0:07:25 | Train : loss = 4.1103 | Val : acc@1 = 25.9% ; acc@5 = 57.3%

034/100 | 0:07:39 | Train : loss = 4.1114 | Val : acc@1 = 29.7% ; acc@5 = 60.6%

035/100 | 0:07:52 | Train : loss = 4.0905 | Val : acc@1 = 35.8% ; acc@5 = 67.0%

036/100 | 0:08:06 | Train : loss = 4.0922 | Val : acc@1 = 32.7% ; acc@5 = 65.8%

037/100 | 0:08:20 | Train : loss = 4.0837 | Val : acc@1 = 35.7% ; acc@5 = 68.9%

038/100 | 0:08:34 | Train : loss = 4.0731 | Val : acc@1 = 32.7% ; acc@5 = 67.6%

039/100 | 0:08:47 | Train : loss = 4.0529 | Val : acc@1 = 29.4% ; acc@5 = 61.7%

040/100 | 0:09:01 | Train : loss = 4.0551 | Val : acc@1 = 38.8% ; acc@5 = 73.1%

041/100 | 0:09:14 | Train : loss = 4.0297 | Val : acc@1 = 42.8% ; acc@5 = 77.6%

042/100 | 0:09:28 | Train : loss = 4.0415 | Val : acc@1 = 38.2% ; acc@5 = 72.2%

043/100 | 0:09:42 | Train : loss = 4.0222 | Val : acc@1 = 41.1% ; acc@5 = 74.0%

044/100 | 0:09:55 | Train : loss = 3.9738 | Val : acc@1 = 54.8% ; acc@5 = 85.3%

045/100 | 0:10:09 | Train : loss = 3.9569 | Val : acc@1 = 55.3% ; acc@5 = 86.0%

046/100 | 0:10:23 | Train : loss = 3.9414 | Val : acc@1 = 56.5% ; acc@5 = 86.9%

047/100 | 0:10:36 | Train : loss = 3.9465 | Val : acc@1 = 57.9% ; acc@5 = 88.2%

048/100 | 0:10:50 | Train : loss = 3.9303 | Val : acc@1 = 57.7% ; acc@5 = 87.3%

049/100 | 0:11:03 | Train : loss = 3.9535 | Val : acc@1 = 57.2% ; acc@5 = 88.1%

050/100 | 0:11:17 | Train : loss = 3.9412 | Val : acc@1 = 57.2% ; acc@5 = 87.1%

051/100 | 0:11:31 | Train : loss = 3.9342 | Val : acc@1 = 58.2% ; acc@5 = 88.1%

052/100 | 0:11:44 | Train : loss = 3.9200 | Val : acc@1 = 58.6% ; acc@5 = 88.5%

053/100 | 0:11:58 | Train : loss = 3.9383 | Val : acc@1 = 58.2% ; acc@5 = 88.1%

054/100 | 0:12:11 | Train : loss = 3.9375 | Val : acc@1 = 58.8% ; acc@5 = 88.5%

055/100 | 0:12:25 | Train : loss = 3.9360 | Val : acc@1 = 58.7% ; acc@5 = 89.0%

056/100 | 0:12:38 | Train : loss = 3.9235 | Val : acc@1 = 59.1% ; acc@5 = 88.4%

057/100 | 0:12:52 | Train : loss = 3.9150 | Val : acc@1 = 58.3% ; acc@5 = 88.2%

058/100 | 0:13:05 | Train : loss = 3.9210 | Val : acc@1 = 58.8% ; acc@5 = 88.7%

059/100 | 0:13:19 | Train : loss = 3.9227 | Val : acc@1 = 59.0% ; acc@5 = 89.2%

060/100 | 0:13:32 | Train : loss = 3.9185 | Val : acc@1 = 58.5% ; acc@5 = 89.0%

061/100 | 0:13:46 | Train : loss = 3.9232 | Val : acc@1 = 59.1% ; acc@5 = 89.7%

062/100 | 0:14:00 | Train : loss = 3.9050 | Val : acc@1 = 61.0% ; acc@5 = 89.8%

063/100 | 0:14:13 | Train : loss = 3.9089 | Val : acc@1 = 60.1% ; acc@5 = 89.0%

064/100 | 0:14:26 | Train : loss = 3.9201 | Val : acc@1 = 60.6% ; acc@5 = 89.8%

065/100 | 0:14:40 | Train : loss = 3.8980 | Val : acc@1 = 61.5% ; acc@5 = 90.0%

066/100 | 0:14:53 | Train : loss = 3.9023 | Val : acc@1 = 61.3% ; acc@5 = 90.0%

067/100 | 0:15:07 | Train : loss = 3.9021 | Val : acc@1 = 61.6% ; acc@5 = 89.5%

068/100 | 0:15:20 | Train : loss = 3.8881 | Val : acc@1 = 61.4% ; acc@5 = 89.2%

069/100 | 0:15:33 | Train : loss = 3.9174 | Val : acc@1 = 61.7% ; acc@5 = 89.9%

070/100 | 0:15:47 | Train : loss = 3.9063 | Val : acc@1 = 61.7% ; acc@5 = 90.1%

071/100 | 0:16:00 | Train : loss = 3.9066 | Val : acc@1 = 62.0% ; acc@5 = 90.4%

072/100 | 0:16:14 | Train : loss = 3.9010 | Val : acc@1 = 62.4% ; acc@5 = 89.8%

073/100 | 0:16:27 | Train : loss = 3.9297 | Val : acc@1 = 61.8% ; acc@5 = 89.8%

074/100 | 0:16:40 | Train : loss = 3.9113 | Val : acc@1 = 61.5% ; acc@5 = 90.3%

075/100 | 0:16:54 | Train : loss = 3.9036 | Val : acc@1 = 61.8% ; acc@5 = 90.1%

076/100 | 0:17:07 | Train : loss = 3.8986 | Val : acc@1 = 62.1% ; acc@5 = 90.5%

077/100 | 0:17:21 | Train : loss = 3.9059 | Val : acc@1 = 62.2% ; acc@5 = 90.2%

078/100 | 0:17:34 | Train : loss = 3.9091 | Val : acc@1 = 62.1% ; acc@5 = 90.7%

079/100 | 0:17:47 | Train : loss = 3.9088 | Val : acc@1 = 61.6% ; acc@5 = 90.5%

080/100 | 0:18:01 | Train : loss = 3.9215 | Val : acc@1 = 61.7% ; acc@5 = 90.0%

081/100 | 0:18:14 | Train : loss = 3.8956 | Val : acc@1 = 61.8% ; acc@5 = 90.2%

082/100 | 0:18:28 | Train : loss = 3.9153 | Val : acc@1 = 61.7% ; acc@5 = 90.0%

083/100 | 0:18:41 | Train : loss = 3.9058 | Val : acc@1 = 62.0% ; acc@5 = 89.9%

084/100 | 0:18:55 | Train : loss = 3.8975 | Val : acc@1 = 62.0% ; acc@5 = 90.4%

085/100 | 0:19:08 | Train : loss = 3.9064 | Val : acc@1 = 61.4% ; acc@5 = 90.1%

086/100 | 0:19:21 | Train : loss = 3.8985 | Val : acc@1 = 61.7% ; acc@5 = 90.8%

087/100 | 0:19:35 | Train : loss = 3.8970 | Val : acc@1 = 62.0% ; acc@5 = 90.7%

088/100 | 0:19:48 | Train : loss = 3.9050 | Val : acc@1 = 61.6% ; acc@5 = 90.4%

089/100 | 0:20:02 | Train : loss = 3.9092 | Val : acc@1 = 61.3% ; acc@5 = 90.0%

090/100 | 0:20:15 | Train : loss = 3.8910 | Val : acc@1 = 61.4% ; acc@5 = 89.5%

091/100 | 0:20:28 | Train : loss = 3.9033 | Val : acc@1 = 62.0% ; acc@5 = 90.3%

092/100 | 0:20:42 | Train : loss = 3.9246 | Val : acc@1 = 61.5% ; acc@5 = 90.4%

093/100 | 0:20:55 | Train : loss = 3.9218 | Val : acc@1 = 61.7% ; acc@5 = 90.6%

094/100 | 0:21:08 | Train : loss = 3.9061 | Val : acc@1 = 62.0% ; acc@5 = 90.7%

095/100 | 0:21:22 | Train : loss = 3.9000 | Val : acc@1 = 62.1% ; acc@5 = 90.4%

096/100 | 0:21:36 | Train : loss = 3.9131 | Val : acc@1 = 61.2% ; acc@5 = 90.1%

097/100 | 0:21:49 | Train : loss = 3.9112 | Val : acc@1 = 61.6% ; acc@5 = 90.3%

098/100 | 0:22:02 | Train : loss = 3.9131 | Val : acc@1 = 61.7% ; acc@5 = 90.2%

099/100 | 0:22:16 | Train : loss = 3.9012 | Val : acc@1 = 62.5% ; acc@5 = 90.1%

100/100 | 0:22:30 | Train : loss = 3.9169 | Val : acc@1 = 62.1% ; acc@5 = 90.2%


I had assumed you were reproducing something from someone else's paper, in which case the author may have omitted details that actually affect the result. Since this is your own model trained on your own dataset, consider the following:

  1. What is your task? Are you sure ResNet 18 can solve it, or, more broadly, that a convolutional neural network is suited to it?
  2. Are the hyperparameters well chosen? For example, if the learning rate is too large, results tend to be poor across the board. You may want to re-examine your training strategy.
  3. Since you train for 100 epochs, have you saved the training loss and related metrics and plotted curves to analyze them (see the plotting sketch after this list)? It may be that the model is essentially well trained by around epoch 30; in the best case, further training just leaves the loss and validation accuracy in a stable plateau. But training may also enter an oscillating state, and it may simply happen that at epoch 100 the two runs differ substantially.
  4. Training has inherent randomness: weight initialization, per-batch shuffling, random sampling, and so on. You can try fixing the random seed so that these states match. In general, though, a good algorithm should not be overly sensitive to this.
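For point 3, a minimal plotting sketch, assuming train_losses and val_accs were collected as plain Python lists during training:

import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(train_losses)   # per-epoch training loss
ax1.set_xlabel('epoch'); ax1.set_ylabel('train loss')
ax2.plot(val_accs)       # per-epoch validation accuracy
ax2.set_xlabel('epoch'); ax2.set_ylabel('val acc@1 (%)')
fig.savefig('curves.png')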


Is the difference an improvement or a regression? My guess is a regression.

100 epochs is too many; the model has very likely overfit. Normally you would start with about 10 epochs and adjust around that.


The learning rate is on the high side, which carries some risk that the loss stops decreasing.


I also work on distillation and have run into similarly odd problems. In my experience, even though you say the hyperparameters were identical, I would still suspect an unintentional change to the hyperparameters (or the code). My suggestion: rerun both configurations, print every hyperparameter before training, save the log files, then diff the two logs. If you can reproduce the gap this way, the cause should be easy to pin down.
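A minimal sketch of the print-all-hyperparameters step, assuming the hyperparameters live in an argparse-style config object:

import argparse

def dump_config(config: argparse.Namespace) -> None:
    # write every hyperparameter to stdout, and hence into the saved log
    for name, value in sorted(vars(config).items()):
        print(f'{name} = {value}')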

