Accuracy and parameter settings #18

Open
whateverud opened this issue May 10, 2021 · 2 comments

@whateverud

Hello, when I run the mr dataset the accuracy does not reach the level reported in the paper; the highest I get is 74, and with the default parameters it is even lower. Could you share your parameter settings?
Environment:
Python 3.6.13
Tensorflow 1.12.0
Scipy 1.5.4
Parameter settings:
learning_rate 0.005
epochs 300
batch_size 128
input_dim 300
hidden 128
steps 1
dropout 0.5
weight_decay 0
early_stopping -1
max_degree 3
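
For reference, these settings would correspond to an invocation along the following lines; the flag names here are assumed to mirror the hyperparameter names above and may not match train.py exactly:

python train.py --dataset mr --learning_rate 0.005 --epochs 300 --batch_size 128 --input_dim 300 --hidden 128 --steps 1 --dropout 0.5 --weight_decay 0 --early_stopping -1 --max_degree 3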
Run log:
(tensorflow) D:\Python\text-ing1>python build_graph.py mr
using default window size = 3
using default unweighted graph
loading raw data
building graphs for training
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5879/5879 [00:05<00:00, 1087.83it/s]
building graphs for training + validation
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6532/6532 [00:04<00:00, 1564.62it/s]
building graphs for test
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2568/2568 [00:01<00:00, 1653.97it/s]
max_doc_length 340 min_doc_length 4 average 44.73
training_vocab 8695 test_vocab 7467 intersection 7270
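
Side note: the "window size = 3" above is the sliding co-occurrence window used to connect words inside each per-document graph. A minimal sketch of that idea (illustrative only, not the repo's actual build_graph.py):

from itertools import combinations

def doc_graph(tokens, window=3):
    # Undirected edges between words that co-occur within a sliding window.
    vocab = sorted(set(tokens))
    idx = {w: i for i, w in enumerate(vocab)}
    edges = set()
    for start in range(max(1, len(tokens) - window + 1)):
        for a, b in combinations(tokens[start:start + window], 2):
            if a != b:
                edges.add(tuple(sorted((idx[a], idx[b]))))
    return vocab, edges

print(doc_graph("the cat sat on the mat".split()))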

(tensorflow) D:\Python\text-ing1>python train.py --dataset mr
D:\Python\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
D:\Python\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
D:\Python\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
D:\Python\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
D:\Python\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
D:\Python\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
D:\Python\text-ing1\utils.py:82: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
train_adj = np.array(train_adj)
D:\Python\text-ing1\utils.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
val_adj = np.array(val_adj)
D:\Python\text-ing1\utils.py:84: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
test_adj = np.array(test_adj)
D:\Python\text-ing1\utils.py:85: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
train_embed = np.array(train_embed)
D:\Python\text-ing1\utils.py:86: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
val_embed = np.array(val_embed)
D:\Python\text-ing1\utils.py:87: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
test_embed = np.array(test_embed)
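Side note: the VisibleDeprecationWarnings above come from turning ragged, variable-length per-document lists into NumPy arrays; on NumPy >= 1.24 this becomes a hard error. Following the warning text, the lines at utils.py:82-87 can pass dtype=object explicitly:

import numpy as np

# Per-document adjacency matrices and embeddings have different shapes,
# so these are object arrays; dtype=object just makes that explicit.
train_adj = np.array(train_adj, dtype=object)
val_adj = np.array(val_adj, dtype=object)
test_adj = np.array(test_adj, dtype=object)
train_embed = np.array(train_embed, dtype=object)
val_embed = np.array(val_embed, dtype=object)
test_embed = np.array(test_embed, dtype=object)

This silences the warning without changing behaviour on this NumPy version.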
loading training set
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6398/6398 [00:00<00:00, 9130.35it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6398/6398 [00:00<00:00, 9834.68it/s]
loading validation set
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 710/710 [00:00<00:00, 8878.95it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 710/710 [00:00<00:00, 10148.16it/s]
loading test set
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3554/3554 [00:00<00:00, 9547.53it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3554/3554 [00:00<00:00, 9845.77it/s]
build...
WARNING:tensorflow:From D:\Python\text-ing1\metrics.py:6: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.
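
Side note: this deprecation is harmless on TF 1.12, but the call in metrics.py can be migrated as the message suggests. A minimal sketch, assuming a plain (unmasked) loss helper:

import tensorflow as tf

def softmax_xent(logits, labels):
    # The v2 op backpropagates into labels by default; stop_gradient
    # restores the old behaviour for fixed one-hot targets.
    return tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=tf.stop_gradient(labels), logits=logits)

Any masking or reduction wrapped around the original call stays unchanged.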

2021-05-09 11:32:07.199176: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
train start...
Epoch: 0000 train_loss= 0.69575 train_acc= 0.51094 val_loss= 0.69226 val_acc= 0.50000 test_acc= 0.50028 time= 10.06635
Epoch: 0001 train_loss= 0.69177 train_acc= 0.52563 val_loss= 0.68898 val_acc= 0.53239 test_acc= 0.55262 time= 9.37085
Epoch: 0002 train_loss= 0.68322 train_acc= 0.56268 val_loss= 0.67658 val_acc= 0.57746 test_acc= 0.58244 time= 9.41362
Epoch: 0003 train_loss= 0.66895 train_acc= 0.58815 val_loss= 0.66829 val_acc= 0.59859 test_acc= 0.60242 time= 9.41338
Epoch: 0004 train_loss= 0.66522 train_acc= 0.59409 val_loss= 0.69396 val_acc= 0.50563 test_acc= 0.50647 time= 9.43484
Epoch: 0005 train_loss= 0.66729 train_acc= 0.58190 val_loss= 0.66119 val_acc= 0.59155 test_acc= 0.59820 time= 9.36946
Epoch: 0006 train_loss= 0.65636 train_acc= 0.61066 val_loss= 0.66293 val_acc= 0.60000 test_acc= 0.58807 time= 9.52164
Epoch: 0007 train_loss= 0.65120 train_acc= 0.61488 val_loss= 0.66714 val_acc= 0.60986 test_acc= 0.59932 time= 9.53432
Epoch: 0008 train_loss= 0.65217 train_acc= 0.61550 val_loss= 0.66052 val_acc= 0.58873 test_acc= 0.59257 time= 9.43989
Epoch: 0009 train_loss= 0.63813 train_acc= 0.63051 val_loss= 0.64807 val_acc= 0.61408 test_acc= 0.60580 time= 9.46848
Epoch: 0010 train_loss= 0.64021 train_acc= 0.63567 val_loss= 0.64397 val_acc= 0.63239 test_acc= 0.61114 time= 9.39541
Epoch: 0011 train_loss= 0.63666 train_acc= 0.62988 val_loss= 0.65440 val_acc= 0.61972 test_acc= 0.60383 time= 9.45876
Epoch: 0012 train_loss= 0.63583 train_acc= 0.64004 val_loss= 0.65723 val_acc= 0.60845 test_acc= 0.59313 time= 9.48956
Epoch: 0013 train_loss= 0.63290 train_acc= 0.63598 val_loss= 0.62616 val_acc= 0.64085 test_acc= 0.63731 time= 9.57913
Epoch: 0014 train_loss= 0.62451 train_acc= 0.64708 val_loss= 0.63261 val_acc= 0.62113 test_acc= 0.62296 time= 9.42966
Epoch: 0015 train_loss= 0.62469 train_acc= 0.65646 val_loss= 0.63072 val_acc= 0.63099 test_acc= 0.62577 time= 9.48841
Epoch: 0016 train_loss= 0.61896 train_acc= 0.65927 val_loss= 0.62247 val_acc= 0.64507 test_acc= 0.62971 time= 9.36453
Epoch: 0017 train_loss= 0.62090 train_acc= 0.66068 val_loss= 0.63092 val_acc= 0.64507 test_acc= 0.62155 time= 9.44694
Epoch: 0018 train_loss= 0.62048 train_acc= 0.65505 val_loss= 0.61991 val_acc= 0.64648 test_acc= 0.63900 time= 9.50112
Epoch: 0019 train_loss= 0.62190 train_acc= 0.65442 val_loss= 0.62285 val_acc= 0.64085 test_acc= 0.64856 time= 9.43798
Epoch: 0020 train_loss= 0.61083 train_acc= 0.66599 val_loss= 0.62599 val_acc= 0.64507 test_acc= 0.63393 time= 9.43447
Epoch: 0021 train_loss= 0.60688 train_acc= 0.67115 val_loss= 0.62512 val_acc= 0.64366 test_acc= 0.63056 time= 9.45628
Epoch: 0022 train_loss= 0.60888 train_acc= 0.66005 val_loss= 0.62133 val_acc= 0.63521 test_acc= 0.64434 time= 9.69254
Epoch: 0023 train_loss= 0.59996 train_acc= 0.67521 val_loss= 0.61779 val_acc= 0.64789 test_acc= 0.63337 time= 9.93541
Epoch: 0024 train_loss= 0.59912 train_acc= 0.66943 val_loss= 0.60914 val_acc= 0.66338 test_acc= 0.64744 time= 9.59789
Epoch: 0025 train_loss= 0.60301 train_acc= 0.67709 val_loss= 0.61318 val_acc= 0.65070 test_acc= 0.63928 time= 9.61154
Epoch: 0026 train_loss= 0.60057 train_acc= 0.67240 val_loss= 0.63171 val_acc= 0.63099 test_acc= 0.62774 time= 9.48291
Epoch: 0027 train_loss= 0.60607 train_acc= 0.66802 val_loss= 0.61070 val_acc= 0.67324 test_acc= 0.65335 time= 9.44387
Epoch: 0028 train_loss= 0.60064 train_acc= 0.67631 val_loss= 0.61669 val_acc= 0.64225 test_acc= 0.63562 time= 9.32728
Epoch: 0029 train_loss= 0.59627 train_acc= 0.68037 val_loss= 0.62801 val_acc= 0.62817 test_acc= 0.62324 time= 9.49590
Epoch: 0030 train_loss= 0.60187 train_acc= 0.66443 val_loss= 0.61688 val_acc= 0.64789 test_acc= 0.63928 time= 9.39059
Epoch: 0031 train_loss= 0.59325 train_acc= 0.68162 val_loss= 0.60755 val_acc= 0.66620 test_acc= 0.65363 time= 9.46285
Epoch: 0032 train_loss= 0.59154 train_acc= 0.67865 val_loss= 0.59926 val_acc= 0.66479 test_acc= 0.66376 time= 9.43294
Epoch: 0033 train_loss= 0.59496 train_acc= 0.67505 val_loss= 0.60086 val_acc= 0.67042 test_acc= 0.66179 time= 9.31140
Epoch: 0034 train_loss= 0.58943 train_acc= 0.67896 val_loss= 0.61667 val_acc= 0.66479 test_acc= 0.63759 time= 9.38982
Epoch: 0035 train_loss= 0.59080 train_acc= 0.68568 val_loss= 0.59542 val_acc= 0.67465 test_acc= 0.65279 time= 9.42639
Epoch: 0036 train_loss= 0.58282 train_acc= 0.69209 val_loss= 0.61697 val_acc= 0.65493 test_acc= 0.63928 time= 9.38227
Epoch: 0037 train_loss= 0.58181 train_acc= 0.68115 val_loss= 0.60567 val_acc= 0.66056 test_acc= 0.64688 time= 9.40806
Epoch: 0038 train_loss= 0.57703 train_acc= 0.69866 val_loss= 0.58810 val_acc= 0.69014 test_acc= 0.66038 time= 9.41947
Epoch: 0039 train_loss= 0.57983 train_acc= 0.69006 val_loss= 0.58503 val_acc= 0.68310 test_acc= 0.66545 time= 9.42167
Epoch: 0040 train_loss= 0.57052 train_acc= 0.70038 val_loss= 0.58115 val_acc= 0.68169 test_acc= 0.66826 time= 9.34553
Epoch: 0041 train_loss= 0.57329 train_acc= 0.69459 val_loss= 0.59113 val_acc= 0.68451 test_acc= 0.66207 time= 9.35682
Epoch: 0042 train_loss= 0.57334 train_acc= 0.69303 val_loss= 0.59184 val_acc= 0.68310 test_acc= 0.65250 time= 9.41156
Epoch: 0043 train_loss= 0.57295 train_acc= 0.70522 val_loss= 0.58733 val_acc= 0.68028 test_acc= 0.65447 time= 9.38891
Epoch: 0044 train_loss= 0.56855 train_acc= 0.69928 val_loss= 0.58302 val_acc= 0.70000 test_acc= 0.66517 time= 9.46548
Epoch: 0045 train_loss= 0.57541 train_acc= 0.69162 val_loss= 0.58648 val_acc= 0.67465 test_acc= 0.66854 time= 9.56229
Epoch: 0046 train_loss= 0.56680 train_acc= 0.70678 val_loss= 0.58039 val_acc= 0.68310 test_acc= 0.67107 time= 9.53400
Epoch: 0047 train_loss= 0.56518 train_acc= 0.69928 val_loss= 0.59025 val_acc= 0.67887 test_acc= 0.65701 time= 9.57426
Epoch: 0048 train_loss= 0.55901 train_acc= 0.71085 val_loss= 0.57852 val_acc= 0.69014 test_acc= 0.66995 time= 9.49446
Epoch: 0049 train_loss= 0.55809 train_acc= 0.70350 val_loss= 0.58617 val_acc= 0.68028 test_acc= 0.66545 time= 9.50039
Epoch: 0050 train_loss= 0.56116 train_acc= 0.70725 val_loss= 0.61384 val_acc= 0.65775 test_acc= 0.63506 time= 9.37594
Epoch: 0051 train_loss= 0.55758 train_acc= 0.70460 val_loss= 0.56670 val_acc= 0.69577 test_acc= 0.67614 time= 9.44596
Epoch: 0052 train_loss= 0.55347 train_acc= 0.70866 val_loss= 0.59473 val_acc= 0.67465 test_acc= 0.65813 time= 9.35023
Epoch: 0053 train_loss= 0.55434 train_acc= 0.70944 val_loss= 0.61905 val_acc= 0.66338 test_acc= 0.64660 time= 9.45921
Epoch: 0054 train_loss= 0.54661 train_acc= 0.71616 val_loss= 0.57742 val_acc= 0.69296 test_acc= 0.67501 time= 9.36394
Epoch: 0055 train_loss= 0.54844 train_acc= 0.72570 val_loss= 0.58679 val_acc= 0.67746 test_acc= 0.65841 time= 9.39498
Epoch: 0056 train_loss= 0.54415 train_acc= 0.72445 val_loss= 0.59058 val_acc= 0.68028 test_acc= 0.67023 time= 9.48117
Epoch: 0057 train_loss= 0.54069 train_acc= 0.72507 val_loss= 0.56899 val_acc= 0.70282 test_acc= 0.67811 time= 9.39584
Epoch: 0058 train_loss= 0.54571 train_acc= 0.72351 val_loss= 0.56660 val_acc= 0.69014 test_acc= 0.67642 time= 9.41704
Epoch: 0059 train_loss= 0.54001 train_acc= 0.72570 val_loss= 0.57387 val_acc= 0.68451 test_acc= 0.67304 time= 9.44571
Epoch: 0060 train_loss= 0.54911 train_acc= 0.71569 val_loss= 0.57932 val_acc= 0.67746 test_acc= 0.66348 time= 9.45830
Epoch: 0061 train_loss= 0.53369 train_acc= 0.73398 val_loss= 0.56237 val_acc= 0.71408 test_acc= 0.68796 time= 9.42036
Epoch: 0062 train_loss= 0.53597 train_acc= 0.73038 val_loss= 0.56399 val_acc= 0.71127 test_acc= 0.68261 time= 9.42772
Epoch: 0063 train_loss= 0.53227 train_acc= 0.73148 val_loss= 0.57755 val_acc= 0.69859 test_acc= 0.66939 time= 9.52330
Epoch: 0064 train_loss= 0.51820 train_acc= 0.73601 val_loss= 0.56583 val_acc= 0.70986 test_acc= 0.68627 time= 9.43127
Epoch: 0065 train_loss= 0.52492 train_acc= 0.72741 val_loss= 0.57010 val_acc= 0.69859 test_acc= 0.67839 time= 9.38220
Epoch: 0066 train_loss= 0.52673 train_acc= 0.73257 val_loss= 0.56701 val_acc= 0.70141 test_acc= 0.68486 time= 9.43901
Epoch: 0067 train_loss= 0.53470 train_acc= 0.73304 val_loss= 0.57407 val_acc= 0.70282 test_acc= 0.68092 time= 9.36912
Epoch: 0068 train_loss= 0.52497 train_acc= 0.73070 val_loss= 0.55444 val_acc= 0.71268 test_acc= 0.68965 time= 9.43445
Epoch: 0069 train_loss= 0.52388 train_acc= 0.74117 val_loss= 0.58293 val_acc= 0.68310 test_acc= 0.66798 time= 9.43005
Epoch: 0070 train_loss= 0.52789 train_acc= 0.72945 val_loss= 0.58188 val_acc= 0.68592 test_acc= 0.67276 time= 9.45276
Epoch: 0071 train_loss= 0.52093 train_acc= 0.73726 val_loss= 0.57112 val_acc= 0.70141 test_acc= 0.68064 time= 9.42288
Epoch: 0072 train_loss= 0.51113 train_acc= 0.73961 val_loss= 0.56504 val_acc= 0.69296 test_acc= 0.68542 time= 9.36708
Epoch: 0073 train_loss= 0.51124 train_acc= 0.74992 val_loss= 0.56535 val_acc= 0.69577 test_acc= 0.68768 time= 9.37870
Epoch: 0074 train_loss= 0.51491 train_acc= 0.74258 val_loss= 0.55237 val_acc= 0.71831 test_acc= 0.69555 time= 9.41619
Epoch: 0075 train_loss= 0.50697 train_acc= 0.75133 val_loss= 0.55362 val_acc= 0.71549 test_acc= 0.69527 time= 9.38604
Epoch: 0076 train_loss= 0.51111 train_acc= 0.74773 val_loss= 0.54947 val_acc= 0.71408 test_acc= 0.69837 time= 9.49435
Epoch: 0077 train_loss= 0.50315 train_acc= 0.74992 val_loss= 0.55632 val_acc= 0.71408 test_acc= 0.69977 time= 9.40648
Epoch: 0078 train_loss= 0.50476 train_acc= 0.74648 val_loss= 0.55781 val_acc= 0.71268 test_acc= 0.69162 time= 9.39114
Epoch: 0079 train_loss= 0.50042 train_acc= 0.74977 val_loss= 0.55939 val_acc= 0.71549 test_acc= 0.69696 time= 9.49069
Epoch: 0080 train_loss= 0.49152 train_acc= 0.75711 val_loss= 0.56260 val_acc= 0.71690 test_acc= 0.69612 time= 9.43299
Epoch: 0081 train_loss= 0.50067 train_acc= 0.75430 val_loss= 0.56319 val_acc= 0.70563 test_acc= 0.68993 time= 9.49183
Epoch: 0082 train_loss= 0.48840 train_acc= 0.76008 val_loss= 0.57394 val_acc= 0.70704 test_acc= 0.69415 time= 9.44209
Epoch: 0083 train_loss= 0.48515 train_acc= 0.76164 val_loss= 0.56156 val_acc= 0.71268 test_acc= 0.69471 time= 9.43980
Epoch: 0084 train_loss= 0.48637 train_acc= 0.76196 val_loss= 0.58453 val_acc= 0.70845 test_acc= 0.69443 time= 9.41256
Epoch: 0085 train_loss= 0.47967 train_acc= 0.76493 val_loss= 0.55377 val_acc= 0.72254 test_acc= 0.69949 time= 9.39790
Epoch: 0086 train_loss= 0.48991 train_acc= 0.75367 val_loss= 0.58508 val_acc= 0.70563 test_acc= 0.69246 time= 9.45006
Epoch: 0087 train_loss= 0.48204 train_acc= 0.76211 val_loss= 0.55074 val_acc= 0.71972 test_acc= 0.70090 time= 9.40147
Epoch: 0088 train_loss= 0.47860 train_acc= 0.76383 val_loss= 0.56268 val_acc= 0.71690 test_acc= 0.70343 time= 9.43493
Epoch: 0089 train_loss= 0.47832 train_acc= 0.76571 val_loss= 0.57382 val_acc= 0.70563 test_acc= 0.69049 time= 9.47057
Epoch: 0090 train_loss= 0.47783 train_acc= 0.76586 val_loss= 0.57692 val_acc= 0.69577 test_acc= 0.68824 time= 9.38049
Epoch: 0091 train_loss= 0.47617 train_acc= 0.76633 val_loss= 0.56344 val_acc= 0.71972 test_acc= 0.69921 time= 9.47777
Epoch: 0092 train_loss= 0.47945 train_acc= 0.76790 val_loss= 0.57406 val_acc= 0.70986 test_acc= 0.69668 time= 9.40344
Epoch: 0093 train_loss= 0.47006 train_acc= 0.77102 val_loss= 0.55573 val_acc= 0.71831 test_acc= 0.69977 time= 9.46768
Epoch: 0094 train_loss= 0.46662 train_acc= 0.77259 val_loss= 0.58422 val_acc= 0.69718 test_acc= 0.69865 time= 9.42955
Epoch: 0095 train_loss= 0.46412 train_acc= 0.77384 val_loss= 0.55243 val_acc= 0.70423 test_acc= 0.70934 time= 9.44401
Epoch: 0096 train_loss= 0.46291 train_acc= 0.77602 val_loss= 0.58821 val_acc= 0.71268 test_acc= 0.69302 time= 9.38259
Epoch: 0097 train_loss= 0.45973 train_acc= 0.77165 val_loss= 0.57701 val_acc= 0.70845 test_acc= 0.69133 time= 9.36779
Epoch: 0098 train_loss= 0.45685 train_acc= 0.77931 val_loss= 0.59032 val_acc= 0.70423 test_acc= 0.69724 time= 9.45963
Epoch: 0099 train_loss= 0.45767 train_acc= 0.78056 val_loss= 0.58403 val_acc= 0.70423 test_acc= 0.70597 time= 9.41978
Epoch: 0100 train_loss= 0.46881 train_acc= 0.77243 val_loss= 0.58420 val_acc= 0.70282 test_acc= 0.69246 time= 9.40690
Epoch: 0101 train_loss= 0.45332 train_acc= 0.78650 val_loss= 0.59482 val_acc= 0.71408 test_acc= 0.69105 time= 9.45697
Epoch: 0102 train_loss= 0.46016 train_acc= 0.78118 val_loss= 0.57547 val_acc= 0.71831 test_acc= 0.70850 time= 9.35378
Epoch: 0103 train_loss= 0.45195 train_acc= 0.78321 val_loss= 0.55837 val_acc= 0.70986 test_acc= 0.70653 time= 9.36887
Epoch: 0104 train_loss= 0.44663 train_acc= 0.78446 val_loss= 0.57434 val_acc= 0.70423 test_acc= 0.70174 time= 9.35993
Epoch: 0105 train_loss= 0.44569 train_acc= 0.78618 val_loss= 0.58374 val_acc= 0.71549 test_acc= 0.70146 time= 9.33158
Epoch: 0106 train_loss= 0.44866 train_acc= 0.78587 val_loss= 0.56979 val_acc= 0.72113 test_acc= 0.70371 time= 9.50261
Epoch: 0107 train_loss= 0.42919 train_acc= 0.79337 val_loss= 0.58493 val_acc= 0.72958 test_acc= 0.69893 time= 9.42851
Epoch: 0108 train_loss= 0.44082 train_acc= 0.79431 val_loss= 0.55284 val_acc= 0.72394 test_acc= 0.70203 time= 9.42461
Epoch: 0109 train_loss= 0.42901 train_acc= 0.79884 val_loss= 0.58290 val_acc= 0.72676 test_acc= 0.70568 time= 9.49824
Epoch: 0110 train_loss= 0.43164 train_acc= 0.79415 val_loss= 0.57066 val_acc= 0.72394 test_acc= 0.71019 time= 9.37873
Epoch: 0111 train_loss= 0.43608 train_acc= 0.79619 val_loss= 0.55648 val_acc= 0.73662 test_acc= 0.71384 time= 9.40850
Epoch: 0112 train_loss= 0.43360 train_acc= 0.79462 val_loss= 0.54401 val_acc= 0.73239 test_acc= 0.70709 time= 9.35584
Epoch: 0113 train_loss= 0.43198 train_acc= 0.79697 val_loss= 0.58489 val_acc= 0.71127 test_acc= 0.70625 time= 9.41200
Epoch: 0114 train_loss= 0.42983 train_acc= 0.79384 val_loss= 0.57864 val_acc= 0.72676 test_acc= 0.70287 time= 9.43206
Epoch: 0115 train_loss= 0.40749 train_acc= 0.81197 val_loss= 0.60357 val_acc= 0.71831 test_acc= 0.70597 time= 9.34099
Epoch: 0116 train_loss= 0.41109 train_acc= 0.80416 val_loss= 0.57735 val_acc= 0.72254 test_acc= 0.71075 time= 9.41332
Epoch: 0117 train_loss= 0.41575 train_acc= 0.81041 val_loss= 0.57531 val_acc= 0.73521 test_acc= 0.70878 time= 9.44986
Epoch: 0118 train_loss= 0.42112 train_acc= 0.80119 val_loss= 0.58868 val_acc= 0.72113 test_acc= 0.71019 time= 9.42124
Epoch: 0119 train_loss= 0.41584 train_acc= 0.80994 val_loss= 0.56866 val_acc= 0.72113 test_acc= 0.71300 time= 9.57050
Epoch: 0120 train_loss= 0.41819 train_acc= 0.80494 val_loss= 0.56076 val_acc= 0.73521 test_acc= 0.71412 time= 9.67463
Epoch: 0121 train_loss= 0.41302 train_acc= 0.80416 val_loss= 0.58956 val_acc= 0.71268 test_acc= 0.71047 time= 9.37353
Epoch: 0122 train_loss= 0.42278 train_acc= 0.80275 val_loss= 0.58229 val_acc= 0.71690 test_acc= 0.71328 time= 9.31625
Epoch: 0123 train_loss= 0.41020 train_acc= 0.80807 val_loss= 0.56670 val_acc= 0.71972 test_acc= 0.72032 time= 9.43863
Epoch: 0124 train_loss= 0.39935 train_acc= 0.81166 val_loss= 0.60529 val_acc= 0.70563 test_acc= 0.71609 time= 9.37355
Epoch: 0125 train_loss= 0.40125 train_acc= 0.81260 val_loss= 0.58211 val_acc= 0.71831 test_acc= 0.71497 time= 9.42178
Epoch: 0126 train_loss= 0.39861 train_acc= 0.81166 val_loss= 0.58454 val_acc= 0.72817 test_acc= 0.71384 time= 9.49300
Epoch: 0127 train_loss= 0.39411 train_acc= 0.81666 val_loss= 0.59354 val_acc= 0.72394 test_acc= 0.71384 time= 9.80655
Epoch: 0128 train_loss= 0.40381 train_acc= 0.80744 val_loss= 0.59680 val_acc= 0.71972 test_acc= 0.71750 time= 9.37681
Epoch: 0129 train_loss= 0.39638 train_acc= 0.81713 val_loss= 0.59846 val_acc= 0.71972 test_acc= 0.71806 time= 9.41873
Epoch: 0130 train_loss= 0.39646 train_acc= 0.81682 val_loss= 0.58924 val_acc= 0.72535 test_acc= 0.72032 time= 9.46292
Epoch: 0131 train_loss= 0.39027 train_acc= 0.81947 val_loss= 0.58117 val_acc= 0.73099 test_acc= 0.72003 time= 9.33863
Epoch: 0132 train_loss= 0.38905 train_acc= 0.82448 val_loss= 0.59530 val_acc= 0.73803 test_acc= 0.72032 time= 9.40483
Epoch: 0133 train_loss= 0.39510 train_acc= 0.81229 val_loss= 0.59878 val_acc= 0.72394 test_acc= 0.72032 time= 9.49056
Epoch: 0134 train_loss= 0.38096 train_acc= 0.82495 val_loss= 0.58953 val_acc= 0.72254 test_acc= 0.72257 time= 9.41398
Epoch: 0135 train_loss= 0.38859 train_acc= 0.81947 val_loss= 0.58494 val_acc= 0.71408 test_acc= 0.71525 time= 9.41143
Epoch: 0136 train_loss= 0.39587 train_acc= 0.81760 val_loss= 0.58340 val_acc= 0.72817 test_acc= 0.71638 time= 9.31381
Epoch: 0137 train_loss= 0.37793 train_acc= 0.82510 val_loss= 0.59162 val_acc= 0.72676 test_acc= 0.71891 time= 9.30900
Epoch: 0138 train_loss= 0.37971 train_acc= 0.82776 val_loss= 0.59937 val_acc= 0.73099 test_acc= 0.71750 time= 9.38446
Epoch: 0139 train_loss= 0.37744 train_acc= 0.82917 val_loss= 0.60426 val_acc= 0.72535 test_acc= 0.71750 time= 9.35224
Epoch: 0140 train_loss= 0.37850 train_acc= 0.82791 val_loss= 0.58454 val_acc= 0.72113 test_acc= 0.71947 time= 9.42728
Epoch: 0141 train_loss= 0.37923 train_acc= 0.82323 val_loss= 0.60706 val_acc= 0.71972 test_acc= 0.71778 time= 9.56808
Epoch: 0142 train_loss= 0.37662 train_acc= 0.82526 val_loss= 0.59700 val_acc= 0.72958 test_acc= 0.72032 time= 9.32104
Epoch: 0143 train_loss= 0.36976 train_acc= 0.83323 val_loss= 0.60116 val_acc= 0.72254 test_acc= 0.72060 time= 9.38784
Epoch: 0144 train_loss= 0.36209 train_acc= 0.83589 val_loss= 0.61184 val_acc= 0.72958 test_acc= 0.72116 time= 9.37734
Epoch: 0145 train_loss= 0.37256 train_acc= 0.82979 val_loss= 0.58463 val_acc= 0.72254 test_acc= 0.72397 time= 9.31005
Epoch: 0146 train_loss= 0.36009 train_acc= 0.83886 val_loss= 0.61629 val_acc= 0.72817 test_acc= 0.71835 time= 9.50525
Epoch: 0147 train_loss= 0.37229 train_acc= 0.83088 val_loss= 0.61193 val_acc= 0.72113 test_acc= 0.72088 time= 9.38858
Epoch: 0148 train_loss= 0.36914 train_acc= 0.83104 val_loss= 0.60582 val_acc= 0.73239 test_acc= 0.72257 time= 9.36295
Epoch: 0149 train_loss= 0.36389 train_acc= 0.83651 val_loss= 0.59865 val_acc= 0.71831 test_acc= 0.72482 time= 9.35121
Epoch: 0150 train_loss= 0.36667 train_acc= 0.83385 val_loss= 0.57042 val_acc= 0.73099 test_acc= 0.72341 time= 9.33676
Epoch: 0151 train_loss= 0.35544 train_acc= 0.83807 val_loss= 0.64527 val_acc= 0.70282 test_acc= 0.71553 time= 9.42994
Epoch: 0152 train_loss= 0.36471 train_acc= 0.83276 val_loss= 0.59884 val_acc= 0.72113 test_acc= 0.71919 time= 9.42044
Epoch: 0153 train_loss= 0.35266 train_acc= 0.84417 val_loss= 0.59181 val_acc= 0.73521 test_acc= 0.72341 time= 9.28569
Epoch: 0154 train_loss= 0.36347 train_acc= 0.83323 val_loss= 0.61167 val_acc= 0.72254 test_acc= 0.72735 time= 9.38037
Epoch: 0155 train_loss= 0.36133 train_acc= 0.83854 val_loss= 0.59357 val_acc= 0.72535 test_acc= 0.72622 time= 9.41030
Epoch: 0156 train_loss= 0.34430 train_acc= 0.85308 val_loss= 0.63279 val_acc= 0.71549 test_acc= 0.72228 time= 9.31758
Epoch: 0157 train_loss= 0.35987 train_acc= 0.83698 val_loss= 0.60655 val_acc= 0.72394 test_acc= 0.72988 time= 9.41044
Epoch: 0158 train_loss= 0.35081 train_acc= 0.84386 val_loss= 0.65924 val_acc= 0.70423 test_acc= 0.72088 time= 9.96002
Epoch: 0159 train_loss= 0.35186 train_acc= 0.83854 val_loss= 0.59736 val_acc= 0.71549 test_acc= 0.72847 time= 9.88755
Epoch: 0160 train_loss= 0.34767 train_acc= 0.83886 val_loss= 0.62272 val_acc= 0.72817 test_acc= 0.72622 time= 9.39469
Epoch: 0161 train_loss= 0.34599 train_acc= 0.84620 val_loss= 0.63145 val_acc= 0.72535 test_acc= 0.72369 time= 9.34634
Epoch: 0162 train_loss= 0.34574 train_acc= 0.84745 val_loss= 0.62908 val_acc= 0.72817 test_acc= 0.72819 time= 9.31153
Epoch: 0163 train_loss= 0.33907 train_acc= 0.84745 val_loss= 0.62480 val_acc= 0.73239 test_acc= 0.72819 time= 9.39163
Epoch: 0164 train_loss= 0.34932 train_acc= 0.84323 val_loss= 0.61683 val_acc= 0.72817 test_acc= 0.72707 time= 9.35730
Epoch: 0165 train_loss= 0.33863 train_acc= 0.84745 val_loss= 0.60814 val_acc= 0.72394 test_acc= 0.72960 time= 9.43012
Epoch: 0166 train_loss= 0.33578 train_acc= 0.85183 val_loss= 0.63145 val_acc= 0.72535 test_acc= 0.73298 time= 9.33584
Epoch: 0167 train_loss= 0.33308 train_acc= 0.85058 val_loss= 0.61737 val_acc= 0.73521 test_acc= 0.73129 time= 9.34226
Epoch: 0168 train_loss= 0.34508 train_acc= 0.84448 val_loss= 0.59861 val_acc= 0.72817 test_acc= 0.73382 time= 9.36551
Epoch: 0169 train_loss= 0.33622 train_acc= 0.85027 val_loss= 0.61087 val_acc= 0.72254 test_acc= 0.73101 time= 9.42611
Epoch: 0170 train_loss= 0.32623 train_acc= 0.85277 val_loss= 0.65388 val_acc= 0.71690 test_acc= 0.72566 time= 9.33569
Epoch: 0171 train_loss= 0.32377 train_acc= 0.85605 val_loss= 0.65708 val_acc= 0.73662 test_acc= 0.72904 time= 9.44486
Epoch: 0172 train_loss= 0.33807 train_acc= 0.84448 val_loss= 0.61732 val_acc= 0.73380 test_acc= 0.72707 time= 9.35273
Epoch: 0173 train_loss= 0.33269 train_acc= 0.84948 val_loss= 0.60996 val_acc= 0.73099 test_acc= 0.73016 time= 9.41739
Epoch: 0174 train_loss= 0.32402 train_acc= 0.85417 val_loss= 0.62004 val_acc= 0.73662 test_acc= 0.72904 time= 9.43093
Epoch: 0175 train_loss= 0.32745 train_acc= 0.85214 val_loss= 0.64181 val_acc= 0.71549 test_acc= 0.73523 time= 9.36108
Epoch: 0176 train_loss= 0.32169 train_acc= 0.85886 val_loss= 0.65001 val_acc= 0.72817 test_acc= 0.73157 time= 9.42805
Epoch: 0177 train_loss= 0.31034 train_acc= 0.86480 val_loss= 0.63860 val_acc= 0.74085 test_acc= 0.73016 time= 9.35984
Epoch: 0178 train_loss= 0.33070 train_acc= 0.85058 val_loss= 0.60783 val_acc= 0.72535 test_acc= 0.73298 time= 9.35859
Epoch: 0179 train_loss= 0.31777 train_acc= 0.85824 val_loss= 0.63734 val_acc= 0.73380 test_acc= 0.73101 time= 9.43445
Epoch: 0180 train_loss= 0.31881 train_acc= 0.85777 val_loss= 0.65004 val_acc= 0.73521 test_acc= 0.73326 time= 9.35490
Epoch: 0181 train_loss= 0.32317 train_acc= 0.85917 val_loss= 0.63719 val_acc= 0.74225 test_acc= 0.73185 time= 9.34799
Epoch: 0182 train_loss= 0.32433 train_acc= 0.85714 val_loss= 0.64764 val_acc= 0.73662 test_acc= 0.73270 time= 9.39804
Epoch: 0183 train_loss= 0.31378 train_acc= 0.86105 val_loss= 0.65028 val_acc= 0.73239 test_acc= 0.73073 time= 9.33598
Epoch: 0184 train_loss= 0.31487 train_acc= 0.85964 val_loss= 0.63196 val_acc= 0.73380 test_acc= 0.72735 time= 9.44273
Epoch: 0185 train_loss= 0.30901 train_acc= 0.86183 val_loss= 0.66464 val_acc= 0.73662 test_acc= 0.73438 time= 9.31948
Epoch: 0186 train_loss= 0.31742 train_acc= 0.86199 val_loss= 0.63261 val_acc= 0.72958 test_acc= 0.73607 time= 9.38919
Epoch: 0187 train_loss= 0.30934 train_acc= 0.86590 val_loss= 0.62695 val_acc= 0.73380 test_acc= 0.73213 time= 9.40234
Epoch: 0188 train_loss= 0.31406 train_acc= 0.86277 val_loss= 0.64771 val_acc= 0.74085 test_acc= 0.73185 time= 9.39779
Epoch: 0189 train_loss= 0.30987 train_acc= 0.86261 val_loss= 0.66118 val_acc= 0.72817 test_acc= 0.73016 time= 9.29694
Epoch: 0190 train_loss= 0.30590 train_acc= 0.86793 val_loss= 0.66389 val_acc= 0.73380 test_acc= 0.73129 time= 9.36289
Epoch: 0191 train_loss= 0.31167 train_acc= 0.86433 val_loss= 0.63139 val_acc= 0.73662 test_acc= 0.73438 time= 9.39836
Epoch: 0192 train_loss= 0.29010 train_acc= 0.87293 val_loss= 0.69216 val_acc= 0.73521 test_acc= 0.73382 time= 9.51976
Epoch: 0193 train_loss= 0.29448 train_acc= 0.87480 val_loss= 0.66219 val_acc= 0.73239 test_acc= 0.73354 time= 9.59536
Epoch: 0194 train_loss= 0.30787 train_acc= 0.86465 val_loss= 0.65270 val_acc= 0.73099 test_acc= 0.73213 time= 9.36797
Epoch: 0195 train_loss= 0.30018 train_acc= 0.86730 val_loss= 0.67225 val_acc= 0.73099 test_acc= 0.73354 time= 9.44877
Epoch: 0196 train_loss= 0.30359 train_acc= 0.86918 val_loss= 0.61512 val_acc= 0.73521 test_acc= 0.73917 time= 9.34018
Epoch: 0197 train_loss= 0.30450 train_acc= 0.86824 val_loss= 0.65564 val_acc= 0.72817 test_acc= 0.72988 time= 9.40780
Epoch: 0198 train_loss= 0.29891 train_acc= 0.86715 val_loss= 0.64581 val_acc= 0.72817 test_acc= 0.73382 time= 9.41069
Epoch: 0199 train_loss= 0.29888 train_acc= 0.86418 val_loss= 0.68361 val_acc= 0.73521 test_acc= 0.73889 time= 9.42118
Epoch: 0200 train_loss= 0.30234 train_acc= 0.86808 val_loss= 0.64843 val_acc= 0.73521 test_acc= 0.73438 time= 9.37062
Epoch: 0201 train_loss= 0.30521 train_acc= 0.86652 val_loss= 0.63632 val_acc= 0.74648 test_acc= 0.73270 time= 9.37372
Epoch: 0202 train_loss= 0.29607 train_acc= 0.87027 val_loss= 0.67864 val_acc= 0.73662 test_acc= 0.73298 time= 9.30866
Epoch: 0203 train_loss= 0.30031 train_acc= 0.86668 val_loss= 0.66764 val_acc= 0.72676 test_acc= 0.73270 time= 9.42382
Epoch: 0204 train_loss= 0.29469 train_acc= 0.87168 val_loss= 0.66391 val_acc= 0.71972 test_acc= 0.73326 time= 9.34886
Epoch: 0205 train_loss= 0.29236 train_acc= 0.87449 val_loss= 0.65517 val_acc= 0.73099 test_acc= 0.73016 time= 9.39549
Epoch: 0206 train_loss= 0.28222 train_acc= 0.87934 val_loss= 0.71691 val_acc= 0.72817 test_acc= 0.73101 time= 9.36884
Epoch: 0207 train_loss= 0.28236 train_acc= 0.87699 val_loss= 0.68498 val_acc= 0.72535 test_acc= 0.73129 time= 9.35433
Epoch: 0208 train_loss= 0.29061 train_acc= 0.87777 val_loss= 0.71091 val_acc= 0.72254 test_acc= 0.72904 time= 9.36703
Epoch: 0209 train_loss= 0.29422 train_acc= 0.87355 val_loss= 0.65503 val_acc= 0.73099 test_acc= 0.72988 time= 9.35378
Epoch: 0210 train_loss= 0.28849 train_acc= 0.87480 val_loss= 0.65143 val_acc= 0.73239 test_acc= 0.73410 time= 9.45285
Epoch: 0211 train_loss= 0.28198 train_acc= 0.88199 val_loss= 0.65315 val_acc= 0.73099 test_acc= 0.73579 time= 9.37534
Epoch: 0212 train_loss= 0.29390 train_acc= 0.87246 val_loss= 0.63479 val_acc= 0.74225 test_acc= 0.73241 time= 9.33378
Epoch: 0213 train_loss= 0.28111 train_acc= 0.88262 val_loss= 0.66012 val_acc= 0.73662 test_acc= 0.73044 time= 9.33752
Epoch: 0214 train_loss= 0.28540 train_acc= 0.87527 val_loss= 0.63557 val_acc= 0.73944 test_acc= 0.73663 time= 9.35667
Epoch: 0215 train_loss= 0.29046 train_acc= 0.87168 val_loss= 0.67847 val_acc= 0.73803 test_acc= 0.74029 time= 9.37935
Epoch: 0216 train_loss= 0.27732 train_acc= 0.88278 val_loss= 0.64426 val_acc= 0.74366 test_acc= 0.73354 time= 9.69392
Epoch: 0217 train_loss= 0.28077 train_acc= 0.87668 val_loss= 0.66215 val_acc= 0.73521 test_acc= 0.73467 time= 9.36394
Epoch: 0218 train_loss= 0.27585 train_acc= 0.88074 val_loss= 0.70318 val_acc= 0.73099 test_acc= 0.73467 time= 9.38656
Epoch: 0219 train_loss= 0.27480 train_acc= 0.88184 val_loss= 0.72657 val_acc= 0.72676 test_acc= 0.72791 time= 9.51133
Epoch: 0220 train_loss= 0.27627 train_acc= 0.88184 val_loss= 0.66763 val_acc= 0.72394 test_acc= 0.73748 time= 9.41609
Epoch: 0221 train_loss= 0.28462 train_acc= 0.87527 val_loss= 0.62838 val_acc= 0.74085 test_acc= 0.73354 time= 9.61491
Epoch: 0222 train_loss= 0.28093 train_acc= 0.87777 val_loss= 0.67483 val_acc= 0.73239 test_acc= 0.73044 time= 9.61457
Epoch: 0223 train_loss= 0.28075 train_acc= 0.88028 val_loss= 0.61219 val_acc= 0.74225 test_acc= 0.73073 time= 9.47224
Epoch: 0224 train_loss= 0.26586 train_acc= 0.88981 val_loss= 0.69529 val_acc= 0.73099 test_acc= 0.73044 time= 9.38987
Epoch: 0225 train_loss= 0.28823 train_acc= 0.87512 val_loss= 0.66700 val_acc= 0.72535 test_acc= 0.73073 time= 9.45238
Epoch: 0226 train_loss= 0.26763 train_acc= 0.88512 val_loss= 0.69256 val_acc= 0.73521 test_acc= 0.73270 time= 9.34147
Epoch: 0227 train_loss= 0.26706 train_acc= 0.88731 val_loss= 0.69621 val_acc= 0.72535 test_acc= 0.74198 time= 9.38819
Epoch: 0228 train_loss= 0.27970 train_acc= 0.87981 val_loss= 0.67239 val_acc= 0.71408 test_acc= 0.73776 time= 9.40750
Epoch: 0229 train_loss= 0.28627 train_acc= 0.88121 val_loss= 0.67689 val_acc= 0.72958 test_acc= 0.73241 time= 9.43458
Epoch: 0230 train_loss= 0.27617 train_acc= 0.88512 val_loss= 0.62532 val_acc= 0.74507 test_acc= 0.73213 time= 9.49115
Epoch: 0231 train_loss= 0.26751 train_acc= 0.88621 val_loss= 0.67229 val_acc= 0.72958 test_acc= 0.73635 time= 9.37430
Epoch: 0232 train_loss= 0.25995 train_acc= 0.88903 val_loss= 0.67166 val_acc= 0.73380 test_acc= 0.73467 time= 9.42031
Epoch: 0233 train_loss= 0.27531 train_acc= 0.88465 val_loss= 0.67780 val_acc= 0.72817 test_acc= 0.73438 time= 9.37990
Epoch: 0234 train_loss= 0.26727 train_acc= 0.88778 val_loss= 0.65669 val_acc= 0.73803 test_acc= 0.73270 time= 9.49047
Epoch: 0235 train_loss= 0.25700 train_acc= 0.89059 val_loss= 0.73397 val_acc= 0.73239 test_acc= 0.72876 time= 9.43395
Epoch: 0236 train_loss= 0.27693 train_acc= 0.87949 val_loss= 0.72715 val_acc= 0.71831 test_acc= 0.72819 time= 9.37171
Epoch: 0237 train_loss= 0.26260 train_acc= 0.88590 val_loss= 0.66580 val_acc= 0.73099 test_acc= 0.73270 time= 9.40740
Epoch: 0238 train_loss= 0.25863 train_acc= 0.88997 val_loss= 0.66178 val_acc= 0.74085 test_acc= 0.74057 time= 9.35542
Epoch: 0239 train_loss= 0.25929 train_acc= 0.89028 val_loss= 0.69102 val_acc= 0.72958 test_acc= 0.73832 time= 9.33036
Epoch: 0240 train_loss= 0.26485 train_acc= 0.88840 val_loss= 0.65980 val_acc= 0.73380 test_acc= 0.73917 time= 9.35281
Epoch: 0241 train_loss= 0.26637 train_acc= 0.88715 val_loss= 0.69797 val_acc= 0.72676 test_acc= 0.73635 time= 9.37982
Epoch: 0242 train_loss= 0.26549 train_acc= 0.88621 val_loss= 0.70432 val_acc= 0.72817 test_acc= 0.74198 time= 9.36356
Epoch: 0243 train_loss= 0.25673 train_acc= 0.89262 val_loss= 0.63438 val_acc= 0.74085 test_acc= 0.74142 time= 9.34668
Epoch: 0244 train_loss= 0.26590 train_acc= 0.88981 val_loss= 0.62542 val_acc= 0.73803 test_acc= 0.73804 time= 9.40314
Epoch: 0245 train_loss= 0.25687 train_acc= 0.88981 val_loss= 0.66705 val_acc= 0.73803 test_acc= 0.74311 time= 9.26673
Epoch: 0246 train_loss= 0.25020 train_acc= 0.89215 val_loss= 0.67204 val_acc= 0.73944 test_acc= 0.73832 time= 9.40202
Epoch: 0247 train_loss= 0.26998 train_acc= 0.88621 val_loss= 0.67130 val_acc= 0.73662 test_acc= 0.74142 time= 9.40909
Epoch: 0248 train_loss= 0.25466 train_acc= 0.88778 val_loss= 0.69664 val_acc= 0.74085 test_acc= 0.74001 time= 9.37671
Epoch: 0249 train_loss= 0.25122 train_acc= 0.89325 val_loss= 0.64389 val_acc= 0.74648 test_acc= 0.73945 time= 9.34132
Epoch: 0250 train_loss= 0.25811 train_acc= 0.89309 val_loss= 0.67433 val_acc= 0.74225 test_acc= 0.74226 time= 9.39195
Epoch: 0251 train_loss= 0.25919 train_acc= 0.89106 val_loss= 0.63643 val_acc= 0.73099 test_acc= 0.73973 time= 9.34966
Epoch: 0252 train_loss= 0.25793 train_acc= 0.89090 val_loss= 0.65691 val_acc= 0.73521 test_acc= 0.74114 time= 9.41477
Epoch: 0253 train_loss= 0.24874 train_acc= 0.89309 val_loss= 0.67438 val_acc= 0.74507 test_acc= 0.73804 time= 9.36380
Epoch: 0254 train_loss= 0.24812 train_acc= 0.89419 val_loss= 0.70219 val_acc= 0.72394 test_acc= 0.74114 time= 9.41626
Epoch: 0255 train_loss= 0.24409 train_acc= 0.89559 val_loss= 0.69852 val_acc= 0.73944 test_acc= 0.73973 time= 9.36738
Epoch: 0256 train_loss= 0.24873 train_acc= 0.89606 val_loss= 0.66105 val_acc= 0.74225 test_acc= 0.73748 time= 9.37715
Epoch: 0257 train_loss= 0.25190 train_acc= 0.89559 val_loss= 0.68010 val_acc= 0.73380 test_acc= 0.73945 time= 9.35769
Epoch: 0258 train_loss= 0.26159 train_acc= 0.88965 val_loss= 0.63750 val_acc= 0.73803 test_acc= 0.74282 time= 9.35684
Epoch: 0259 train_loss= 0.24228 train_acc= 0.89809 val_loss= 0.70676 val_acc= 0.73944 test_acc= 0.73776 time= 9.34766
Epoch: 0260 train_loss= 0.25195 train_acc= 0.89372 val_loss= 0.67741 val_acc= 0.72676 test_acc= 0.74451 time= 9.40812
Epoch: 0261 train_loss= 0.24696 train_acc= 0.89262 val_loss= 0.68972 val_acc= 0.73521 test_acc= 0.74508 time= 9.35904
Epoch: 0262 train_loss= 0.24756 train_acc= 0.89497 val_loss= 0.66983 val_acc= 0.73803 test_acc= 0.74451 time= 9.42902
Epoch: 0263 train_loss= 0.23576 train_acc= 0.90013 val_loss= 0.72569 val_acc= 0.73944 test_acc= 0.74367 time= 9.39227
Epoch: 0264 train_loss= 0.24694 train_acc= 0.89387 val_loss= 0.68797 val_acc= 0.72958 test_acc= 0.74198 time= 9.41038
Epoch: 0265 train_loss= 0.25045 train_acc= 0.89184 val_loss= 0.69657 val_acc= 0.72817 test_acc= 0.73832 time= 9.44044
Epoch: 0266 train_loss= 0.24415 train_acc= 0.89622 val_loss= 0.68202 val_acc= 0.72535 test_acc= 0.74311 time= 9.36992
Epoch: 0267 train_loss= 0.24140 train_acc= 0.89825 val_loss= 0.69179 val_acc= 0.74085 test_acc= 0.74451 time= 9.41314
Epoch: 0268 train_loss= 0.24556 train_acc= 0.89981 val_loss= 0.69714 val_acc= 0.72676 test_acc= 0.74114 time= 9.41774
Epoch: 0269 train_loss= 0.24127 train_acc= 0.89825 val_loss= 0.68328 val_acc= 0.73521 test_acc= 0.74029 time= 9.34645
Epoch: 0270 train_loss= 0.23652 train_acc= 0.89950 val_loss= 0.71008 val_acc= 0.73239 test_acc= 0.74367 time= 9.30617
Epoch: 0271 train_loss= 0.23571 train_acc= 0.90138 val_loss= 0.70447 val_acc= 0.75211 test_acc= 0.74282 time= 9.65783
Epoch: 0272 train_loss= 0.22610 train_acc= 0.90622 val_loss= 0.71326 val_acc= 0.72817 test_acc= 0.74311 time= 9.28041
Epoch: 0273 train_loss= 0.24877 train_acc= 0.89716 val_loss= 0.66448 val_acc= 0.73803 test_acc= 0.74282 time= 9.44393
Epoch: 0274 train_loss= 0.22312 train_acc= 0.90606 val_loss= 0.72372 val_acc= 0.73944 test_acc= 0.74479 time= 9.44585
Epoch: 0275 train_loss= 0.23680 train_acc= 0.90200 val_loss= 0.71687 val_acc= 0.71831 test_acc= 0.73495 time= 9.39549
Epoch: 0276 train_loss= 0.24296 train_acc= 0.89684 val_loss= 0.68692 val_acc= 0.73239 test_acc= 0.74648 time= 9.42836
Epoch: 0277 train_loss= 0.23027 train_acc= 0.90513 val_loss= 0.69398 val_acc= 0.73521 test_acc= 0.74282 time= 9.26210
Epoch: 0278 train_loss= 0.23032 train_acc= 0.90403 val_loss= 0.70078 val_acc= 0.73239 test_acc= 0.74451 time= 9.37999
Epoch: 0279 train_loss= 0.23837 train_acc= 0.90560 val_loss= 0.69338 val_acc= 0.73521 test_acc= 0.74226 time= 9.40872
Epoch: 0280 train_loss= 0.23402 train_acc= 0.90138 val_loss= 0.68410 val_acc= 0.73099 test_acc= 0.74648 time= 9.31526
Epoch: 0281 train_loss= 0.23569 train_acc= 0.90091 val_loss= 0.70239 val_acc= 0.73099 test_acc= 0.73917 time= 9.38258
Epoch: 0282 train_loss= 0.23820 train_acc= 0.90309 val_loss= 0.70577 val_acc= 0.73521 test_acc= 0.74282 time= 9.46656
Epoch: 0283 train_loss= 0.24047 train_acc= 0.89950 val_loss= 0.65842 val_acc= 0.73380 test_acc= 0.74395 time= 9.37246
Epoch: 0284 train_loss= 0.22730 train_acc= 0.90450 val_loss= 0.70678 val_acc= 0.73944 test_acc= 0.73945 time= 9.51553
Epoch: 0285 train_loss= 0.23061 train_acc= 0.90685 val_loss= 0.71630 val_acc= 0.73239 test_acc= 0.74142 time= 9.31654
Epoch: 0286 train_loss= 0.24047 train_acc= 0.89809 val_loss= 0.67403 val_acc= 0.73239 test_acc= 0.74029 time= 9.38103
Epoch: 0287 train_loss= 0.22682 train_acc= 0.91044 val_loss= 0.68074 val_acc= 0.74225 test_acc= 0.74114 time= 9.41290
Epoch: 0288 train_loss= 0.23378 train_acc= 0.90075 val_loss= 0.68912 val_acc= 0.73662 test_acc= 0.74001 time= 9.37079
Epoch: 0289 train_loss= 0.23816 train_acc= 0.90372 val_loss= 0.72509 val_acc= 0.72394 test_acc= 0.73804 time= 9.47469
Epoch: 0290 train_loss= 0.23620 train_acc= 0.89934 val_loss= 0.69028 val_acc= 0.75352 test_acc= 0.73945 time= 9.41381
Epoch: 0291 train_loss= 0.22896 train_acc= 0.89966 val_loss= 0.67896 val_acc= 0.74648 test_acc= 0.74114 time= 9.35215
Epoch: 0292 train_loss= 0.22673 train_acc= 0.90544 val_loss= 0.70703 val_acc= 0.73662 test_acc= 0.74564 time= 9.44908
Epoch: 0293 train_loss= 0.22722 train_acc= 0.90341 val_loss= 0.70001 val_acc= 0.73662 test_acc= 0.73889 time= 9.38804
Epoch: 0294 train_loss= 0.23273 train_acc= 0.90435 val_loss= 0.71646 val_acc= 0.73380 test_acc= 0.74086 time= 9.35751
Epoch: 0295 train_loss= 0.23544 train_acc= 0.90700 val_loss= 0.70393 val_acc= 0.74930 test_acc= 0.74339 time= 9.45055
Epoch: 0296 train_loss= 0.22091 train_acc= 0.90763 val_loss= 0.68733 val_acc= 0.74648 test_acc= 0.74226 time= 9.33286
Epoch: 0297 train_loss= 0.22432 train_acc= 0.90935 val_loss= 0.70470 val_acc= 0.75352 test_acc= 0.74282 time= 9.45516
Epoch: 0298 train_loss= 0.22138 train_acc= 0.90638 val_loss= 0.69942 val_acc= 0.73944 test_acc= 0.74198 time= 9.38376
Epoch: 0299 train_loss= 0.22233 train_acc= 0.90982 val_loss= 0.72312 val_acc= 0.73521 test_acc= 0.74086 time= 9.50871
Optimization Finished!
Best epoch: 297
Test set results: cost= 0.78937 accuracy= 0.74282
Test Precision, Recall and F1-Score...
              precision    recall  f1-score   support

           0     0.7372    0.7546    0.7458      1777
           1     0.7487    0.7310    0.7397      1777

    accuracy                         0.7428      3554
   macro avg     0.7430    0.7428    0.7428      3554
weighted avg     0.7430    0.7428    0.7428      3554

Macro average Test Precision, Recall and F1-Score...
(0.7429607109077572, 0.7428249859313449, 0.7427890645389335, None)
Micro average Test Precision, Recall and F1-Score...
(0.7428249859313449, 0.7428249859313449, 0.742824985931345, None)
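
Side note: the two 4-tuples above match the output format of sklearn.metrics.precision_recall_fscore_support (the trailing None is the per-class support, which is omitted whenever an average is requested). With the test labels and predictions in hand, they could be reproduced with:

from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 0, 1, 1]   # placeholder labels; the run above used the 3554 test docs
y_pred = [0, 1, 1, 1]   # placeholder predictions
# average='macro' / 'micro' collapse per-class scores into a single tuple.
print(precision_recall_fscore_support(y_true, y_pred, average='macro'))
print(precision_recall_fscore_support(y_true, y_pred, average='micro'))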

@Magicat128
Collaborator

@whateverud
Hello, it looks like there may be some problem with how the accuracy is being computed. Have you tried running it on another machine, for example a Linux system?

@susanhhhhhh

Hi, looking at the val loss here, isn't this overfitting? I'm not sure how to fix it.
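
For reference, the run above used weight_decay 0 and early_stopping -1, so dropout is the only regularizer; the falling train loss against a rising val loss is consistent with overfitting. A minimal early-stopping sketch, independent of the repo's actual training loop (run_epoch and its return values are illustrative assumptions, not the repo's API):

def train_with_early_stopping(run_epoch, max_epochs=300, patience=10):
    # run_epoch(epoch) is assumed to return (val_loss, test_acc)
    # for one pass over the training set plus evaluation.
    best_loss, best_acc, bad = float("inf"), 0.0, 0
    for epoch in range(max_epochs):
        val_loss, test_acc = run_epoch(epoch)
        if val_loss < best_loss:
            best_loss, best_acc, bad = val_loss, test_acc, 0
        else:
            bad += 1
            if bad >= patience:  # stop once val loss stalls
                print("early stopping at epoch", epoch)
                break
    return best_acc

Raising dropout or using a small nonzero weight_decay (e.g. 5e-4) would be the other obvious knobs to try.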
