parameters setting #23

Open
LiangqingZhang opened this issue May 13, 2022 · 0 comments
Comments

@LiangqingZhang

Hello. With the default configuration provided, I can only reach an accuracy of about 0.7 on MR.
I also tried some of the settings suggested under this issue, as follows:
Environment:
Python 3.6.13
Tensorflow 1.11.0
Scipy 1.5.4
Hyperparameters:
learning_rate 0.005
epochs 300
batch_size 256
input_dim 300
hidden 128
steps 1
dropout 0.5
weight_decay 0
early_stopping -1
max_degree 3
Even with these settings I still cannot reproduce the results reported in the paper.
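For reference, here is a minimal sketch of how settings like mine are typically declared in a GCN-style TensorFlow 1.x training script; the flag names mirror my list above but are assumptions, not necessarily the exact definitions in TextING's train.py:

```python
import tensorflow as tf

# Hypothetical tf.app.flags declarations mirroring the settings above;
# the actual names and defaults in train.py may differ.
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_float('learning_rate', 0.005, 'Initial learning rate.')
flags.DEFINE_integer('epochs', 300, 'Number of training epochs.')
flags.DEFINE_integer('batch_size', 256, 'Mini-batch size.')
flags.DEFINE_integer('input_dim', 300, 'Dimension of input word embeddings.')
flags.DEFINE_integer('hidden', 128, 'Number of units in hidden layers.')
flags.DEFINE_integer('steps', 1, 'Number of graph propagation steps.')
flags.DEFINE_float('dropout', 0.5, 'Dropout rate (1 - keep probability).')
flags.DEFINE_float('weight_decay', 0.0, 'Weight of L2 regularization.')
flags.DEFINE_integer('early_stopping', -1, 'Early-stopping patience; -1 disables it.')
flags.DEFINE_integer('max_degree', 3, 'Maximum Chebyshev polynomial degree.')
```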
Here is my run log:
D:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
D:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
D:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
D:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
D:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
D:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
E:\desktop\TextING-master\utils.py:82: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
train_adj = np.array(train_adj)
E:\desktop\TextING-master\utils.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
val_adj = np.array(val_adj)
E:\desktop\TextING-master\utils.py:84: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
test_adj = np.array(test_adj)
E:\desktop\TextING-master\utils.py:85: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
train_embed = np.array(train_embed)
E:\desktop\TextING-master\utils.py:86: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
val_embed = np.array(val_embed)
E:\desktop\TextING-master\utils.py:87: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
test_embed = np.array(test_embed)
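(These warnings come from building NumPy arrays out of ragged per-document graphs in utils.py. The fix the warning itself suggests is to make the object dtype explicit; a minimal self-contained sketch with toy ragged data, assuming the lists really are variable-length:)

```python
import numpy as np

# Toy ragged input standing in for the per-document adjacency matrices
# and embeddings built in utils.py; documents have different lengths.
train_adj = [np.eye(2), np.eye(3)]
train_embed = [np.zeros((2, 300)), np.zeros((3, 300))]

# Explicit dtype=object keeps the old behaviour and silences the warning.
train_adj = np.array(train_adj, dtype=object)
train_embed = np.array(train_embed, dtype=object)
print(train_adj.dtype, train_adj.shape)  # object (2,)
```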
loading training set
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6398/6398 [00:00<00:00, 10601.33it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6398/6398 [00:00<00:00, 11214.96it/s]
loading validation set
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 710/710 [00:00<00:00, 10162.46it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 710/710 [00:00<00:00, 12274.92it/s]
loading test set
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3554/3554 [00:00<00:00, 10605.50it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3554/3554 [00:00<00:00, 11161.50it/s]
build...
WARNING:tensorflow:From E:\desktop\TextING-master\metrics.py:6: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.
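(The replacement TensorFlow points to here is the _v2 op. A sketch of a migrated helper, assuming metrics.py computes a mean softmax loss roughly like this; the function name is hypothetical, and tf.stop_gradient restores the old behaviour of not backpropagating into the labels:)

```python
import tensorflow as tf

def softmax_cross_entropy(preds, labels):
    """Hypothetical helper migrating metrics.py to the non-deprecated op."""
    # _v2 allows gradients to flow into the labels; stop_gradient keeps
    # the old semantics of softmax_cross_entropy_with_logits.
    loss = tf.nn.softmax_cross_entropy_with_logits_v2(
        logits=preds, labels=tf.stop_gradient(labels))
    return tf.reduce_mean(loss)
```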

2022-05-13 10:29:41.954666: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
train start...
Epoch: 0001 train_loss= 0.68974 train_acc= 0.52032 val_loss= 0.70149 val_acc= 0.50563 test_acc= 0.54811 time= 5.35665
Epoch: 0002 train_loss= 0.68966 train_acc= 0.51766 val_loss= 0.69056 val_acc= 0.50563 test_acc= 0.54446 time= 4.99060
Epoch: 0003 train_loss= 0.68522 train_acc= 0.51454 val_loss= 0.69074 val_acc= 0.52394 test_acc= 0.51379 time= 4.98833
Epoch: 0004 train_loss= 0.68161 train_acc= 0.53063 val_loss= 0.68953 val_acc= 0.50282 test_acc= 0.54615 time= 4.96672
Epoch: 0005 train_loss= 0.67835 train_acc= 0.53814 val_loss= 0.68977 val_acc= 0.52676 test_acc= 0.51041 time= 4.95613
Epoch: 0006 train_loss= 0.67787 train_acc= 0.53892 val_loss= 0.68808 val_acc= 0.53099 test_acc= 0.51379 time= 4.97210
Epoch: 0007 train_loss= 0.67805 train_acc= 0.53470 val_loss= 0.68568 val_acc= 0.54225 test_acc= 0.53433 time= 4.94400
Epoch: 0008 train_loss= 0.67743 train_acc= 0.55674 val_loss= 0.68912 val_acc= 0.51831 test_acc= 0.54727 time= 5.01859
Epoch: 0009 train_loss= 0.67182 train_acc= 0.56549 val_loss= 0.69753 val_acc= 0.52958 test_acc= 0.51407 time= 4.98467
Epoch: 0010 train_loss= 0.67658 train_acc= 0.55080 val_loss= 0.68936 val_acc= 0.52676 test_acc= 0.51519 time= 4.98866
Epoch: 0011 train_loss= 0.67274 train_acc= 0.56486 val_loss= 0.69097 val_acc= 0.53944 test_acc= 0.52729 time= 4.96491
Epoch: 0012 train_loss= 0.67346 train_acc= 0.54658 val_loss= 0.68696 val_acc= 0.53099 test_acc= 0.51491 time= 4.97670
Epoch: 0013 train_loss= 0.66519 train_acc= 0.57627 val_loss= 0.69759 val_acc= 0.57746 test_acc= 0.58301 time= 4.98898
Epoch: 0014 train_loss= 0.65975 train_acc= 0.59597 val_loss= 0.68219 val_acc= 0.59155 test_acc= 0.58188 time= 4.99568
Epoch: 0015 train_loss= 0.65782 train_acc= 0.59565 val_loss= 0.67787 val_acc= 0.57324 test_acc= 0.59764 time= 4.95875
Epoch: 0016 train_loss= 0.65551 train_acc= 0.59691 val_loss= 0.67830 val_acc= 0.58169 test_acc= 0.59313 time= 4.98974
Epoch: 0017 train_loss= 0.65227 train_acc= 0.60613 val_loss= 0.68627 val_acc= 0.56197 test_acc= 0.57147 time= 4.96651
Epoch: 0018 train_loss= 0.64660 train_acc= 0.61676 val_loss= 0.67300 val_acc= 0.60704 test_acc= 0.59876 time= 4.98168
Epoch: 0019 train_loss= 0.64479 train_acc= 0.62254 val_loss= 0.67970 val_acc= 0.57606 test_acc= 0.58497 time= 4.94029
Epoch: 0020 train_loss= 0.64635 train_acc= 0.62316 val_loss= 0.66683 val_acc= 0.61549 test_acc= 0.60411 time= 4.96796
Epoch: 0021 train_loss= 0.63985 train_acc= 0.62879 val_loss= 0.66047 val_acc= 0.59577 test_acc= 0.60355 time= 4.96473
Epoch: 0022 train_loss= 0.63489 train_acc= 0.63129 val_loss= 0.66431 val_acc= 0.60000 test_acc= 0.59623 time= 5.02756
Epoch: 0023 train_loss= 0.63856 train_acc= 0.62426 val_loss= 0.66811 val_acc= 0.58732 test_acc= 0.58554 time= 4.94330
Epoch: 0024 train_loss= 0.63843 train_acc= 0.62613 val_loss= 0.65142 val_acc= 0.60423 test_acc= 0.61480 time= 4.97187
Epoch: 0025 train_loss= 0.62820 train_acc= 0.64739 val_loss= 0.65906 val_acc= 0.61831 test_acc= 0.61367 time= 4.98656
Epoch: 0026 train_loss= 0.62406 train_acc= 0.64364 val_loss= 0.65610 val_acc= 0.61690 test_acc= 0.61621 time= 4.99516
Epoch: 0027 train_loss= 0.62658 train_acc= 0.64270 val_loss= 0.64364 val_acc= 0.63239 test_acc= 0.61227 time= 5.03854
Epoch: 0028 train_loss= 0.62254 train_acc= 0.64458 val_loss= 0.64637 val_acc= 0.63099 test_acc= 0.61790 time= 4.98452
Epoch: 0029 train_loss= 0.61804 train_acc= 0.65036 val_loss= 0.64005 val_acc= 0.63239 test_acc= 0.61649 time= 4.97380
Epoch: 0030 train_loss= 0.63158 train_acc= 0.63614 val_loss= 0.65047 val_acc= 0.61972 test_acc= 0.62324 time= 4.94871
Epoch: 0031 train_loss= 0.62685 train_acc= 0.64379 val_loss= 0.65903 val_acc= 0.61408 test_acc= 0.59539 time= 4.92983
Epoch: 0032 train_loss= 0.61852 train_acc= 0.65192 val_loss= 0.65042 val_acc= 0.62394 test_acc= 0.62183 time= 5.01659
Epoch: 0033 train_loss= 0.62141 train_acc= 0.65052 val_loss= 0.63492 val_acc= 0.64225 test_acc= 0.62859 time= 4.96573
Epoch: 0034 train_loss= 0.61643 train_acc= 0.66146 val_loss= 0.64947 val_acc= 0.62676 test_acc= 0.62409 time= 4.95609
Epoch: 0035 train_loss= 0.61474 train_acc= 0.65692 val_loss= 0.64714 val_acc= 0.64085 test_acc= 0.62212 time= 4.95162
Epoch: 0036 train_loss= 0.61268 train_acc= 0.66130 val_loss= 0.64826 val_acc= 0.63099 test_acc= 0.62971 time= 4.98868
Epoch: 0037 train_loss= 0.61732 train_acc= 0.65442 val_loss= 0.65289 val_acc= 0.61549 test_acc= 0.59651 time= 4.94732
Epoch: 0038 train_loss= 0.61634 train_acc= 0.65599 val_loss= 0.65297 val_acc= 0.64225 test_acc= 0.63534 time= 4.98268
Epoch: 0039 train_loss= 0.61135 train_acc= 0.66114 val_loss= 0.64255 val_acc= 0.62394 test_acc= 0.62465 time= 4.99166
Epoch: 0040 train_loss= 0.60466 train_acc= 0.66818 val_loss= 0.64700 val_acc= 0.62958 test_acc= 0.63253 time= 4.98893
Epoch: 0041 train_loss= 0.61299 train_acc= 0.65708 val_loss= 0.64733 val_acc= 0.63099 test_acc= 0.62521 time= 4.98567
Epoch: 0042 train_loss= 0.60736 train_acc= 0.67349 val_loss= 0.65121 val_acc= 0.63662 test_acc= 0.62577 time= 4.95973
Epoch: 0043 train_loss= 0.59990 train_acc= 0.67115 val_loss= 0.64064 val_acc= 0.63662 test_acc= 0.62999 time= 4.99592
Epoch: 0044 train_loss= 0.60069 train_acc= 0.67240 val_loss= 0.64601 val_acc= 0.64366 test_acc= 0.63844 time= 4.98468
Epoch: 0045 train_loss= 0.60442 train_acc= 0.66443 val_loss= 0.63394 val_acc= 0.64225 test_acc= 0.64012 time= 5.04814
Epoch: 0046 train_loss= 0.59759 train_acc= 0.67318 val_loss= 0.65898 val_acc= 0.63944 test_acc= 0.63478 time= 4.99514
Epoch: 0047 train_loss= 0.59201 train_acc= 0.67146 val_loss= 0.64651 val_acc= 0.63803 test_acc= 0.62943 time= 5.02191
Epoch: 0048 train_loss= 0.59398 train_acc= 0.67537 val_loss= 0.65326 val_acc= 0.63944 test_acc= 0.63225 time= 4.92983
Epoch: 0049 train_loss= 0.59927 train_acc= 0.66943 val_loss= 0.64316 val_acc= 0.65070 test_acc= 0.64575 time= 4.97969
Epoch: 0050 train_loss= 0.59338 train_acc= 0.68053 val_loss= 0.65488 val_acc= 0.63380 test_acc= 0.63844 time= 4.96904
Epoch: 0051 train_loss= 0.59872 train_acc= 0.67224 val_loss= 0.64356 val_acc= 0.65493 test_acc= 0.63618 time= 4.96250
Epoch: 0052 train_loss= 0.58985 train_acc= 0.68037 val_loss= 0.63632 val_acc= 0.65775 test_acc= 0.64547 time= 4.95216
Epoch: 0053 train_loss= 0.59337 train_acc= 0.67818 val_loss= 0.65191 val_acc= 0.64366 test_acc= 0.63900 time= 4.96043
Epoch: 0054 train_loss= 0.59341 train_acc= 0.67209 val_loss= 0.64374 val_acc= 0.64648 test_acc= 0.63900 time= 5.06845
Epoch: 0055 train_loss= 0.59038 train_acc= 0.67412 val_loss= 0.63084 val_acc= 0.65493 test_acc= 0.64800 time= 5.00462
Epoch: 0056 train_loss= 0.58991 train_acc= 0.68146 val_loss= 0.64089 val_acc= 0.66056 test_acc= 0.63759 time= 5.08230
Epoch: 0057 train_loss= 0.58881 train_acc= 0.67974 val_loss= 0.63646 val_acc= 0.65775 test_acc= 0.65504 time= 5.02537
Epoch: 0058 train_loss= 0.59118 train_acc= 0.68021 val_loss= 0.65055 val_acc= 0.64507 test_acc= 0.65110 time= 5.11517
Epoch: 0059 train_loss= 0.58563 train_acc= 0.68318 val_loss= 0.64125 val_acc= 0.64930 test_acc= 0.65307 time= 5.12387
Epoch: 0060 train_loss= 0.58107 train_acc= 0.68693 val_loss= 0.65093 val_acc= 0.65634 test_acc= 0.64885 time= 5.06449
Epoch: 0061 train_loss= 0.58328 train_acc= 0.68943 val_loss= 0.65025 val_acc= 0.64507 test_acc= 0.64434 time= 5.13727
Epoch: 0062 train_loss= 0.59260 train_acc= 0.66865 val_loss= 0.64489 val_acc= 0.64366 test_acc= 0.64631 time= 5.07548
Epoch: 0063 train_loss= 0.57940 train_acc= 0.68428 val_loss= 0.64480 val_acc= 0.63944 test_acc= 0.64547 time= 5.01047
Epoch: 0064 train_loss= 0.57667 train_acc= 0.69256 val_loss= 0.64620 val_acc= 0.64225 test_acc= 0.65869 time= 4.95010
Epoch: 0065 train_loss= 0.57287 train_acc= 0.69756 val_loss= 0.64612 val_acc= 0.65352 test_acc= 0.65926 time= 5.08206
Epoch: 0066 train_loss= 0.57957 train_acc= 0.68834 val_loss= 0.64258 val_acc= 0.65493 test_acc= 0.64885 time= 5.06247
Epoch: 0067 train_loss= 0.57854 train_acc= 0.69068 val_loss= 0.63763 val_acc= 0.64789 test_acc= 0.64828 time= 5.20110
Epoch: 0068 train_loss= 0.57406 train_acc= 0.69834 val_loss= 0.64716 val_acc= 0.65493 test_acc= 0.65869 time= 5.20409
Epoch: 0069 train_loss= 0.58027 train_acc= 0.68897 val_loss= 0.64970 val_acc= 0.64085 test_acc= 0.65138 time= 4.97971
Epoch: 0070 train_loss= 0.57864 train_acc= 0.68646 val_loss= 0.64732 val_acc= 0.64930 test_acc= 0.65616 time= 5.00182
Epoch: 0071 train_loss= 0.57718 train_acc= 0.69193 val_loss= 0.65608 val_acc= 0.63944 test_acc= 0.64800 time= 4.95645
Epoch: 0072 train_loss= 0.57407 train_acc= 0.69381 val_loss= 0.66782 val_acc= 0.63944 test_acc= 0.63675 time= 4.98421
Epoch: 0073 train_loss= 0.56996 train_acc= 0.69897 val_loss= 0.66796 val_acc= 0.65352 test_acc= 0.65053 time= 4.94778
Epoch: 0074 train_loss= 0.56765 train_acc= 0.68990 val_loss= 0.66288 val_acc= 0.64789 test_acc= 0.65138 time= 4.96218
Epoch: 0075 train_loss= 0.57340 train_acc= 0.69272 val_loss= 0.65589 val_acc= 0.64507 test_acc= 0.65813 time= 4.95720
Epoch: 0076 train_loss= 0.56416 train_acc= 0.70288 val_loss= 0.65214 val_acc= 0.63803 test_acc= 0.66826 time= 4.99068
Epoch: 0077 train_loss= 0.57238 train_acc= 0.70084 val_loss= 0.64875 val_acc= 0.63521 test_acc= 0.66545 time= 4.97100
Epoch: 0078 train_loss= 0.57246 train_acc= 0.68943 val_loss= 0.65582 val_acc= 0.64366 test_acc= 0.66770 time= 4.96927
Epoch: 0079 train_loss= 0.56804 train_acc= 0.70006 val_loss= 0.66752 val_acc= 0.64225 test_acc= 0.66629 time= 4.93702
Epoch: 0080 train_loss= 0.56916 train_acc= 0.69709 val_loss= 0.66552 val_acc= 0.64225 test_acc= 0.65869 time= 4.96917
Epoch: 0081 train_loss= 0.56625 train_acc= 0.70585 val_loss= 0.66044 val_acc= 0.63803 test_acc= 0.66432 time= 4.93676
Epoch: 0082 train_loss= 0.55204 train_acc= 0.71132 val_loss= 0.67733 val_acc= 0.65211 test_acc= 0.67023 time= 5.00409
Epoch: 0083 train_loss= 0.56306 train_acc= 0.70006 val_loss= 0.66638 val_acc= 0.64225 test_acc= 0.66826 time= 4.92743
Epoch: 0084 train_loss= 0.57073 train_acc= 0.70131 val_loss= 0.64397 val_acc= 0.63803 test_acc= 0.65954 time= 5.01360
Epoch: 0085 train_loss= 0.55809 train_acc= 0.70600 val_loss= 0.65279 val_acc= 0.65352 test_acc= 0.67501 time= 4.96874
Epoch: 0086 train_loss= 0.55939 train_acc= 0.69975 val_loss= 0.67705 val_acc= 0.63662 test_acc= 0.65476 time= 5.01061
Epoch: 0087 train_loss= 0.56485 train_acc= 0.70334 val_loss= 0.66047 val_acc= 0.65352 test_acc= 0.66320 time= 4.97470
Epoch: 0088 train_loss= 0.56820 train_acc= 0.70225 val_loss= 0.66241 val_acc= 0.65352 test_acc= 0.66207 time= 5.01859
Epoch: 0089 train_loss= 0.56629 train_acc= 0.70272 val_loss= 0.67573 val_acc= 0.64648 test_acc= 0.66263 time= 4.97581
Epoch: 0090 train_loss= 0.56202 train_acc= 0.70194 val_loss= 0.64849 val_acc= 0.65775 test_acc= 0.66685 time= 4.97205
Epoch: 0091 train_loss= 0.55850 train_acc= 0.70475 val_loss= 0.65591 val_acc= 0.65211 test_acc= 0.65954 time= 4.95616
Epoch: 0092 train_loss= 0.55660 train_acc= 0.70491 val_loss= 0.68784 val_acc= 0.64789 test_acc= 0.66123 time= 4.98866
Epoch: 0093 train_loss= 0.55643 train_acc= 0.71241 val_loss= 0.66816 val_acc= 0.65211 test_acc= 0.65476 time= 4.95595
Epoch: 0094 train_loss= 0.55211 train_acc= 0.70756 val_loss= 0.68170 val_acc= 0.65211 test_acc= 0.65025 time= 5.02158
Epoch: 0095 train_loss= 0.55504 train_acc= 0.70835 val_loss= 0.65302 val_acc= 0.66901 test_acc= 0.66882 time= 5.00530
Epoch: 0096 train_loss= 0.55029 train_acc= 0.71647 val_loss= 0.66579 val_acc= 0.66901 test_acc= 0.66911 time= 5.02058
Epoch: 0097 train_loss= 0.55393 train_acc= 0.70991 val_loss= 0.65799 val_acc= 0.65775 test_acc= 0.66404 time= 5.00297
Epoch: 0098 train_loss= 0.55430 train_acc= 0.70710 val_loss= 0.68202 val_acc= 0.66056 test_acc= 0.65701 time= 4.98578
Epoch: 0099 train_loss= 0.55447 train_acc= 0.70803 val_loss= 0.65508 val_acc= 0.66197 test_acc= 0.66798 time= 4.98626
Epoch: 0100 train_loss= 0.54873 train_acc= 0.71257 val_loss= 0.64963 val_acc= 0.65634 test_acc= 0.67361 time= 4.99864
Epoch: 0101 train_loss= 0.54908 train_acc= 0.71132 val_loss= 0.66229 val_acc= 0.66056 test_acc= 0.66770 time= 4.96474
Epoch: 0102 train_loss= 0.55235 train_acc= 0.71647 val_loss= 0.66755 val_acc= 0.66197 test_acc= 0.66939 time= 4.95418
Epoch: 0103 train_loss= 0.54913 train_acc= 0.71491 val_loss= 0.65488 val_acc= 0.66338 test_acc= 0.67839 time= 4.96672
Epoch: 0104 train_loss= 0.54182 train_acc= 0.71522 val_loss= 0.66820 val_acc= 0.65493 test_acc= 0.66460 time= 5.00501
Epoch: 0105 train_loss= 0.53851 train_acc= 0.72288 val_loss= 0.67286 val_acc= 0.65634 test_acc= 0.67248 time= 5.01858
Epoch: 0106 train_loss= 0.53727 train_acc= 0.71679 val_loss= 0.66831 val_acc= 0.66056 test_acc= 0.67304 time= 4.98966
Epoch: 0107 train_loss= 0.54169 train_acc= 0.72319 val_loss= 0.66395 val_acc= 0.66620 test_acc= 0.67361 time= 4.92732
Epoch: 0108 train_loss= 0.54057 train_acc= 0.72226 val_loss= 0.67008 val_acc= 0.66197 test_acc= 0.67164 time= 4.96099
Epoch: 0109 train_loss= 0.53815 train_acc= 0.71819 val_loss= 0.66328 val_acc= 0.64648 test_acc= 0.67389 time= 4.95283
Epoch: 0110 train_loss= 0.54955 train_acc= 0.71569 val_loss= 0.66743 val_acc= 0.65915 test_acc= 0.67304 time= 4.99247
Epoch: 0111 train_loss= 0.53546 train_acc= 0.72648 val_loss= 0.66124 val_acc= 0.66901 test_acc= 0.67136 time= 4.98678
Epoch: 0112 train_loss= 0.53669 train_acc= 0.71804 val_loss= 0.66199 val_acc= 0.66338 test_acc= 0.66601 time= 4.99452
Epoch: 0113 train_loss= 0.54023 train_acc= 0.71991 val_loss= 0.67346 val_acc= 0.64648 test_acc= 0.65982 time= 4.96174
Epoch: 0114 train_loss= 0.53957 train_acc= 0.72382 val_loss= 0.69259 val_acc= 0.64930 test_acc= 0.67445 time= 4.98375
Epoch: 0115 train_loss= 0.53671 train_acc= 0.71944 val_loss= 0.65979 val_acc= 0.65352 test_acc= 0.66995 time= 4.93533
Epoch: 0116 train_loss= 0.53831 train_acc= 0.72132 val_loss= 0.65991 val_acc= 0.65915 test_acc= 0.67614 time= 4.98467
Epoch: 0117 train_loss= 0.52836 train_acc= 0.73070 val_loss= 0.66416 val_acc= 0.65775 test_acc= 0.67895 time= 4.95901
Epoch: 0118 train_loss= 0.52833 train_acc= 0.73367 val_loss= 0.66063 val_acc= 0.66479 test_acc= 0.67164 time= 4.98524
Epoch: 0119 train_loss= 0.53873 train_acc= 0.72319 val_loss= 0.65915 val_acc= 0.65775 test_acc= 0.67333 time= 5.00176
Epoch: 0120 train_loss= 0.53022 train_acc= 0.72663 val_loss= 0.67533 val_acc= 0.66479 test_acc= 0.68346 time= 4.98169
Epoch: 0121 train_loss= 0.52038 train_acc= 0.73398 val_loss= 0.66636 val_acc= 0.67465 test_acc= 0.66742 time= 4.95211
Epoch: 0122 train_loss= 0.53495 train_acc= 0.72319 val_loss= 0.66245 val_acc= 0.66056 test_acc= 0.67839 time= 4.99012
Epoch: 0123 train_loss= 0.53108 train_acc= 0.72460 val_loss= 0.66969 val_acc= 0.67183 test_acc= 0.68008 time= 4.99652
Epoch: 0124 train_loss= 0.52125 train_acc= 0.74008 val_loss= 0.66424 val_acc= 0.66761 test_acc= 0.67614 time= 5.00263
Epoch: 0125 train_loss= 0.52038 train_acc= 0.73242 val_loss= 0.66380 val_acc= 0.66901 test_acc= 0.67811 time= 4.97313
Epoch: 0126 train_loss= 0.53327 train_acc= 0.71835 val_loss= 0.66741 val_acc= 0.65352 test_acc= 0.67642 time= 4.98268
Epoch: 0127 train_loss= 0.51911 train_acc= 0.74461 val_loss= 0.67886 val_acc= 0.67042 test_acc= 0.67051 time= 4.97729
Epoch: 0128 train_loss= 0.51586 train_acc= 0.73992 val_loss= 0.67685 val_acc= 0.66761 test_acc= 0.67670 time= 4.96672
Epoch: 0129 train_loss= 0.52237 train_acc= 0.73836 val_loss= 0.67010 val_acc= 0.66056 test_acc= 0.67698 time= 4.95643
Epoch: 0130 train_loss= 0.51809 train_acc= 0.72992 val_loss= 0.68235 val_acc= 0.67183 test_acc= 0.67586 time= 5.00884
Epoch: 0131 train_loss= 0.51905 train_acc= 0.73742 val_loss= 0.68609 val_acc= 0.66901 test_acc= 0.67980 time= 4.96295
Epoch: 0132 train_loss= 0.51128 train_acc= 0.73836 val_loss= 0.67983 val_acc= 0.66761 test_acc= 0.68796 time= 4.96868
Epoch: 0133 train_loss= 0.51202 train_acc= 0.74304 val_loss= 0.68441 val_acc= 0.66056 test_acc= 0.66714 time= 4.95504
Epoch: 0134 train_loss= 0.50597 train_acc= 0.75102 val_loss= 0.72047 val_acc= 0.66761 test_acc= 0.68008 time= 4.94378
Epoch: 0135 train_loss= 0.51088 train_acc= 0.74789 val_loss= 0.69227 val_acc= 0.66479 test_acc= 0.67811 time= 4.97969
Epoch: 0136 train_loss= 0.50734 train_acc= 0.74476 val_loss= 0.68167 val_acc= 0.66338 test_acc= 0.67530 time= 4.98000
Epoch: 0137 train_loss= 0.51246 train_acc= 0.73773 val_loss= 0.70116 val_acc= 0.66338 test_acc= 0.67755 time= 4.94073
Epoch: 0138 train_loss= 0.52333 train_acc= 0.73085 val_loss= 0.66459 val_acc= 0.66620 test_acc= 0.68458 time= 4.98419
Epoch: 0139 train_loss= 0.50838 train_acc= 0.73914 val_loss= 0.69545 val_acc= 0.66197 test_acc= 0.68036 time= 4.99279
Epoch: 0140 train_loss= 0.50828 train_acc= 0.74461 val_loss= 0.67580 val_acc= 0.66479 test_acc= 0.67952 time= 4.96573
Epoch: 0141 train_loss= 0.50259 train_acc= 0.74789 val_loss= 0.68539 val_acc= 0.67042 test_acc= 0.68852 time= 4.94604
Epoch: 0142 train_loss= 0.50153 train_acc= 0.75148 val_loss= 0.68514 val_acc= 0.67183 test_acc= 0.69584 time= 4.95781
Epoch: 0143 train_loss= 0.50346 train_acc= 0.75352 val_loss= 0.66855 val_acc= 0.67465 test_acc= 0.69668 time= 4.96714
Epoch: 0144 train_loss= 0.50636 train_acc= 0.74820 val_loss= 0.67624 val_acc= 0.65775 test_acc= 0.68852 time= 4.98181
Epoch: 0145 train_loss= 0.50029 train_acc= 0.75023 val_loss= 0.66726 val_acc= 0.66901 test_acc= 0.69443 time= 4.93978
Epoch: 0146 train_loss= 0.49692 train_acc= 0.74820 val_loss= 0.67311 val_acc= 0.67746 test_acc= 0.69443 time= 5.01160
Epoch: 0147 train_loss= 0.49912 train_acc= 0.74601 val_loss= 0.69719 val_acc= 0.67606 test_acc= 0.68008 time= 4.97319
Epoch: 0148 train_loss= 0.48868 train_acc= 0.75664 val_loss= 0.69013 val_acc= 0.67324 test_acc= 0.69105 time= 5.01582
Epoch: 0149 train_loss= 0.48671 train_acc= 0.76071 val_loss= 0.70432 val_acc= 0.67465 test_acc= 0.68317 time= 4.98900
Epoch: 0150 train_loss= 0.49042 train_acc= 0.75805 val_loss= 0.68115 val_acc= 0.67183 test_acc= 0.68824 time= 4.98156
Epoch: 0151 train_loss= 0.49113 train_acc= 0.75774 val_loss= 0.68832 val_acc= 0.67465 test_acc= 0.68346 time= 4.97770
Epoch: 0152 train_loss= 0.49713 train_acc= 0.75195 val_loss= 0.68923 val_acc= 0.66901 test_acc= 0.69612 time= 5.02513
Epoch: 0153 train_loss= 0.48653 train_acc= 0.76336 val_loss= 0.69096 val_acc= 0.67606 test_acc= 0.69246 time= 5.01855
Epoch: 0154 train_loss= 0.48528 train_acc= 0.76102 val_loss= 0.69321 val_acc= 0.68169 test_acc= 0.69387 time= 5.01182
Epoch: 0155 train_loss= 0.48941 train_acc= 0.75633 val_loss= 0.68016 val_acc= 0.67887 test_acc= 0.69612 time= 4.94686
Epoch: 0156 train_loss= 0.48943 train_acc= 0.76024 val_loss= 0.67278 val_acc= 0.67042 test_acc= 0.69077 time= 4.99375
Epoch: 0157 train_loss= 0.48270 train_acc= 0.77259 val_loss= 0.67920 val_acc= 0.67606 test_acc= 0.68768 time= 4.95768
Epoch: 0158 train_loss= 0.48653 train_acc= 0.76149 val_loss= 0.72142 val_acc= 0.66197 test_acc= 0.68036 time= 4.99765
Epoch: 0159 train_loss= 0.48814 train_acc= 0.75742 val_loss= 0.70440 val_acc= 0.66901 test_acc= 0.68965 time= 4.96672
Epoch: 0160 train_loss= 0.46947 train_acc= 0.77305 val_loss= 0.69346 val_acc= 0.66761 test_acc= 0.68796 time= 4.94904
Epoch: 0161 train_loss= 0.47349 train_acc= 0.76915 val_loss= 0.70256 val_acc= 0.66338 test_acc= 0.69865 time= 4.96292
Epoch: 0162 train_loss= 0.47093 train_acc= 0.77305 val_loss= 0.72137 val_acc= 0.66620 test_acc= 0.68739 time= 4.97870
Epoch: 0163 train_loss= 0.47031 train_acc= 0.77368 val_loss= 0.69899 val_acc= 0.68310 test_acc= 0.68768 time= 4.96324
Epoch: 0164 train_loss= 0.47593 train_acc= 0.77493 val_loss= 0.69749 val_acc= 0.67324 test_acc= 0.68908 time= 4.94618
Epoch: 0165 train_loss= 0.47000 train_acc= 0.76633 val_loss= 0.72306 val_acc= 0.65634 test_acc= 0.69133 time= 5.02387
Epoch: 0166 train_loss= 0.47706 train_acc= 0.76508 val_loss= 0.68819 val_acc= 0.66338 test_acc= 0.69133 time= 4.96905
Epoch: 0167 train_loss= 0.46394 train_acc= 0.77462 val_loss= 0.69400 val_acc= 0.66761 test_acc= 0.69471 time= 4.96885
Epoch: 0168 train_loss= 0.46237 train_acc= 0.77555 val_loss= 0.70935 val_acc= 0.67465 test_acc= 0.69584 time= 4.99166
Epoch: 0169 train_loss= 0.46857 train_acc= 0.77524 val_loss= 0.70653 val_acc= 0.67887 test_acc= 0.69133 time= 4.97471
Epoch: 0170 train_loss= 0.46968 train_acc= 0.77227 val_loss= 0.69461 val_acc= 0.67887 test_acc= 0.69133 time= 5.00219
Epoch: 0171 train_loss= 0.46122 train_acc= 0.77649 val_loss= 0.69894 val_acc= 0.67324 test_acc= 0.69668 time= 4.98766
Epoch: 0172 train_loss= 0.45405 train_acc= 0.77977 val_loss= 0.70575 val_acc= 0.66901 test_acc= 0.69612 time= 5.01872
Epoch: 0173 train_loss= 0.45008 train_acc= 0.78540 val_loss= 0.72087 val_acc= 0.66620 test_acc= 0.68852 time= 4.98065
Epoch: 0174 train_loss= 0.46957 train_acc= 0.77227 val_loss= 0.69190 val_acc= 0.67887 test_acc= 0.69105 time= 4.97172
Epoch: 0175 train_loss= 0.45999 train_acc= 0.77821 val_loss= 0.71654 val_acc= 0.67324 test_acc= 0.69752 time= 4.97172
Epoch: 0176 train_loss= 0.45098 train_acc= 0.78399 val_loss= 0.71002 val_acc= 0.67606 test_acc= 0.69865 time= 5.01759
Epoch: 0177 train_loss= 0.45315 train_acc= 0.78478 val_loss= 0.71155 val_acc= 0.68592 test_acc= 0.69246 time= 4.97595
Epoch: 0178 train_loss= 0.45232 train_acc= 0.78462 val_loss= 0.71577 val_acc= 0.67746 test_acc= 0.69415 time= 4.95676
Epoch: 0179 train_loss= 0.45345 train_acc= 0.78196 val_loss= 0.68200 val_acc= 0.68028 test_acc= 0.70062 time= 5.01559
Epoch: 0180 train_loss= 0.45020 train_acc= 0.78618 val_loss= 0.71269 val_acc= 0.67183 test_acc= 0.69668 time= 4.95874
Epoch: 0181 train_loss= 0.44114 train_acc= 0.78978 val_loss= 0.67986 val_acc= 0.67465 test_acc= 0.70990 time= 4.94629
Epoch: 0182 train_loss= 0.44462 train_acc= 0.79165 val_loss= 0.70316 val_acc= 0.69014 test_acc= 0.70006 time= 4.96818
Epoch: 0183 train_loss= 0.44734 train_acc= 0.78759 val_loss= 0.72214 val_acc= 0.67887 test_acc= 0.69415 time= 5.08965
Epoch: 0184 train_loss= 0.44845 train_acc= 0.78556 val_loss= 0.70294 val_acc= 0.68873 test_acc= 0.70146 time= 4.97769
Epoch: 0185 train_loss= 0.44025 train_acc= 0.78665 val_loss= 0.69748 val_acc= 0.69296 test_acc= 0.70287 time= 4.99449
Epoch: 0186 train_loss= 0.44616 train_acc= 0.78478 val_loss= 0.72333 val_acc= 0.67887 test_acc= 0.70343 time= 4.94402
Epoch: 0187 train_loss= 0.44732 train_acc= 0.78478 val_loss= 0.69303 val_acc= 0.68592 test_acc= 0.70709 time= 4.97866
Epoch: 0188 train_loss= 0.43935 train_acc= 0.79118 val_loss= 0.71342 val_acc= 0.68873 test_acc= 0.70174 time= 4.96473
Epoch: 0189 train_loss= 0.44050 train_acc= 0.79056 val_loss= 0.72025 val_acc= 0.68451 test_acc= 0.70315 time= 4.97172
Epoch: 0190 train_loss= 0.43303 train_acc= 0.79337 val_loss= 0.73369 val_acc= 0.69014 test_acc= 0.70371 time= 5.01559
Epoch: 0191 train_loss= 0.43840 train_acc= 0.79400 val_loss= 0.73038 val_acc= 0.67746 test_acc= 0.70118 time= 5.16619
Epoch: 0192 train_loss= 0.43250 train_acc= 0.79712 val_loss= 0.71148 val_acc= 0.68169 test_acc= 0.70231 time= 5.29484
Epoch: 0193 train_loss= 0.42585 train_acc= 0.80103 val_loss= 0.73560 val_acc= 0.68310 test_acc= 0.69977 time= 5.09471
Epoch: 0194 train_loss= 0.42801 train_acc= 0.79619 val_loss= 0.71638 val_acc= 0.68451 test_acc= 0.69977 time= 4.97954
Epoch: 0195 train_loss= 0.42988 train_acc= 0.79587 val_loss= 0.72307 val_acc= 0.68169 test_acc= 0.70174 time= 4.99864
Epoch: 0196 train_loss= 0.42701 train_acc= 0.79384 val_loss= 0.70023 val_acc= 0.68592 test_acc= 0.69921 time= 4.98510
Epoch: 0197 train_loss= 0.42972 train_acc= 0.79353 val_loss= 0.71931 val_acc= 0.68592 test_acc= 0.69865 time= 4.98434
Epoch: 0198 train_loss= 0.43490 train_acc= 0.78915 val_loss= 0.70607 val_acc= 0.68028 test_acc= 0.70597 time= 4.97593
Epoch: 0199 train_loss= 0.41755 train_acc= 0.79884 val_loss= 0.71773 val_acc= 0.68169 test_acc= 0.70737 time= 4.95259
Epoch: 0200 train_loss= 0.41566 train_acc= 0.80025 val_loss= 0.72945 val_acc= 0.68169 test_acc= 0.70625 time= 4.98809
Epoch: 0201 train_loss= 0.41714 train_acc= 0.80181 val_loss= 0.74671 val_acc= 0.68310 test_acc= 0.70934 time= 4.99951
Epoch: 0202 train_loss= 0.40958 train_acc= 0.80478 val_loss= 0.72055 val_acc= 0.69014 test_acc= 0.70484 time= 4.95974
Epoch: 0203 train_loss= 0.41111 train_acc= 0.80541 val_loss= 0.75938 val_acc= 0.67465 test_acc= 0.70343 time= 4.98591
Epoch: 0204 train_loss= 0.40949 train_acc= 0.80556 val_loss= 0.75863 val_acc= 0.69014 test_acc= 0.70203 time= 4.97869
Epoch: 0205 train_loss= 0.41858 train_acc= 0.80916 val_loss= 0.73022 val_acc= 0.69014 test_acc= 0.70597 time= 4.99465
Epoch: 0206 train_loss= 0.41316 train_acc= 0.80510 val_loss= 0.71530 val_acc= 0.69577 test_acc= 0.70006 time= 5.08947
Epoch: 0207 train_loss= 0.41662 train_acc= 0.80775 val_loss= 0.76140 val_acc= 0.68873 test_acc= 0.69809 time= 4.97380
Epoch: 0208 train_loss= 0.41664 train_acc= 0.80306 val_loss= 0.74439 val_acc= 0.68310 test_acc= 0.70287 time= 5.03718
Epoch: 0209 train_loss= 0.39808 train_acc= 0.81572 val_loss= 0.73347 val_acc= 0.69014 test_acc= 0.70878 time= 4.95867
Epoch: 0210 train_loss= 0.39934 train_acc= 0.81229 val_loss= 0.74146 val_acc= 0.69155 test_acc= 0.70822 time= 4.99159
Epoch: 0211 train_loss= 0.40220 train_acc= 0.81166 val_loss= 0.73373 val_acc= 0.68310 test_acc= 0.70681 time= 4.99915
Epoch: 0212 train_loss= 0.40562 train_acc= 0.81385 val_loss= 0.72305 val_acc= 0.68310 test_acc= 0.70343 time= 4.96572
Epoch: 0213 train_loss= 0.40094 train_acc= 0.81416 val_loss= 0.72515 val_acc= 0.69437 test_acc= 0.70540 time= 4.99764
Epoch: 0214 train_loss= 0.40088 train_acc= 0.81150 val_loss= 0.72188 val_acc= 0.69155 test_acc= 0.70653 time= 4.98059
Epoch: 0215 train_loss= 0.39861 train_acc= 0.81322 val_loss= 0.74067 val_acc= 0.69859 test_acc= 0.69781 time= 4.99095
Epoch: 0216 train_loss= 0.41137 train_acc= 0.80978 val_loss= 0.71737 val_acc= 0.66620 test_acc= 0.70737 time= 5.00592
Epoch: 0217 train_loss= 0.40234 train_acc= 0.81197 val_loss= 0.71265 val_acc= 0.69296 test_acc= 0.71159 time= 4.93235
Epoch: 0218 train_loss= 0.41343 train_acc= 0.80603 val_loss= 0.70428 val_acc= 0.68310 test_acc= 0.71272 time= 4.98467
Epoch: 0219 train_loss= 0.39096 train_acc= 0.81932 val_loss= 0.72731 val_acc= 0.69718 test_acc= 0.71159 time= 5.09538
Epoch: 0220 train_loss= 0.39463 train_acc= 0.82010 val_loss= 0.74144 val_acc= 0.69155 test_acc= 0.70850 time= 5.02708
Epoch: 0221 train_loss= 0.38607 train_acc= 0.82651 val_loss= 0.75153 val_acc= 0.68451 test_acc= 0.71159 time= 4.96146
Epoch: 0222 train_loss= 0.39322 train_acc= 0.81760 val_loss= 0.72800 val_acc= 0.68310 test_acc= 0.70709 time= 4.97271
Epoch: 0223 train_loss= 0.38014 train_acc= 0.82369 val_loss= 0.73129 val_acc= 0.68873 test_acc= 0.71075 time= 4.98902
Epoch: 0224 train_loss= 0.38032 train_acc= 0.82807 val_loss= 0.73083 val_acc= 0.68451 test_acc= 0.70990 time= 4.97375
Epoch: 0225 train_loss= 0.38770 train_acc= 0.81932 val_loss= 0.73536 val_acc= 0.69296 test_acc= 0.70822 time= 4.96573
Epoch: 0226 train_loss= 0.38128 train_acc= 0.82323 val_loss= 0.73027 val_acc= 0.68028 test_acc= 0.71047 time= 5.05168
Epoch: 0227 train_loss= 0.38259 train_acc= 0.82213 val_loss= 0.74099 val_acc= 0.68732 test_acc= 0.70934 time= 4.97670
Epoch: 0228 train_loss= 0.38452 train_acc= 0.82870 val_loss= 0.74915 val_acc= 0.68873 test_acc= 0.71244 time= 5.00163
Epoch: 0229 train_loss= 0.38967 train_acc= 0.82323 val_loss= 0.74114 val_acc= 0.69014 test_acc= 0.70793 time= 4.96084
Epoch: 0230 train_loss= 0.37379 train_acc= 0.83057 val_loss= 0.75612 val_acc= 0.69859 test_acc= 0.70934 time= 5.05948
Epoch: 0231 train_loss= 0.37851 train_acc= 0.82604 val_loss= 0.73410 val_acc= 0.67606 test_acc= 0.70034 time= 5.05633
Epoch: 0232 train_loss= 0.37561 train_acc= 0.82776 val_loss= 0.75393 val_acc= 0.68592 test_acc= 0.71216 time= 5.00786
Epoch: 0233 train_loss= 0.37317 train_acc= 0.83042 val_loss= 0.76086 val_acc= 0.68732 test_acc= 0.71328 time= 4.99045
Epoch: 0234 train_loss= 0.37371 train_acc= 0.83182 val_loss= 0.74519 val_acc= 0.69155 test_acc= 0.71047 time= 5.04354
Epoch: 0235 train_loss= 0.37036 train_acc= 0.83088 val_loss= 0.75217 val_acc= 0.69577 test_acc= 0.71666 time= 4.98900
Epoch: 0236 train_loss= 0.37925 train_acc= 0.82776 val_loss= 0.74192 val_acc= 0.70141 test_acc= 0.70962 time= 5.02807
Epoch: 0237 train_loss= 0.36757 train_acc= 0.83854 val_loss= 0.76673 val_acc= 0.69155 test_acc= 0.70906 time= 4.99865
Epoch: 0238 train_loss= 0.37365 train_acc= 0.83073 val_loss= 0.75775 val_acc= 0.68310 test_acc= 0.70625 time= 5.01260
Epoch: 0239 train_loss= 0.37368 train_acc= 0.82588 val_loss= 0.74376 val_acc= 0.69718 test_acc= 0.70765 time= 4.98861
Epoch: 0240 train_loss= 0.36138 train_acc= 0.83682 val_loss= 0.74463 val_acc= 0.69296 test_acc= 0.71553 time= 5.00717
Epoch: 0241 train_loss= 0.36988 train_acc= 0.83292 val_loss= 0.75183 val_acc= 0.69155 test_acc= 0.71581 time= 4.99652
Epoch: 0242 train_loss= 0.36990 train_acc= 0.82995 val_loss= 0.76329 val_acc= 0.68310 test_acc= 0.71469 time= 4.99119
Epoch: 0243 train_loss= 0.36588 train_acc= 0.83667 val_loss= 0.74275 val_acc= 0.69718 test_acc= 0.70456 time= 4.98871
Epoch: 0244 train_loss= 0.37020 train_acc= 0.83010 val_loss= 0.75178 val_acc= 0.69296 test_acc= 0.71131 time= 5.00562
Epoch: 0245 train_loss= 0.36515 train_acc= 0.83542 val_loss= 0.75577 val_acc= 0.69155 test_acc= 0.70878 time= 4.97261
Epoch: 0246 train_loss= 0.36080 train_acc= 0.83776 val_loss= 0.76941 val_acc= 0.69296 test_acc= 0.71244 time= 4.97107
Epoch: 0247 train_loss= 0.36693 train_acc= 0.83448 val_loss= 0.73072 val_acc= 0.69859 test_acc= 0.70878 time= 5.03037
Epoch: 0248 train_loss= 0.34992 train_acc= 0.84417 val_loss= 0.79395 val_acc= 0.70141 test_acc= 0.70653 time= 4.99265
Epoch: 0249 train_loss= 0.35761 train_acc= 0.83854 val_loss= 0.75043 val_acc= 0.70282 test_acc= 0.71187 time= 5.02531
Epoch: 0250 train_loss= 0.36541 train_acc= 0.83854 val_loss= 0.74052 val_acc= 0.69437 test_acc= 0.70456 time= 4.99217
Epoch: 0251 train_loss= 0.35093 train_acc= 0.84495 val_loss= 0.77135 val_acc= 0.69859 test_acc= 0.70906 time= 4.98319
Epoch: 0252 train_loss= 0.35845 train_acc= 0.83839 val_loss= 0.75024 val_acc= 0.69437 test_acc= 0.70765 time= 4.97015
Epoch: 0253 train_loss= 0.35670 train_acc= 0.84151 val_loss= 0.76338 val_acc= 0.70282 test_acc= 0.70737 time= 5.02670
Epoch: 0254 train_loss= 0.34835 train_acc= 0.83948 val_loss= 0.76086 val_acc= 0.70704 test_acc= 0.71244 time= 5.01911
Epoch: 0255 train_loss= 0.35931 train_acc= 0.83870 val_loss= 0.74746 val_acc= 0.69859 test_acc= 0.71609 time= 4.98206
Epoch: 0256 train_loss= 0.34308 train_acc= 0.84261 val_loss= 0.76793 val_acc= 0.70563 test_acc= 0.71947 time= 5.01887
Epoch: 0257 train_loss= 0.35208 train_acc= 0.84276 val_loss= 0.78378 val_acc= 0.70000 test_acc= 0.71216 time= 5.07395
Epoch: 0258 train_loss= 0.35098 train_acc= 0.84558 val_loss= 0.74788 val_acc= 0.70986 test_acc= 0.71694 time= 5.03354
Epoch: 0259 train_loss= 0.34421 train_acc= 0.84823 val_loss= 0.77456 val_acc= 0.70282 test_acc= 0.71835 time= 4.99489
Epoch: 0260 train_loss= 0.34757 train_acc= 0.84745 val_loss= 0.75748 val_acc= 0.69014 test_acc= 0.71750 time= 4.98999
Epoch: 0261 train_loss= 0.34313 train_acc= 0.84433 val_loss= 0.78820 val_acc= 0.70563 test_acc= 0.71778 time= 5.03606
Epoch: 0262 train_loss= 0.34806 train_acc= 0.84386 val_loss= 0.74050 val_acc= 0.71268 test_acc= 0.71750 time= 4.96273
Epoch: 0263 train_loss= 0.34410 train_acc= 0.84292 val_loss= 0.77108 val_acc= 0.70986 test_acc= 0.72060 time= 5.03164
Epoch: 0264 train_loss= 0.33754 train_acc= 0.85214 val_loss= 0.76244 val_acc= 0.72535 test_acc= 0.71919 time= 5.02158
Epoch: 0265 train_loss= 0.34625 train_acc= 0.84761 val_loss= 0.74414 val_acc= 0.71268 test_acc= 0.71525 time= 5.01659
Epoch: 0266 train_loss= 0.32765 train_acc= 0.85511 val_loss= 0.73852 val_acc= 0.72113 test_acc= 0.72200 time= 5.03354
Epoch: 0267 train_loss= 0.34910 train_acc= 0.84261 val_loss= 0.72343 val_acc= 0.70845 test_acc= 0.71750 time= 5.03354
Epoch: 0268 train_loss= 0.33532 train_acc= 0.84776 val_loss= 0.76488 val_acc= 0.71549 test_acc= 0.71806 time= 5.03055
Epoch: 0269 train_loss= 0.32657 train_acc= 0.85261 val_loss= 0.78776 val_acc= 0.70704 test_acc= 0.71131 time= 4.97796
Epoch: 0270 train_loss= 0.34652 train_acc= 0.84276 val_loss= 0.74937 val_acc= 0.70141 test_acc= 0.71750 time= 5.00198
Epoch: 0271 train_loss= 0.33882 train_acc= 0.85089 val_loss= 0.76570 val_acc= 0.71549 test_acc= 0.71806 time= 5.00761
Epoch: 0272 train_loss= 0.33545 train_acc= 0.84776 val_loss= 0.79784 val_acc= 0.70704 test_acc= 0.71356 time= 4.95475
Epoch: 0273 train_loss= 0.33411 train_acc= 0.84714 val_loss= 0.77424 val_acc= 0.71127 test_acc= 0.71525 time= 4.98905
Epoch: 0274 train_loss= 0.33638 train_acc= 0.85152 val_loss= 0.77830 val_acc= 0.71268 test_acc= 0.71778 time= 5.03374
Epoch: 0275 train_loss= 0.32634 train_acc= 0.85636 val_loss= 0.78228 val_acc= 0.70704 test_acc= 0.71159 time= 5.00214
Epoch: 0276 train_loss= 0.33080 train_acc= 0.85871 val_loss= 0.79556 val_acc= 0.70845 test_acc= 0.71525 time= 4.99417
Epoch: 0277 train_loss= 0.33437 train_acc= 0.85027 val_loss= 0.76795 val_acc= 0.71268 test_acc= 0.71300 time= 5.01273
Epoch: 0278 train_loss= 0.33273 train_acc= 0.85120 val_loss= 0.80859 val_acc= 0.70704 test_acc= 0.71497 time= 5.00562
Epoch: 0279 train_loss= 0.32521 train_acc= 0.85746 val_loss= 0.80111 val_acc= 0.70282 test_acc= 0.71497 time= 4.97071
Epoch: 0280 train_loss= 0.32394 train_acc= 0.85542 val_loss= 0.79826 val_acc= 0.70282 test_acc= 0.71187 time= 4.99963
Epoch: 0281 train_loss= 0.31955 train_acc= 0.85933 val_loss= 0.81747 val_acc= 0.72113 test_acc= 0.71750 time= 5.00562
Epoch: 0282 train_loss= 0.32885 train_acc= 0.86199 val_loss= 0.78439 val_acc= 0.71408 test_acc= 0.71300 time= 4.99764
Epoch: 0283 train_loss= 0.33212 train_acc= 0.85449 val_loss= 0.78911 val_acc= 0.70423 test_acc= 0.70906 time= 5.04252
Epoch: 0284 train_loss= 0.31630 train_acc= 0.85433 val_loss= 0.79538 val_acc= 0.71690 test_acc= 0.71525 time= 5.00301
Epoch: 0285 train_loss= 0.32232 train_acc= 0.85996 val_loss= 0.79981 val_acc= 0.69577 test_acc= 0.71553 time= 4.98567
Epoch: 0286 train_loss= 0.32676 train_acc= 0.85058 val_loss= 0.75683 val_acc= 0.71268 test_acc= 0.71216 time= 5.17485
Epoch: 0287 train_loss= 0.32191 train_acc= 0.85746 val_loss= 0.81409 val_acc= 0.70423 test_acc= 0.71187 time= 5.32049
Epoch: 0288 train_loss= 0.31612 train_acc= 0.86277 val_loss= 0.80182 val_acc= 0.70000 test_acc= 0.71609 time= 5.03054
Epoch: 0289 train_loss= 0.31940 train_acc= 0.86324 val_loss= 0.79205 val_acc= 0.71408 test_acc= 0.71497 time= 5.45430
Epoch: 0290 train_loss= 0.31313 train_acc= 0.86496 val_loss= 0.79241 val_acc= 0.70423 test_acc= 0.71919 time= 5.13028
Epoch: 0291 train_loss= 0.31354 train_acc= 0.86152 val_loss= 0.79377 val_acc= 0.70986 test_acc= 0.71947 time= 5.22304
Epoch: 0292 train_loss= 0.30692 train_acc= 0.86621 val_loss= 0.79869 val_acc= 0.71549 test_acc= 0.71356 time= 5.15921
Epoch: 0293 train_loss= 0.32157 train_acc= 0.86324 val_loss= 0.80658 val_acc= 0.69859 test_acc= 0.71328 time= 4.98767
Epoch: 0294 train_loss= 0.30807 train_acc= 0.86511 val_loss= 0.81952 val_acc= 0.70704 test_acc= 0.71356 time= 5.10336
Epoch: 0295 train_loss= 0.31901 train_acc= 0.86152 val_loss= 0.80780 val_acc= 0.69859 test_acc= 0.71047 time= 5.12929
Epoch: 0296 train_loss= 0.32257 train_acc= 0.85621 val_loss= 0.78455 val_acc= 0.70845 test_acc= 0.72228 time= 5.55216
Epoch: 0297 train_loss= 0.30977 train_acc= 0.86480 val_loss= 0.80951 val_acc= 0.70282 test_acc= 0.71244 time= 5.42651
Epoch: 0298 train_loss= 0.31464 train_acc= 0.85558 val_loss= 0.80726 val_acc= 0.70845 test_acc= 0.71581 time= 5.08067
Epoch: 0299 train_loss= 0.30763 train_acc= 0.86558 val_loss= 0.83737 val_acc= 0.71268 test_acc= 0.71328 time= 5.05651
Epoch: 0300 train_loss= 0.31558 train_acc= 0.86558 val_loss= 0.79260 val_acc= 0.70000 test_acc= 0.71244 time= 5.31176
Optimization Finished!
Best epoch: 263
Test set results: cost= 0.74827 accuracy= 0.71919
Test Precision, Recall and F1-Score...
             precision    recall  f1-score   support

          0     0.7013    0.7636    0.7311      1777
          1     0.7406    0.6747    0.7061      1777

avg / total     0.7209    0.7192    0.7186      3554

Macro average Test Precision, Recall and F1-Score...
(0.7209362974880018, 0.7191896454698932, 0.7186335470736362, None)
Micro average Test Precision, Recall and F1-Score...
(0.7191896454698931, 0.7191896454698931, 0.7191896454698931, None)
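(Those 4-tuples have the shape of sklearn's precision_recall_fscore_support output, i.e. (precision, recall, F1, support), where support is None when an average is requested. Presumably the script computes something like the following; the labels here are dummies for illustration:)

```python
from sklearn.metrics import precision_recall_fscore_support

# Dummy labels for illustration; the run above uses the real MR test
# labels and the model's predictions.
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

# Each call returns (precision, recall, f1, None) when averaging.
print(precision_recall_fscore_support(y_true, y_pred, average='macro'))
print(precision_recall_fscore_support(y_true, y_pred, average='micro'))
```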
