There are artifact points in out_face.mp4 #53

Closed
EmmaThompson123 opened this issue Nov 26, 2024 · 3 comments

Comments

@EmmaThompson123

This is the ./output/macron/train/ours_None/renders/out_face.mp4 generated by python synthesize_fuse.py -S data/macron -M output/macron --use_train --audio data/macron/infer.npy:

out_face.mp4

As you can see, there are two artifact points on the lower lip.
What causes them, and how can I fix them?
BTW, what is ./output/macron/train/ours_None/gt/out.mp4? What is it for, and how is it used?

This is my training log:

bash scripts/train_xx.sh data/macron output/macron 0
Optimizing output/macron
Output folder: output/macron [26/11 17:34:16]
Found transforms_train.json file, assuming Blender data set! [26/11 17:34:17]
Reading Training Transforms [26/11 17:34:17]
7938it [00:14, 540.49it/s]
7938it [03:52, 34.08it/s]
Reading Test Transforms [26/11 17:38:26]
794it [00:01, 488.84it/s]
794it [00:23, 34.09it/s]
Generating random point cloud (10000)... [26/11 17:38:52]
Loading Training Cameras [26/11 17:38:53]
Loading Test Cameras [26/11 17:39:34]
Number of points at initialisation :  10000 [26/11 17:39:38]
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off] [26/11 17:39:38]
/opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/torchvision/models/_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
/opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=AlexNet_Weights.IMAGENET1K_V1`. You can also use `weights=AlexNet_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: /opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/lpips/weights/v0.1/alex.pth [26/11 17:39:39]
Training progress:   4%|####1                                                                                                    | 1990/50000 [00:19<06:08, 130.20it/s, Loss=0.00065, AU25=1.7-1.8]
[ITER 2000] Evaluating test: L1 0.04321341328322888 PSNR 20.388440704345705 [26/11 17:40:00]

[ITER 2000] Evaluating train: L1 0.04436415955424309 PSNR 20.231896209716798 [26/11 17:40:01]
Training progress:   8%|########4                                                                                                 | 4000/50000 [00:58<14:18, 53.56it/s, Loss=0.00014, AU25=1.6-1.8]
[ITER 4000] Evaluating test: L1 0.04326598979532719 PSNR 20.38434047698975 [26/11 17:40:38]

[ITER 4000] Evaluating train: L1 0.0443608894944191 PSNR 20.23892593383789 [26/11 17:40:39]
Training progress:  12%|############7                                                                                             | 6000/50000 [01:47<13:12, 55.51it/s, Loss=0.00013, AU25=1.5-1.8]
[ITER 6000] Evaluating test: L1 0.043344740569591526 PSNR 20.37428779602051 [26/11 17:41:28]

[ITER 6000] Evaluating train: L1 0.04442257806658745 PSNR 20.232425689697266 [26/11 17:41:30]
Training progress:  16%|################9                                                                                         | 8000/50000 [02:37<14:17, 49.00it/s, Loss=0.00016, AU25=1.4-1.8]
[ITER 8000] Evaluating test: L1 0.04341809079051018 PSNR 20.359968757629396 [26/11 17:42:17]

[ITER 8000] Evaluating train: L1 0.04446881040930748 PSNR 20.219767379760743 [26/11 17:42:18]
Training progress:  20%|#####################                                                                                    | 10000/50000 [03:24<14:12, 46.95it/s, Loss=0.00014, AU25=1.3-1.8]
[ITER 10000] Saving Gaussians [26/11 17:43:03]

[ITER 10000] Evaluating test: L1 0.043632520362734795 PSNR 20.324302101135256 [26/11 17:43:05]

[ITER 10000] Evaluating train: L1 0.04466085582971573 PSNR 20.188661193847658 [26/11 17:43:06]

[ITER 10000] Saving Checkpoint [26/11 17:43:06]
Training progress:  24%|#########################2                                                                               | 12000/50000 [04:12<17:46, 35.64it/s, Loss=0.00016, AU25=1.2-1.8]
[ITER 12000] Evaluating test: L1 0.043603584915399556 PSNR 20.320470809936523 [26/11 17:43:53]

[ITER 12000] Evaluating train: L1 0.04463521167635918 PSNR 20.183846664428714 [26/11 17:43:55]
Training progress:  28%|#############################4                                                                           | 14000/50000 [05:05<12:37, 47.50it/s, Loss=0.00021, AU25=1.1-1.8]
[ITER 14000] Evaluating test: L1 0.04363173022866249 PSNR 20.31661376953125 [26/11 17:44:45]

[ITER 14000] Evaluating train: L1 0.04466876462101937 PSNR 20.17709884643555 [26/11 17:44:47]
Training progress:  32%|#################################6                                                                       | 16000/50000 [05:57<12:10, 46.51it/s, Loss=0.00010, AU25=1.1-1.8]
[ITER 16000] Evaluating test: L1 0.04371554665267468 PSNR 20.298939514160157 [26/11 17:45:38]

[ITER 16000] Evaluating train: L1 0.04467338398098946 PSNR 20.17211036682129 [26/11 17:45:39]
Training progress:  36%|#####################################8                                                                   | 18000/50000 [06:45<10:30, 50.74it/s, Loss=0.00007, AU25=1.0-1.8]
[ITER 18000] Evaluating test: L1 0.04383750408887863 PSNR 20.27889404296875 [26/11 17:46:26]

[ITER 18000] Evaluating train: L1 0.04490960687398911 PSNR 20.13560256958008 [26/11 17:46:28]
Training progress:  40%|##########################################                                                               | 20000/50000 [07:30<10:43, 46.61it/s, Loss=0.00008, AU25=0.9-1.8]
[ITER 20000] Saving Gaussians [26/11 17:47:10]

[ITER 20000] Evaluating test: L1 0.043879886716604234 PSNR 20.266443252563477 [26/11 17:47:11]

[ITER 20000] Evaluating train: L1 0.0449658177793026 PSNR 20.121083450317386 [26/11 17:47:12]

[ITER 20000] Saving Checkpoint [26/11 17:47:12]
Training progress:  44%|##############################################2                                                          | 22000/50000 [08:19<10:31, 44.35it/s, Loss=0.00015, AU25=0.8-1.8]
[ITER 22000] Evaluating test: L1 0.04398515895009041 PSNR 20.245458030700686 [26/11 17:48:00]

[ITER 22000] Evaluating train: L1 0.045025700330734254 PSNR 20.107759857177737 [26/11 17:48:02]
Training progress:  48%|##################################################4                                                      | 24000/50000 [09:06<08:29, 51.07it/s, Loss=0.00010, AU25=0.7-1.8]
[ITER 24000] Evaluating test: L1 0.043808186426758766 PSNR 20.278767585754395 [26/11 17:48:47]

[ITER 24000] Evaluating train: L1 0.0448475182056427 PSNR 20.13943862915039 [26/11 17:48:48]
Training progress:  52%|######################################################6                                                  | 26000/50000 [09:57<07:14, 55.24it/s, Loss=0.00007, AU25=0.6-1.8]
[ITER 26000] Evaluating test: L1 0.0437110859900713 PSNR 20.292341613769533 [26/11 17:49:38]

[ITER 26000] Evaluating train: L1 0.044771794229745865 PSNR 20.151067352294923 [26/11 17:49:39]
Training progress:  56%|##########################################################8                                              | 28000/50000 [10:45<06:51, 53.45it/s, Loss=0.00007, AU25=0.5-1.8]
[ITER 28000] Evaluating test: L1 0.04393263198435307 PSNR 20.2498743057251 [26/11 17:50:25]

[ITER 28000] Evaluating train: L1 0.04497582092881203 PSNR 20.11314582824707 [26/11 17:50:26]
Training progress:  60%|###############################################################                                          | 30000/50000 [11:27<07:52, 42.36it/s, Loss=0.00016, AU25=0.5-1.8]
[ITER 30000] Saving Gaussians [26/11 17:51:07]

[ITER 30000] Evaluating test: L1 0.0439519613981247 PSNR 20.25190315246582 [26/11 17:51:08]

[ITER 30000] Evaluating train: L1 0.045012725889682775 PSNR 20.110002899169924 [26/11 17:51:09]

[ITER 30000] Saving Checkpoint [26/11 17:51:09]
Training progress:  64%|###################################################################2                                     | 32000/50000 [12:17<05:40, 52.84it/s, Loss=0.00027, AU25=0.4-1.8]
[ITER 32000] Evaluating test: L1 0.04398152679204941 PSNR 20.24061851501465 [26/11 17:51:58]

[ITER 32000] Evaluating train: L1 0.04504002630710602 PSNR 20.098149490356448 [26/11 17:51:59]
Training progress:  68%|#######################################################################4                                 | 34000/50000 [13:09<06:54, 38.56it/s, Loss=0.00011, AU25=0.3-1.8]
[ITER 34000] Evaluating test: L1 0.04393311552703381 PSNR 20.2568359375 [26/11 17:52:51]

[ITER 34000] Evaluating train: L1 0.04500926658511162 PSNR 20.114342880249026 [26/11 17:52:52]
Training progress:  72%|###########################################################################6                             | 36000/50000 [13:56<05:26, 42.90it/s, Loss=0.00009, AU25=0.2-1.8]
[ITER 36000] Evaluating test: L1 0.04395270720124245 PSNR 20.251499748229982 [26/11 17:53:37]

[ITER 36000] Evaluating train: L1 0.04503850042819977 PSNR 20.106470489501955 [26/11 17:53:38]
Training progress:  76%|###############################################################################8                         | 38000/50000 [14:44<04:44, 42.13it/s, Loss=0.00014, AU25=0.1-1.8]
[ITER 38000] Evaluating test: L1 0.04389232993125916 PSNR 20.2657922744751 [26/11 17:54:25]

[ITER 38000] Evaluating train: L1 0.0449357308447361 PSNR 20.126680755615237 [26/11 17:54:26]
Training progress:  80%|####################################################################################                     | 40000/50000 [15:33<03:45, 44.29it/s, Loss=0.00015, AU25=0.0-1.8]
[ITER 40000] Saving Gaussians [26/11 17:55:12]

[ITER 40000] Evaluating test: L1 0.04402610696852208 PSNR 20.22906360626221 [26/11 17:55:14]

[ITER 40000] Evaluating train: L1 0.045116475969553 PSNR 20.079296875 [26/11 17:55:15]

[ITER 40000] Saving Checkpoint [26/11 17:55:15]
Training progress:  84%|#######################################################################################3                | 42000/50000 [16:22<03:42, 35.98it/s, Loss=0.00004, AU25=-0.1-1.8]
[ITER 42000] Evaluating test: L1 0.04405398815870285 PSNR 20.22461395263672 [26/11 17:56:03]

[ITER 42000] Evaluating train: L1 0.04515979215502739 PSNR 20.074445724487305 [26/11 17:56:04]
Training progress:  88%|###########################################################################################5            | 44000/50000 [17:10<01:51, 53.99it/s, Loss=0.00009, AU25=-0.1-1.8]
[ITER 44000] Evaluating test: L1 0.043937151134014134 PSNR 20.251065444946292 [26/11 17:56:51]

[ITER 44000] Evaluating train: L1 0.045021121948957445 PSNR 20.10339279174805 [26/11 17:56:52]
Training progress:  92%|###############################################################################################6        | 46000/50000 [17:54<01:20, 49.51it/s, Loss=0.00006, AU25=-0.2-1.8]
[ITER 46000] Evaluating test: L1 0.04410051926970482 PSNR 20.221428298950197 [26/11 17:57:36]

[ITER 46000] Evaluating train: L1 0.04521547257900238 PSNR 20.0704647064209 [26/11 17:57:37]
Training progress:  96%|###################################################################################################8    | 48000/50000 [18:35<00:45, 44.25it/s, Loss=0.00005, AU25=-0.3-1.8]
[ITER 48000] Evaluating test: L1 0.04497724808752537 PSNR 20.029319763183594 [26/11 17:58:17]

[ITER 48000] Evaluating train: L1 0.04609552398324013 PSNR 19.88223342895508 [26/11 17:58:18]
Training progress: 100%|########################################################################################################| 50000/50000 [19:22<00:00, 43.03it/s, Loss=0.00020, AU25=-0.4-1.8]

[ITER 50000] Saving Gaussians [26/11 17:59:01]

[ITER 50000] Evaluating test: L1 0.044975836202502256 PSNR 20.041740608215335 [26/11 17:59:03]

[ITER 50000] Evaluating train: L1 0.046054369211196905 PSNR 19.898664855957033 [26/11 17:59:04]

[ITER 50000] Saving Checkpoint [26/11 17:59:04]

Training complete. [26/11 17:59:04]
Optimizing output/macron
Output folder: output/macron [26/11 17:59:10]
Found transforms_train.json file, assuming Blender data set! [26/11 17:59:11]
Reading Training Transforms [26/11 17:59:11]
7938it [00:13, 606.00it/s]
7938it [03:38, 36.37it/s]
Reading Test Transforms [26/11 18:03:04]
794it [00:00, 810.53it/s] 
794it [00:26, 29.94it/s]
Generating random point cloud (2000)... [26/11 18:03:32]
Loading Training Cameras [26/11 18:03:32]
Loading Test Cameras [26/11 18:04:10]
Number of points at initialisation :  2000 [26/11 18:04:13]
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off] [26/11 18:04:14]
/opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/torchvision/models/_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
/opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=AlexNet_Weights.IMAGENET1K_V1`. You can also use `weights=AlexNet_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: /opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/lpips/weights/v0.1/alex.pth [26/11 18:04:14]
Training progress:   4%|####1                                                                                                   | 2000/50000 [00:18<09:44, 82.06it/s, Loss=0.01836, Mouth=2.2-10.8]
[ITER 2000] Evaluating test: L1 0.008030179422348738 PSNR 34.0363261574193 [26/11 18:04:36]

[ITER 2000] Evaluating train: L1 0.008198659308254719 PSNR 33.690686035156254 [26/11 18:04:38]
Training progress:   8%|########3                                                                                               | 4000/50000 [01:00<16:43, 45.86it/s, Loss=0.01250, Mouth=3.2-11.9]
[ITER 4000] Evaluating test: L1 0.00751017514420183 PSNR 33.847042284513776 [26/11 18:05:18]

[ITER 4000] Evaluating train: L1 0.007729826774448157 PSNR 33.73421859741211 [26/11 18:05:21]
Training progress:  12%|############4                                                                                           | 6000/50000 [01:55<13:41, 53.54it/s, Loss=0.01290, Mouth=4.3-13.0]
[ITER 6000] Evaluating test: L1 0.007266176006707706 PSNR 33.55537866291247 [26/11 18:06:13]

[ITER 6000] Evaluating train: L1 0.007530601508915425 PSNR 33.529770660400395 [26/11 18:06:15]
Training progress:  16%|################6                                                                                       | 8000/50000 [02:43<12:29, 56.07it/s, Loss=0.01113, Mouth=5.4-14.0]
[ITER 8000] Evaluating test: L1 0.00694055370006122 PSNR 33.95660721628289 [26/11 18:07:01]

[ITER 8000] Evaluating train: L1 0.007104572281241417 PSNR 33.99801254272461 [26/11 18:07:05]
Training progress:  20%|####################6                                                                                  | 10000/50000 [03:42<13:30, 49.36it/s, Loss=0.01132, Mouth=6.5-15.1]
[ITER 10000] Evaluating test: L1 0.006972186463443856 PSNR 33.95093937924034 [26/11 18:08:01]

[ITER 10000] Evaluating train: L1 0.007383815851062537 PSNR 33.83737335205078 [26/11 18:08:04]

[ITER 10000] Saving Gaussians [26/11 18:08:04]

[ITER 10000] Saving Checkpoint [26/11 18:08:05]
Training progress:  24%|########################7                                                                              | 12000/50000 [04:42<12:43, 49.79it/s, Loss=0.01234, Mouth=7.6-16.2]
[ITER 12000] Evaluating test: L1 0.006809828631383808 PSNR 34.635341945447415 [26/11 18:09:00]

[ITER 12000] Evaluating train: L1 0.007021050807088614 PSNR 34.63308792114258 [26/11 18:09:04]
Training progress:  28%|############################8                                                                          | 14000/50000 [05:41<12:19, 48.70it/s, Loss=0.01643, Mouth=8.6-17.3]
[ITER 14000] Evaluating test: L1 0.006701016578039056 PSNR 34.59437520880448 [26/11 18:09:58]

[ITER 14000] Evaluating train: L1 0.007259354926645756 PSNR 34.37795867919922 [26/11 18:10:03]
Training progress:  32%|################################9                                                                      | 16000/50000 [06:35<14:13, 39.83it/s, Loss=0.01017, Mouth=9.7-18.4]
[ITER 16000] Evaluating test: L1 0.0066763861104846 PSNR 34.70490967599969 [26/11 18:10:53]

[ITER 16000] Evaluating train: L1 0.00684666782617569 PSNR 34.85522232055664 [26/11 18:10:56]
Training progress:  36%|####################################7                                                                 | 18000/50000 [07:34<12:24, 43.01it/s, Loss=0.01279, Mouth=10.8-19.4]
[ITER 18000] Evaluating test: L1 0.006707222127404652 PSNR 34.53730031063682 [26/11 18:11:51]

[ITER 18000] Evaluating train: L1 0.006862495094537735 PSNR 34.95119400024414 [26/11 18:11:54]
Training progress:  40%|########################################8                                                             | 20000/50000 [08:25<10:39, 46.94it/s, Loss=0.01106, Mouth=11.9-20.5]
[ITER 20000] Evaluating test: L1 0.006605773428945165 PSNR 34.84388010125411 [26/11 18:12:44]

[ITER 20000] Evaluating train: L1 0.006918946281075478 PSNR 34.71606369018555 [26/11 18:12:47]

[ITER 20000] Saving Gaussians [26/11 18:12:47]

[ITER 20000] Saving Checkpoint [26/11 18:12:47]
Training progress:  44%|############################################8                                                         | 22000/50000 [09:23<08:34, 54.45it/s, Loss=0.01003, Mouth=13.0-21.6]
[ITER 22000] Evaluating test: L1 0.006681662346971662 PSNR 34.698546961734166 [26/11 18:13:40]

[ITER 22000] Evaluating train: L1 0.006769489590078593 PSNR 35.03809509277344 [26/11 18:13:43]
Training progress:  48%|################################################9                                                     | 24000/50000 [10:18<11:08, 38.92it/s, Loss=0.01359, Mouth=14.0-22.7]
[ITER 24000] Evaluating test: L1 0.006563693358513869 PSNR 34.998518090499076 [26/11 18:14:36]

[ITER 24000] Evaluating train: L1 0.006773597653955222 PSNR 35.02405853271485 [26/11 18:14:38]
Training progress:  52%|#####################################################                                                 | 26000/50000 [11:11<10:58, 36.43it/s, Loss=0.01209, Mouth=15.1-23.8]
[ITER 26000] Evaluating test: L1 0.006563915037795117 PSNR 34.959510000128496 [26/11 18:15:28]

[ITER 26000] Evaluating train: L1 0.006881114374846221 PSNR 34.90394668579102 [26/11 18:15:32]
Training progress:  56%|#########################################################1                                            | 28000/50000 [12:05<09:19, 39.31it/s, Loss=0.01488, Mouth=16.2-24.8]
[ITER 28000] Evaluating test: L1 0.006538670166934791 PSNR 34.9542101809853 [26/11 18:16:23]

[ITER 28000] Evaluating train: L1 0.006908524595201016 PSNR 34.73491668701172 [26/11 18:16:27]
Training progress:  60%|#############################################################2                                        | 30000/50000 [13:01<08:07, 41.01it/s, Loss=0.01160, Mouth=17.3-25.9]
[ITER 30000] Evaluating test: L1 0.00651365359264769 PSNR 34.9791972511693 [26/11 18:17:18]

[ITER 30000] Evaluating train: L1 0.006659545004367828 PSNR 35.03929443359375 [26/11 18:17:21]

[ITER 30000] Saving Gaussians [26/11 18:17:21]

[ITER 30000] Saving Checkpoint [26/11 18:17:21]
Training progress:  64%|#################################################################2                                    | 32000/50000 [13:52<06:08, 48.86it/s, Loss=0.01158, Mouth=18.4-27.0]
[ITER 32000] Evaluating test: L1 0.006511228722765257 PSNR 34.94422792133532 [26/11 18:18:09]

[ITER 32000] Evaluating train: L1 0.006873730570077896 PSNR 34.897884368896484 [26/11 18:18:12]
Training progress:  68%|#####################################################################3                                | 34000/50000 [14:48<07:20, 36.32it/s, Loss=0.01151, Mouth=19.4-28.1]
[ITER 34000] Evaluating test: L1 0.006488516608155087 PSNR 34.96477207384611 [26/11 18:19:06]

[ITER 34000] Evaluating train: L1 0.006659877672791481 PSNR 35.02527770996094 [26/11 18:19:08]
Training progress:  72%|#########################################################################4                            | 36000/50000 [15:44<06:09, 37.90it/s, Loss=0.01115, Mouth=20.5-29.2]
[ITER 36000] Evaluating test: L1 0.006425490970478245 PSNR 35.095076310007194 [26/11 18:20:02]

[ITER 36000] Evaluating train: L1 0.006827933248132467 PSNR 34.99650497436524 [26/11 18:20:04]
Training progress:  76%|#############################################################################5                        | 38000/50000 [16:43<04:05, 48.78it/s, Loss=0.01098, Mouth=21.6-30.2]
[ITER 38000] Evaluating test: L1 0.006310129562686932 PSNR 35.28989450555098 [26/11 18:21:00]

[ITER 38000] Evaluating train: L1 0.006741211377084256 PSNR 35.04918899536133 [26/11 18:21:03]
Training progress:  80%|#################################################################################6                    | 40000/50000 [17:41<03:45, 44.30it/s, Loss=0.00976, Mouth=22.7-31.3]
[ITER 40000] Evaluating test: L1 0.006456462053680106 PSNR 35.10747789081774 [26/11 18:21:58]

[ITER 40000] Evaluating train: L1 0.00666232854127884 PSNR 35.1264030456543 [26/11 18:22:01]

[ITER 40000] Saving Gaussians [26/11 18:22:01]

[ITER 40000] Saving Checkpoint [26/11 18:22:02]
Training progress:  84%|#####################################################################################6                | 42000/50000 [18:33<02:28, 53.76it/s, Loss=0.01280, Mouth=23.8-32.4]
[ITER 42000] Evaluating test: L1 0.006408446100785544 PSNR 35.120523151598476 [26/11 18:22:50]

[ITER 42000] Evaluating train: L1 0.0066517236642539505 PSNR 35.17534408569336 [26/11 18:22:53]
Training progress:  88%|#########################################################################################7            | 44000/50000 [19:23<02:25, 41.28it/s, Loss=0.00932, Mouth=24.8-33.5]
[ITER 44000] Evaluating test: L1 0.006419359363223377 PSNR 35.16974720201994 [26/11 18:23:41]

[ITER 44000] Evaluating train: L1 0.006736644543707371 PSNR 35.115429687500004 [26/11 18:23:43]
Training progress:  92%|#############################################################################################8        | 46000/50000 [20:17<01:29, 44.94it/s, Loss=0.01371, Mouth=25.9-34.6]
[ITER 46000] Evaluating test: L1 0.006372460580774043 PSNR 35.2029501262464 [26/11 18:24:35]

[ITER 46000] Evaluating train: L1 0.0066849195398390295 PSNR 35.16448059082031 [26/11 18:24:37]
Training progress:  96%|#################################################################################################9    | 48000/50000 [21:16<01:00, 32.80it/s, Loss=0.01879, Mouth=27.0-35.6]
[ITER 48000] Evaluating test: L1 0.006398710177132958 PSNR 35.453636972527754 [26/11 18:25:35]

[ITER 48000] Evaluating train: L1 0.0064405609853565695 PSNR 35.55682449340821 [26/11 18:25:37]
Training progress: 100%|######################################################################################################| 50000/50000 [22:23<00:00, 37.22it/s, Loss=0.01883, Mouth=28.1-36.7]

[ITER 50000] Evaluating test: L1 0.006333483186991591 PSNR 35.463139182642884 [26/11 18:26:41]

[ITER 50000] Evaluating train: L1 0.006608067732304335 PSNR 35.392144775390626 [26/11 18:26:44]

[ITER 50000] Saving Gaussians [26/11 18:26:44]

[ITER 50000] Saving Checkpoint [26/11 18:26:45]

Training complete. [26/11 18:26:45]
Optimizing output/macron
Output folder: output/macron [26/11 18:26:52]
Found transforms_train.json file, assuming Blender data set! [26/11 18:26:53]
Reading Training Transforms [26/11 18:26:53]
7938it [00:14, 556.06it/s]
7938it [03:42, 35.70it/s]
Reading Test Transforms [26/11 18:30:50]
794it [00:00, 869.59it/s]
794it [00:21, 37.35it/s]
Generating random point cloud (10000)... [26/11 18:31:13]
Loading Training Cameras [26/11 18:31:14]
Loading Test Cameras [26/11 18:31:49]
Number of points at initialisation :  10000 [26/11 18:31:51]
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off] [26/11 18:31:52]
/opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/torchvision/models/_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
/opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=AlexNet_Weights.IMAGENET1K_V1`. You can also use `weights=AlexNet_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: /opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/lpips/weights/v0.1/alex.pth [26/11 18:31:53]
Training progress: 100%|#######################################################################################################################| 10000/10000 [02:47<00:00, 59.56it/s, Loss=0.01402]

[ITER 10000] Saving Checkpoint [26/11 18:34:41]

Training complete. [26/11 18:34:41]
Looking for config file in output/macron/cfg_args
Config file found: output/macron/cfg_args
Rendering output/macron
Found transforms_train.json file, assuming Blender data set! [26/11 18:34:47]
Reading Test Transforms [26/11 18:34:47]
794it [00:01, 747.12it/s] 
794it [00:23, 34.01it/s]
Generating random point cloud (10000)... [26/11 18:35:12]
Loading Training Cameras [26/11 18:35:12]
Loading Test Cameras [26/11 18:35:20]
Number of points at initialisation :  10000 [26/11 18:35:23]
Rendering progress: 100%|########################################################################################################################################| 794/794 [00:10<00:00, 77.79it/s]
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off]
/opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/torchvision/models/_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
/opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=AlexNet_Weights.IMAGENET1K_V1`. You can also use `weights=AlexNet_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: /opt/conda/envs/talking_gaussian/lib/python3.7/site-packages/lpips/weights/v0.1/alex.pth
100
200
300
400
500
600
700
LMD (fan) = 2.424562
PSNR = 35.592678
LPIPS (alex) = 0.019378
@Fictionarry
Owner

These noisy points are spurious Gaussians created during the LPIPS refinement stage. You can mitigate this by lowering the weight at this line to a smaller value, or setting it directly to zero, though the high-frequency details at the lips may become weaker.

loss += 0.01 * lpips_criterion(image_t.clone()[:, xmin:xmax, ymin:ymax] * 2 - 1, gt_image_t.clone()[:, xmin:xmax, ymin:ymax] * 2 - 1).mean()
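
For instance, a minimal sketch (assuming the surrounding training loop already defines image_t, gt_image_t, lpips_criterion, and the crop bounds) of how that weight could be pulled out into a variable and lowered or disabled:

```python
# Sketch only, not the repo's exact code: expose the LPIPS weight as a variable
# so it can be lowered (e.g. 0.001) or disabled (0.0) without editing the literal 0.01.
lpips_weight = 0.001  # original value: 0.01
if lpips_weight > 0:
    loss += lpips_weight * lpips_criterion(
        image_t.clone()[:, xmin:xmax, ymin:ymax] * 2 - 1,
        gt_image_t.clone()[:, xmin:xmax, ymin:ymax] * 2 - 1,
    ).mean()
```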

We also found some potential solutions, such as adding the opacity reset operation borrowed from the original 3DGS. You can try that as well.
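
For reference, here is a minimal sketch of what such an opacity reset could look like, modeled on the original 3DGS repo; whether this fork keeps inverse_sigmoid, get_opacity, and replace_tensor_to_optimizer unchanged is an assumption, so adapt the names to the local GaussianModel:

```python
import torch
from utils.general_utils import inverse_sigmoid  # location as in the original 3DGS repo

def reset_opacity(gaussians, cap=0.01):
    # Clamp every Gaussian's opacity down to `cap`; Gaussians that never re-learn a
    # higher opacity are then removed by the regular pruning step. This sketch assumes
    # the 3DGS-style helpers get_opacity and replace_tensor_to_optimizer are available.
    opacities_new = inverse_sigmoid(
        torch.min(gaussians.get_opacity, torch.ones_like(gaussians.get_opacity) * cap)
    )
    optimizable_tensors = gaussians.replace_tensor_to_optimizer(opacities_new, "opacity")
    gaussians._opacity = optimizable_tensors["opacity"]
```

In the original 3DGS this reset is invoked periodically during densification (every opacity_reset_interval iterations, 3000 by default).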

./output/macron/train/ours_None/gt/out.mp4 is the ground-truth video corresponding to the synthesized talking head; it is used for evaluation.

@EmmaThompson123
Author

@Fictionarry I changed the LPIPS weight from 0.01 to 0.001, and these two noisy points seem to have disappeared!

BTW, how can I visualize the output 3DGS head model? I downloaded the pre-built Windows binaries from here and ran F:\FDMDownload\viewers\bin>SIBR_gaussianViewer_app.exe --m F:\FDMDownload\TalkingGaussian_output_macron, but the log says:

[SIBR] --  INFOS  --:   Initialization of GLFW
[SIBR] --  INFOS  --:   OpenGL Version: 4.6.0 NVIDIA 537.58[major: 4, minor: 6]
[SIBR] ##  ERROR  ##:   FILE C:\Users\alanvin.AD\Repos\Github\3dgs\SIBR_viewers\src\core\scene\ParseData.cpp
                        LINE 560, FUNC sibr::ParseData::getParsedData
                        Cannot determine type of dataset at //ckptstorage/user/repo/3DGS/TalkingGaussian/data/macron
[SIBR] --  INFOS  --:   Did not find specified input folder, loading from model path
Number of input Images to read: 1284
Number of Cameras set up: 1284
[SIBR] --  INFOS  --:   Mesh contains: colors: 1, normals: 1, texcoords: 0
[SIBR] --  INFOS  --:   Mesh 'F:\FDMDownload\TalkingGaussian_output_macron/input.ply successfully loaded. 1 meshes were loaded with a total of  (0) faces and  (10000) vertices detected. Init GL ...
[SIBR] --  INFOS  --:   Init GL mesh complete
Warning: GLParameter user_color does not exist in shader PointBased
[SIBR] ##  ERROR  ##:   FILE C:\Users\alanvin.AD\Repos\Github\3dgs\SIBR_viewers\src\projects\gaussianviewer\renderer\GaussianView.cpp
                        LINE 82, FUNC loadPly
                        Unable to find model's PLY file, attempted:F:\FDMDownload\TalkingGaussian_output_macron/point_cloud//point_cloud.ply

This is the content of the TalkingGaussian_output_macron folder:
(screenshot of folder contents)
and this is the content of the point_cloud folder:
(screenshot of folder contents)

@Fictionarry
Owner

> @Fictionarry I changed the LPIPS weight from 0.01 to 0.001, and these two noisy points seem to have disappeared!
>
> BTW, how can I visualize the output 3DGS head model? I downloaded the pre-built Windows binaries from here and ran F:\FDMDownload\viewers\bin>SIBR_gaussianViewer_app.exe --m F:\FDMDownload\TalkingGaussian_output_macron, but the log says:

The 3DGS viewer looks for the point cloud under a formatted path name such as point_cloud/iteration_xxxx, whereas the point cloud path in our project carries an additional postfix. You can rename the folder manually, or use a convenient online tool for visualization, such as https://github.com/playcanvas/supersplat .
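
For example, a small helper that copies the postfixed folder to the plain name the viewer searches for; both folder names below are placeholders based on the error log above, so substitute whatever your point_cloud directory actually contains:

```python
import shutil
from pathlib import Path

# Sketch only: copy the postfixed point-cloud folder to the plain
# point_cloud/iteration_XXXXX name that SIBR_gaussianViewer_app expects.
# Both folder names below are hypothetical -- check your own output directory first.
model_dir = Path(r"F:\FDMDownload\TalkingGaussian_output_macron")
src = model_dir / "point_cloud" / "iteration_50000_face"   # hypothetical postfixed name
dst = model_dir / "point_cloud" / "iteration_50000"        # name the viewer looks for
shutil.copytree(src, dst, dirs_exist_ok=True)
```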

Notably, the saved point cloud is the one before deformation, so its geometry may look less reasonable. You can obtain a deformed one by adding this line after a single rendering pass.

gaussians.save_deformed_ply(gaussians.get_xyz + render_pkg['motion']['d_xyz'], gaussians._scaling + render_pkg['motion']['d_scale'], gaussians._rotation + render_pkg['motion']['d_rot'], save_path)
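
A rough sketch of where that call could go, assuming a 3DGS-style render(...) call whose returned dict is the render_pkg referenced above; the exact argument names and the output path are assumptions, not this repo's confirmed API:

```python
import os

# Sketch: render one frame, then dump the deformed Gaussians next to the model.
# render(), scene, pipeline and background follow the usual 3DGS calling convention
# and may differ slightly in this repo; save_path is a hypothetical output location.
viewpoint = scene.getTestCameras()[0]
render_pkg = render(viewpoint, gaussians, pipeline, background)
save_path = os.path.join(args.model_path, "point_cloud_deformed.ply")
gaussians.save_deformed_ply(
    gaussians.get_xyz + render_pkg['motion']['d_xyz'],
    gaussians._scaling + render_pkg['motion']['d_scale'],
    gaussians._rotation + render_pkg['motion']['d_rot'],
    save_path,
)
```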
