Mirror of https://github.com/iperov/DeepFaceLab.git
Synced 2024-03-22 13:10:55 +08:00, commit f1d115b63b
Basic usage instruction: https://i.imgur.com/w7LkId2.jpg

'whole_face' requires skill in Adobe After Effects.

To use whole_face, extract whole_face faces with 4) data_src extract whole_face and 5) data_dst extract whole_face. Images are extracted at 512 resolution, so they can also be used for regular full_face and half_face models.

'whole_face' covers the whole area of the face, including the forehead, in the training square, but the training mask is still 'full_face'; it therefore requires manual final masking and compositing in Adobe After Effects.

Added option 'masked_training'. This option is available only for the 'whole_face' type. Default is ON. Masked training clips the training area to the full_face mask, so the network trains the face region properly. When the face is trained enough, disable this option to train the whole area of the frame.

Merge with 'raw-rgb' mode, then use Adobe After Effects to manually mask, tune colors, and composite the whole face, including the forehead.
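The idea behind 'masked_training' can be illustrated with a minimal sketch: when the option is on, the pixel loss is restricted to the full_face mask region, so gradients only come from the masked face area; when it is off, the loss covers the whole frame. This is a hypothetical, simplified illustration using NumPy, not the actual DeepFaceLab loss code; the function name `masked_loss` and its signature are assumptions for the example.

```python
import numpy as np

def masked_loss(pred, target, mask, masked_training=True):
    # Hypothetical sketch of the masked-training idea:
    # per-pixel squared error, optionally clipped to the mask.
    diff = (pred - target) ** 2
    if masked_training:
        # Zero out loss outside the full_face mask and average
        # over the masked pixels only.
        diff = diff * mask
        return diff.sum() / max(mask.sum(), 1e-6)
    # Masked training disabled: average over the whole frame.
    return diff.mean()

# Toy example: the top half of a 4x4 "frame" is the face mask.
pred = np.zeros((4, 4))
pred[:2] = 1.0   # face region error of 1.0 per pixel
pred[2:] = 3.0   # background error of 9.0 per pixel
target = np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[:2] = 1.0

print(masked_loss(pred, target, mask))                        # face-only loss
print(masked_loss(pred, target, mask, masked_training=False)) # whole-frame loss
```

With masked training on, the large background error is ignored; turning it off exposes the network to the whole frame, matching the recommended workflow of disabling the option once the face is trained enough.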
__init__.py
FaceEnhancer.npy
FaceEnhancer.py
FaceType.py
FAN.npy
FANExtractor.py
FANSeg_256_full_face.npy
LandmarksProcessor.py
S3FD.npy
S3FDExtractor.py
TernausNet.py
vgg11_enc_weights.npy