Now you can replace the head.
Example: https://www.youtube.com/watch?v=xr5FHd0AdlQ
Requirements:
Post-processing skills in Adobe After Effects or DaVinci Resolve.
Usage:
1) Find suitable dst footage with a monotonous background behind the head
2) Use the “extract head” script
3) Gather a rich src headset from only one scene (same color and haircut)
4) Mask the whole head for src and dst using the XSeg editor
5) Train XSeg
6) Apply the trained XSeg mask to the src and dst headsets
7) Train SAEHD using the ‘head’ face_type as a regular deepfake model with the DF architecture. You can use a pretrained model for head. The minimum recommended resolution for head is 224.
8) Extract multiple tracks using the Merger:
a. Raw-rgb
b. XSeg-prd mask
c. XSeg-dst mask
9) In Adobe After Effects or DaVinci Resolve:
a. Hide the source head using the XSeg-prd mask: content-aware fill, clone stamp, background retraction, or another technique
b. Overlay the new head using the XSeg-dst mask (see the compositing sketch below)
Warning: a head faceset can be used for whole_face or narrower face types of training only with XSeg masking.
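The overlay in step 9b is a plain alpha composite. Below is a minimal Python/NumPy sketch of that math under assumed inputs (a background plate with the original head already hidden, the raw-rgb track, and the XSeg-dst mask track loaded as float arrays); the names are illustrative and not part of DeepFaceLab.

import numpy as np

def overlay_head(background, raw_rgb, dst_mask):
    # background: HxWx3 float32 dst frame with the original head already hidden
    # raw_rgb:    HxWx3 float32 raw-rgb track from the Merger
    # dst_mask:   HxWx1 float32 XSeg-dst mask track, scaled to [0, 1]
    return raw_rgb * dst_mask + background * (1.0 - dst_mask)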
XSegEditor: added button ‘view trained XSeg mask’, so you can see which frames should be masked to improve mask quality.
New help messages for these options:
Face style power
Learn the color of the predicted face to be the same as dst inside the mask.
If you want to use this option with 'whole_face', you have to use the XSeg trained mask.
Warning: enable it only after 10k iterations, when the predicted face is clear enough to start learning the style.
Start from a value of 0.001 and check the history for changes.
Enabling this option increases the chance of model collapse.
Background style power
Learn the area of the predicted face outside the mask to be the same as dst.
If you want to use this option with 'whole_face', you have to use the XSeg trained mask.
This can make the face more like dst.
Enabling this option increases the chance of model collapse. A typical value is 2.0.
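For intuition, here is a rough Python sketch of what such style terms can look like: the color statistics of the predicted face inside the mask (face style) or outside it (background style) are pushed toward dst's. This is only an illustrative approximation under assumed inputs (float images and a soft HxWx1 mask in [0, 1]); it is not DeepFaceLab's actual loss code.

import numpy as np

def masked_stats(img, mask):
    # Per-channel mean/std of an HxWx3 image, weighted by a soft HxWx1 mask.
    w = mask / (mask.sum() + 1e-8)
    mean = (img * w).sum(axis=(0, 1))
    var = (((img - mean) ** 2) * w).sum(axis=(0, 1))
    return mean, np.sqrt(var + 1e-8)

def style_terms(pred, dst, mask, face_power=0.001, bg_power=2.0):
    # Face style: match the predicted face's color statistics to dst inside the mask.
    pm, ps = masked_stats(pred, mask)
    dm, ds = masked_stats(dst, mask)
    face_loss = face_power * (np.abs(pm - dm).mean() + np.abs(ps - ds).mean())
    # Background style: the same idea for the area outside the mask.
    pm, ps = masked_stats(pred, 1.0 - mask)
    dm, ds = masked_stats(dst, 1.0 - mask)
    bg_loss = bg_power * (np.abs(pm - dm).mean() + np.abs(ps - ds).mean())
    return face_loss + bg_loss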
5.XSeg) data_dst/src mask for XSeg trainer - fetch.bat
Copies faces containing XSeg polygons to the aligned_xseg\ directory.
Useful only if you want to collect labeled faces and reuse them in other fakes.
Now you can use the trained XSeg mask in the SAEHD training process.
This means the default ‘full_face’ mask obtained from landmarks will be replaced with the mask obtained from the trained XSeg model (see the sketch below).
Use:
5.XSeg.optional) trained mask for data_dst/data_src - apply.bat
5.XSeg.optional) trained mask for data_dst/data_src - remove.bat
Normally you don’t need it. You can use it if you want to use ‘face_style’ and ‘bg_style’ with obstructions.
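To illustrate what “replaced” means above: by default the training mask is the convex hull of the facial landmarks, and after applying a trained XSeg mask to a faceset that learned mask is used instead. A minimal sketch with hypothetical names (DeepFaceLab's internals differ):

import cv2
import numpy as np

def landmark_hull_mask(landmarks, h, w):
    # Default 'full_face'-style mask: convex hull of the facial landmark points.
    mask = np.zeros((h, w), dtype=np.float32)
    hull = cv2.convexHull(landmarks.astype(np.int32))
    cv2.fillConvexPoly(mask, hull, 1.0)
    return mask

def training_mask(landmarks, h, w, xseg_mask=None):
    # If an XSeg mask was applied to the faceset, it replaces the landmark hull.
    return xseg_mask if xseg_mask is not None else landmark_hull_mask(landmarks, h, w)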
XSeg trainer: now you can choose the type of face.
XSeg trainer: now you can restart training in “override settings”.
Merger: XSeg-* modes can now be used with all face types.
Therefore the old MaskEditor, FANSEG models, and FAN-x modes have been removed,
because the new XSeg solution is better, simpler, and more convenient, costing only about one hour of manual masking for a regular deepfake.
Here is the new whole_face + XSeg workflow:
With the XSeg model you can train your own mask segmentator for dst (and/or src) faces,
which will be used by the merger for whole_face.
Instead of using a pretrained segmentator model (which does not exist),
you control which parts of the faces should be masked.
New scripts:
5.XSeg) data_dst edit masks.bat
5.XSeg) data_src edit masks.bat
5.XSeg) train.bat
Usage:
Unpack the dst faceset if it is packed.
Run 5.XSeg) data_dst edit masks.bat
Read the tooltips on the buttons (en/ru/zh languages are supported).
Mask the face using include or exclude polygon mode.
Repeat for 50-100 faces.
!!! You don't need to mask every frame of dst,
only frames where the face differs significantly,
for example:
closed eyes
changed head direction
changed light
The more varied the faces you mask, the better quality you will get.
Start masking from the upper left area and follow the clockwise direction.
Keep the same logic of masking for all frames, for example:
the same approximated jaw line for side faces where the jaw is not visible
the same hair line
Mask obstructions using exclude polygon mode (see the polygon sketch after these steps).
Run 5.XSeg) train.bat
Train the model.
Check the faces in the 'XSeg dst faces' preview.
If some faces have a wrong or glitchy mask, repeat these steps:
run the editor
find these glitchy faces and mask them
train further or restart training from scratch
Restarting training of the XSeg model from scratch is only possible by deleting all 'model\XSeg_*' files.
If you want to get the mask of the predicted face (XSeg-prd mode) in the merger,
you should repeat the same steps for the src faceset.
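The include/exclude polygons drawn in the editor boil down to a simple rasterization: include polygons fill the mask, exclude polygons cut holes in it. A minimal sketch with illustrative names (not XSegEditor's actual code):

import cv2
import numpy as np

def polygons_to_mask(include_polys, exclude_polys, h, w):
    # include_polys / exclude_polys: lists of Nx2 point arrays in image coordinates.
    mask = np.zeros((h, w), dtype=np.uint8)
    for poly in include_polys:      # areas to keep (the face/head)
        cv2.fillPoly(mask, [np.int32(poly)], 1)
    for poly in exclude_polys:      # obstructions drawn in exclude mode
        cv2.fillPoly(mask, [np.int32(poly)], 0)
    return mask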
New mask modes available in merger for whole_face:
XSeg-prd - XSeg mask of predicted face -> faces from src faceset should be labeled
XSeg-dst - XSeg mask of dst face -> faces from dst faceset should be labeled
XSeg-prd*XSeg-dst - the smallest area of both
If the workspace\model folder contains a trained XSeg model, the merger will use it;
otherwise you will get a transparent mask when using XSeg-* modes.
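For the combined mode, “the smallest area of both” amounts to keeping, per pixel, only what both masks cover. A tiny illustrative sketch, assuming soft masks in [0, 1]:

import numpy as np

def xseg_prd_x_dst(prd_mask, dst_mask):
    # Keep only the region covered by both the XSeg-prd and XSeg-dst masks.
    return np.minimum(prd_mask, dst_mask)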
Some screenshots:
XSegEditor: https://i.imgur.com/7Bk4RRV.jpg
trainer : https://i.imgur.com/NM1Kn3s.jpg
merger : https://i.imgur.com/glUzFQ8.jpg
example of a fake using 13 segmented dst faces: https://i.imgur.com/wmvyizU.gifv