Deepfake technology has great potential in media and entertainment, but it also poses serious risks, including privacy leakage and identity fraud. To counter these threats, proactive watermarking methods have emerged that embed invisible signals into images for active protection. However, existing approaches are often vulnerable to watermark destruction under malicious distortions, leaving them insufficiently robust; conversely, stronger embedding degrades image quality, making it difficult to balance robustness and imperceptibility.
To solve these problems, we propose WaveGuard, a proactive watermarking framework that combines frequency-domain embedding with graph-based structural consistency optimization. Watermarks are embedded into high-frequency sub-bands using the dual-tree complex wavelet transform (DT-CWT) to enhance robustness against distortions and deepfake forgeries. By leveraging joint sub-band correlations, WaveGuard supports robust extraction for source tracing and semi-robust extraction for deepfake detection. We also employ dense connectivity strategies for feature reuse and propose a Structural Consistency Graph Neural Network (SC-GNN) to reduce perceptual artifacts and improve visual quality. Additionally, a Tanh-based Spatial Embedding Attention Module (TSEAM) refines both global and local features, improving watermark concealment without sacrificing robustness.
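To make the frequency-domain idea concrete, here is a minimal sketch of embedding bits into a high-frequency wavelet sub-band. It uses a single-level Haar DWT as a simplified stand-in for the DT-CWT described above; the function names and the embedding strength `alpha` are illustrative assumptions, not part of the released code.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    img = np.zeros((2 * h, 2 * w))
    img[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    img[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    img[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    img[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return img

def embed_bits(img, bits, alpha=2.0):
    """Embed bits additively into the HH (diagonal high-frequency)
    sub-band: +alpha encodes 1, -alpha encodes 0."""
    ll, lh, hl, hh = haar_dwt2(img)
    flat = hh.flatten()  # flatten() copies, so hh itself is untouched
    for i, bit in enumerate(bits):
        flat[i] += alpha if bit else -alpha
    return haar_idwt2(ll, lh, hl, flat.reshape(hh.shape))

def extract_bits(watermarked, original, n):
    """Non-blind extraction: read the sign of the HH difference."""
    _, _, _, hh_w = haar_dwt2(watermarked)
    _, _, _, hh_o = haar_dwt2(original)
    diff = (hh_w - hh_o).flatten()[:n]
    return [1 if d > 0 else 0 for d in diff]
```

In the real system the DT-CWT provides near shift-invariance and directional selectivity that a plain DWT lacks, which is what makes the embedded signal survive spatial distortions.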
The figure above compares three watermark embedding strategies: the first two optimize image quality with GAN- and VAE-based objectives, while WaveGuard introduces GNN-based structural consistency constraints to enhance both invisibility and robustness.
- Project page released
- Dataset preparation instructions released
- Release of core implementation
- Release of training and evaluation scripts
- Pretrained model and demo
python -m pip install -r requirements.txt
WaveGuard was trained and tested on CelebA-HQ. We do not own the dataset; it can be downloaded from its official website.
This project uses the CelebA-HQ dataset with 128×128 and 256×256 resolutions. Please organize images as follows:
CelebA-HQ
├── train
│   ├── 000001.jpg
│   ├── 000002.jpg
│   └── ...
├── val
│   └── ...
└── test
    └── ...
Ensure all images are cropped and resized appropriately before training.
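A minimal preprocessing sketch is shown below: it center-crops each image to a square and resizes it to the target resolution using Pillow. The function name and directory layout are illustrative assumptions, not part of the released scripts.

```python
from pathlib import Path
from PIL import Image

def preprocess(src_dir, dst_dir, size=128):
    """Center-crop each .jpg in src_dir to a square, resize it to
    size x size, and save it under dst_dir with the same filename."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.jpg")):
        img = Image.open(path).convert("RGB")
        w, h = img.size
        s = min(w, h)
        left, top = (w - s) // 2, (h - s) // 2
        img = img.crop((left, top, left + s, top + s))
        img = img.resize((size, size), Image.BICUBIC)
        img.save(dst / path.name)
```

Run it once per split (train/val/test) with `size=128` or `size=256` to match the resolutions used in this project.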
We provide ready-to-use noise generation layers for simulating realistic deepfake perturbations in our experiments. Specifically, the following deepfake generation techniques are supported:

- SimSwap (face swapping)
- GANimation (expression reenactment)
- StarGAN (attribute editing)
These modules simulate various deepfake attacks and are used to evaluate the robustness and traceability of our watermarking system under adversarial scenarios.
We provide pre-configured noise models and environments. You can download them from Google Drive:
After downloading, please unzip the contents into the following path:
./network/
Ensure that your final project structure includes:
network/
├── noise/
│   ├── simswap/
│   ├── ganimation/
│   ├── stargan/
│   └── ...
These noise layers are automatically invoked during test-time robustness evaluation.
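To illustrate how a pool of noise layers can be invoked at test time, here is a lightweight NumPy sketch that applies one randomly chosen distortion per call. It is a simplified stand-in: the real layers under `network/noise/` wrap full deepfake models, whereas the distortions below (Gaussian noise, dropout, box blur) are generic and the class name is our own invention.

```python
import numpy as np

class NoisePool:
    """Applies one randomly chosen distortion per call -- a toy stand-in
    for the test-time noise layers shipped under network/noise/."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def gaussian(self, img, sigma=0.05):
        # Additive Gaussian noise, clipped back to the valid [0, 1] range.
        return np.clip(img + self.rng.normal(0, sigma, img.shape), 0, 1)

    def dropout(self, img, p=0.1):
        # Randomly zero out a fraction p of the pixels.
        mask = self.rng.random(img.shape) > p
        return img * mask

    def blur(self, img):
        # 3x3 box blur via shifted sums; edges handled by replicate padding.
        padded = np.pad(img, 1, mode="edge")
        out = np.zeros_like(img)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / 9.0

    def __call__(self, img):
        attack = self.rng.choice([self.gaussian, self.dropout, self.blur])
        return attack(img)
```

During evaluation, each watermarked image would pass through such a pool before extraction, so the reported robustness reflects performance under randomized attacks rather than a single fixed distortion.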
python train.py --config train.yaml
python test.py --config test.yaml
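The configuration schema is defined by the YAML files above. As a rough orientation, a `train.yaml` might look like the following; every key name and value here is an illustrative assumption, not the released schema.

```yaml
# Illustrative only -- key names are assumptions, not the released schema.
data:
  root: ./CelebA-HQ
  resolution: 128
train:
  epochs: 100
  batch_size: 16
  lr: 1e-4
watermark:
  length: 64        # bits embedded per image
  strength: 1.0     # embedding strength in the wavelet sub-bands
output:
  checkpoint_dir: ./checkpoints
```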
If you have any questions, please contact: