Excluding subgraphs from backward
Every Variable has two flags: requires_grad and volatile.
requires_grad
If even a single input to an operation requires gradient, its output will also require gradient.
```python
x = Variable(torch.randn(5, 5))
```
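A minimal sketch of this propagation rule, extending the line above (the extra variable names here are illustrative, not from the original snippet):

```python
import torch
from torch.autograd import Variable  # autograd wrapper type in this API version

x = Variable(torch.randn(5, 5))                      # requires_grad defaults to False
y = Variable(torch.randn(5, 5))
z = Variable(torch.randn(5, 5), requires_grad=True)  # this one tracks gradients

a = x + y   # no input requires gradient, so the output doesn't either
b = a + z   # one input requires gradient, so the output does too
print(a.requires_grad, b.requires_grad)  # → False True
```

Because `a`'s inputs don't require gradients, backward computation is skipped entirely in that part of the graph.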
```python
model = torchvision.models.resnet18(pretrained=True)
```
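The pretrained-model line above is the start of the standard freezing recipe: set requires_grad to False on the existing parameters and train only a newly added layer. To keep the sketch runnable without downloading weights, this version uses a small stand-in module; the pattern is the same for resnet18:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Stand-in for a pretrained backbone; with torchvision you would use
# torchvision.models.resnet18(pretrained=True) as in the line above.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU())

# Freeze every existing parameter so backward never touches them.
for param in model.parameters():
    param.requires_grad = False

# Attach a fresh, trainable head and optimize only its parameters.
head = nn.Linear(512, 100)
optimizer = optim.SGD(head.parameters(), lr=1e-2, momentum=0.9)

out = head(model(torch.randn(4, 512)))
print(out.requires_grad)  # → True: the new head's weights require grad
```

Gradients are then computed only for the head, while the frozen backbone is excluded from the backward pass.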
volatile
Volatile is recommended for purely inference mode, when you're sure you won't even be calling .backward(). Setting volatile also implies that requires_grad is False.
If there's even a single volatile input to an operation, its output is also going to be volatile.
```python
regular_input = Variable(torch.randn(1, 3, 227, 227))
volatile_input = Variable(torch.randn(1, 3, 227, 227), volatile=True)
```
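Note that the volatile flag belongs to the Variable-era API and was removed in PyTorch 0.4; its modern replacement is the torch.no_grad() context, which has the same effect of producing outputs that don't require gradients and of skipping graph construction. A sketch of the equivalent behavior with the current API:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
x = torch.randn(1, 8)

out_regular = model(x)        # normal mode: autograd graph is built
with torch.no_grad():         # inference mode, analogous to volatile=True
    out_inference = model(x)  # no graph is recorded, minimal memory use

print(out_regular.requires_grad, out_inference.requires_grad)  # → True False
```

As with volatile, everything computed inside the no_grad block is excluded from backward, regardless of the requires_grad flags of the inputs.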