Broadcasting semantics

General semantics

  • Each tensor has at least one dimension.
  • When iterating over the dimension sizes, starting at the trailing dimension, the dimension sizes must either be equal, one of them must be 1, or one of them must not exist.
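These two rules can be sketched as a small shape check. The helper name `is_broadcastable` is an assumption for illustration, not a torch API:

```python
def is_broadcastable(shape_a, shape_b):
    """Check the broadcasting rules: align the shapes at the trailing
    dimension; each pair of sizes must be equal, or one of them must
    be 1. A missing dimension (shorter shape) is always compatible."""
    if not shape_a or not shape_b:
        return False  # each tensor must have at least one dimension
    for a, b in zip(reversed(shape_a), reversed(shape_b)):
        if a != b and a != 1 and b != 1:
            return False
    return True

print(is_broadcastable((5, 7, 3), (5, 7, 3)))    # True
print(is_broadcastable((5, 3, 4, 1), (3, 1, 1))) # True
print(is_broadcastable((5, 2, 4, 1), (3, 1, 1))) # False
```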
x = torch.FloatTensor(5, 7, 3)
y = torch.FloatTensor(5, 7, 3)
# same shapes are always broadcastable

x = torch.FloatTensor(5, 3, 4, 1)
y = torch.FloatTensor(3, 1, 1)
# x and y are broadcastable
# 1st trailing dimension: both have size 1
# 2nd trailing dimension: y has size 1
# 3rd trailing dimension: x size == y size
# 4th trailing dimension: y dimension doesn't exist

# but
x = torch.FloatTensor(5, 2, 4, 1)
y = torch.FloatTensor(3, 1, 1)
# x and y are not broadcastable, because in the 3rd trailing dimension 2 != 3
If two tensors x and y are broadcastable, the size of the resulting tensor is calculated as follows: if the numbers of dimensions of x and y are not equal, 1s are prepended to the shape of the tensor with fewer dimensions until both lengths match; then, for each dimension, the resulting size is the max of the sizes of x and y along that dimension.
x = torch.FloatTensor(5, 1, 4, 1)
y = torch.FloatTensor(3, 1, 1)
(x+y).size()
torch.Size([5, 3, 4, 1])

# but not necessary:
x = torch.FloatTensor(1)
y = torch.FloatTensor(3, 1, 7)
(x+y).size()
torch.Size([3, 1, 7])

x = torch.FloatTensor(5, 2, 4, 1)
y = torch.FloatTensor(3, 1, 1)
(x+y).size()
# raises a RuntimeError: shapes (5, 2, 4, 1) and (3, 1, 1) are not broadcastable
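The result-size rule above can be sketched as a small helper: prepend 1s to the shorter shape, then take the elementwise max. The name `broadcast_shape` is an assumption for illustration, not a torch API:

```python
def broadcast_shape(shape_a, shape_b):
    """Compute the broadcast result size of two shapes, raising if
    they are not broadcastable (hypothetical helper)."""
    ndim = max(len(shape_a), len(shape_b))
    a = (1,) * (ndim - len(shape_a)) + tuple(shape_a)
    b = (1,) * (ndim - len(shape_b)) + tuple(shape_b)
    out = []
    for x, y in zip(a, b):
        if x != y and x != 1 and y != 1:
            raise ValueError(f"sizes {x} and {y} do not match")
        out.append(max(x, y))
    return tuple(out)

print(broadcast_shape((5, 1, 4, 1), (3, 1, 1)))  # (5, 3, 4, 1)
print(broadcast_shape((1,), (3, 1, 7)))          # (3, 1, 7)
```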

In-place semantics

One complication is that in-place operations do not allow the in-place tensor to change shape as a result of the broadcast.

x = torch.FloatTensor(5, 3, 4, 1)
y = torch.FloatTensor(3, 1, 1)
(x.add_(y)).size()
torch.Size([5, 3, 4, 1])

# but
x = torch.FloatTensor(1, 3, 1)
y = torch.FloatTensor(3, 1, 7)
(x.add_(y)).size()
# raises a RuntimeError: the broadcast result would have shape (3, 3, 7),
# but an in-place op cannot change the shape of x
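The in-place rule amounts to: the broadcast of the two shapes must equal the in-place tensor's shape exactly, since the op cannot reallocate it. A sketch of that check (the helper name `inplace_allowed` is an assumption, not a torch API):

```python
def inplace_allowed(self_shape, other_shape):
    """Return True if x.op_(y) with these shapes is legal: y must
    broadcast to self_shape without changing it (hypothetical helper)."""
    if len(other_shape) > len(self_shape):
        return False  # result would gain dimensions
    padded = (1,) * (len(self_shape) - len(other_shape)) + tuple(other_shape)
    # each of y's sizes must equal x's size, or be 1 (expandable)
    return all(s == o or o == 1 for s, o in zip(self_shape, padded))

print(inplace_allowed((5, 3, 4, 1), (3, 1, 1)))  # True
print(inplace_allowed((1, 3, 1), (3, 1, 7)))     # False
```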

Backwards compatibility

Prior versions of torch allowed certain pointwise functions to execute on tensors with different shapes, as long as the number of elements in each tensor was equal; the pointwise operation would then be carried out by viewing each tensor as 1-dimensional. Broadcasting may change the result of such operations. For example:

torch.add(torch.ones(4, 1), torch.randn(4))

would previously produce a tensor of size torch.Size([4, 1]), but now produces one of size torch.Size([4, 4]). To help identify cases in your code where backwards incompatibilities introduced by broadcasting may exist, you can enable a Python warning for them:

torch.utils.backcompat.broadcast_warning.enabled=True