In NumPy, we use `ndarray.reshape()` for reshaping an array.

I noticed that in PyTorch, people use `torch.view(...)` for the same purpose, but at the same time there is also a `torch.reshape(...)`.

So I am wondering: what are the differences between them, and when should I use each?

Santiago Trujillo

`torch.view` has existed for a long time. It returns a tensor with the new shape; the returned tensor shares the underlying data with the original tensor. See the documentation here.

On the other hand, `torch.reshape` was introduced more recently, in version 0.4. According to the documentation, this method:

> Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior.

This means that `torch.reshape` may return either a copy or a view of the original tensor; you cannot count on getting one or the other. According to the developers:

> if you need a copy use clone() if you need the same storage use view(). The semantics of reshape() are that it may or may not share the storage and you don't know beforehand.
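This clone-versus-view distinction is easy to check in a short session (a minimal sketch; the tensor sizes are arbitrary):

```python
import torch

z = torch.zeros(2, 3)
c = z.clone()    # clone(): guaranteed independent copy of the data
v = z.view(6)    # view(): guaranteed to share storage with z

z.fill_(1)       # mutate the original in place

print(c)         # still all zeros: clone() does not see the change
print(v)         # all ones: view() shares storage, so it does
```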

Another difference is that `reshape()` can operate on both contiguous and non-contiguous tensors, while `view()` can operate only on contiguous tensors. Also see here about the meaning of `contiguous`.
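The contiguity difference can be demonstrated with a transposed tensor, which shares storage with the original but is non-contiguous (a minimal sketch; the shapes are arbitrary):

```python
import torch

a = torch.arange(6).reshape(2, 3)  # contiguous tensor
t = a.t()                          # transpose: same storage, non-contiguous

assert a.is_contiguous()
assert not t.is_contiguous()

r = t.reshape(6)    # works: reshape() copies when a view is impossible
try:
    t.view(6)       # fails: view() requires a contiguous tensor
except RuntimeError as err:
    print("view() raised:", err)
```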


Although both `torch.view` and `torch.reshape` are used to reshape tensors, here are the differences between them.

- As the name suggests, `torch.view` merely creates a *view* of the original tensor. The new tensor will *always* share its data with the original tensor. This means that if you change the original tensor, the reshaped tensor will change, and vice versa.

```
>>> z = torch.zeros(3, 2)
>>> x = z.view(2, 3)
>>> z.fill_(1)
tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
>>> x
tensor([[1., 1., 1.],
        [1., 1., 1.]])
```

- To ensure that the new tensor always shares its data with the original, `torch.view` imposes some contiguity constraints on the shapes of the two tensors [docs]. More often than not this is not a concern, but sometimes `torch.view` throws an error even if the shapes of the two tensors are compatible. Here's a famous counter-example.

```
>>> z = torch.zeros(3, 2)
>>> y = z.t()
>>> y.size()
torch.Size([2, 3])
>>> y.view(6)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: invalid argument 2: view size is not compatible with input tensor's
size and stride (at least one dimension spans across two contiguous subspaces).
Call .contiguous() before .view().
```

- `torch.reshape` doesn't impose any contiguity constraints, but also doesn't guarantee data sharing. The new tensor may be a view of the original tensor, or it may be a new tensor altogether.

```
>>> z = torch.zeros(3, 2)
>>> y = z.reshape(6)
>>> x = z.t().reshape(6)
>>> z.fill_(1)
tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
>>> y
tensor([1., 1., 1., 1., 1., 1.])
>>> x
tensor([0., 0., 0., 0., 0., 0.])
```

**TL;DR:** If you just want to reshape tensors, use `torch.reshape`. If you're also concerned about memory usage and want to ensure that the two tensors share the same data, use `torch.view`.


`Tensor.reshape()` is more robust. It will work on any tensor, while `Tensor.view()` works only on a tensor `t` where `t.is_contiguous() == True`.

Explaining non-contiguous vs. contiguous tensors is another story, but you can always make the tensor `t` contiguous by calling `t.contiguous()`, after which you can call `view()` without the error.
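Concretely, revisiting the transposed-tensor case from the earlier answer (a minimal sketch):

```python
import torch

z = torch.zeros(3, 2)
y = z.t()                      # transpose is non-contiguous
assert not y.is_contiguous()

# .contiguous() copies the data into contiguous memory,
# so the subsequent view() call succeeds.
flat = y.contiguous().view(6)
print(flat)                    # a flat tensor of six zeros
```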
