
Error encountered during evaluation #38

Open
jkini opened this issue Feb 16, 2021 · 4 comments

jkini commented Feb 16, 2021

ValueError: Expected more than 1 value per channel when training, got input size [1, 512].

Please let me know the expected environment details. Mine are PyTorch 1.1.0, CUDA 10.2, and torchvision 0.6.0 on Python 3.7.
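For reference, this particular ValueError is raised by a batch-norm layer running in training mode on a batch containing a single sample (it cannot compute per-channel batch statistics from one value); GroupNorm itself does not require more than one sample. A minimal sketch reproducing the error, assuming the `[1, 512]` shape from the message:

```python
import torch
import torch.nn as nn

# BatchNorm in training mode needs more than one value per channel
# to estimate batch statistics, so a batch of size 1 fails.
bn = nn.BatchNorm1d(512)
bn.train()
x = torch.randn(1, 512)  # batch size 1, 512 channels

try:
    bn(x)
except ValueError as e:
    print("reproduced:", e)

# In eval mode the layer uses its running statistics instead,
# so a single-sample batch passes through fine.
bn.eval()
out = bn(x)
print(out.shape)  # torch.Size([1, 512])
```

So for evaluation, calling `model.eval()` before the forward pass (or keeping the batch size above 1 during training) avoids this error.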

zhanghaobucunzai commented

Excuse me, how was this problem solved?


synsin0 commented Sep 14, 2021

I have encountered the same issue. It seems the batch size cannot be 1 because group normalization is used, but when I change the batch size to 2, another problem appears. Maybe the author can solve it.

The problem is:
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [5] at entry 0 and [4] at entry 1
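(Editorial note: the default `DataLoader` collate function stacks the samples in a batch with `torch.stack`, which requires all of them to have the same shape; this traceback means two samples of lengths 5 and 4 landed in one batch. Whether padding is the right fix depends on this repo's data format, so the custom `collate_fn` below, `pad_collate`, is only a hypothetical, generic sketch.)

```python
import torch

def pad_collate(batch):
    # Pad each 1-D tensor with zeros up to the longest length in the
    # batch, so torch.stack sees equal sizes.
    max_len = max(t.size(0) for t in batch)
    padded = [torch.cat([t, t.new_zeros(max_len - t.size(0))])
              for t in batch]
    return torch.stack(padded, 0)

# The sizes from the traceback: [5] at entry 0 and [4] at entry 1.
batch = [torch.ones(5), torch.ones(4)]
out = pad_collate(batch)
print(out.shape)  # torch.Size([2, 5])
```

A function like this would be passed to the loader as `DataLoader(dataset, batch_size=2, collate_fn=pad_collate)`.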


Ri-Bai commented Sep 18, 2021

> I have encountered the same question, seemingly the batch size can not be 1 because group normalization is used. but when I change the batch size to 2, another problem appears, maybe the author may solve the problem.
>
> problem is as this:
> return torch.stack(batch, 0, out=out)
> RuntimeError: stack expects each tensor to be equal size, but got [5] at entry 0 and [4] at entry 1

Did you solve it? I have the same question.


0-ing commented Aug 15, 2022

> I have encountered the same question, seemingly the batch size can not be 1 because group normalization is used. but when I change the batch size to 2, another problem appears, maybe the author may solve the problem.
>
> problem is as this:
> return torch.stack(batch, 0, out=out)
> RuntimeError: stack expects each tensor to be equal size, but got [5] at entry 0 and [4] at entry 1
>
> are your solve it? I have the same question

I also encountered this problem. Have you solved it?
