

python - How to run PyTorch on GPU by default?

I want to run PyTorch using CUDA. I call model.cuda() and use torch.cuda.LongTensor() for all of my tensors.

Do I still have to create every tensor with .cuda() explicitly once I have called model.cuda()?

Is there a way to make all computations run on the GPU by default?
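
Right now my code looks roughly like this (the model and the shapes are just placeholders for illustration):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)                      # placeholder model
    model.cuda()                                  # parameters are now on the GPU

    # every tensor still has to be created on the GPU explicitly
    x = torch.cuda.FloatTensor(4, 10).normal_()
    idx = torch.cuda.LongTensor([0, 1, 2, 3])

    out = model(x)                                # passing a CPU tensor here raises a device-mismatch error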



1 Answer


I do not think you can specify that you want to use CUDA tensors by default. However, you should have a look at the official PyTorch examples.

In the ImageNet training/testing script, they use a wrapper over the model called DataParallel. This wrapper has two advantages:

  • it handles the data parallelism over multiple GPUs
  • it handles the casting of CPU tensors to CUDA tensors

As you can see at line 164 of that script, you don't have to manually cast your inputs/targets to CUDA; a sketch of the same pattern follows below.
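
A minimal sketch of that pattern, assuming one machine with one or more GPUs (the model and batch shapes here are placeholders, not the ones from the ImageNet example):

    import torch
    import torch.nn as nn

    # placeholder model; any nn.Module works the same way
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

    # the wrapper replicates the forward pass across all visible GPUs and
    # scatters (and moves) each input batch onto those GPUs automatically
    model = nn.DataParallel(model).cuda()

    inputs = torch.randn(32, 128)                 # a plain CPU tensor is fine here
    outputs = model(inputs)                       # computed on the GPU(s)
    print(outputs.device)                         # the gathered result lives on cuda:0

With a single GPU the wrapper simply moves each input batch to that device, so the only explicit .cuda() call left is the one on the wrapped model.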

Note that if you have multiple GPUs and want to use a single one, you can launch any Python/PyTorch script with the CUDA_VISIBLE_DEVICES prefix, for instance CUDA_VISIBLE_DEVICES=0 python main.py.
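
Inside that process PyTorch then only sees the selected GPU, renumbered as device 0. A quick check, assuming the script was launched with the command above:

    import os
    import torch

    print(os.environ.get("CUDA_VISIBLE_DEVICES"))   # "0"
    print(torch.cuda.device_count())                # 1 -- only the selected GPU is exposed
    print(torch.cuda.current_device())              # 0 -- the physical GPU is remapped to cuda:0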

