I'm trying to get mpi4py set up on a system with a quad-core (8-thread) processor running Fedora 33. I would expect to be able to launch up to 8 processes, but when I run a test script, every process reports a size of 1 and a rank of 0. I've tried setting up a host list file as well as specifying multiple hosts on the command line, but the results are always the same.
I can't find much information beyond the basic mpi4py installation instructions, but something is clearly off. Any suggestions on how to diagnose this problem are appreciated.
The script I'm trying to run is called mpi_test.py:
from mpi4py import MPI
comm = MPI.COMM_WORLD
print('%d of %d' % (comm.Get_rank(), comm.Get_size()))
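If it helps, here is a slightly extended variant of the script that also prints the host each rank runs on, similar to what mpi4py.bench helloworld reports (Get_processor_name is part of the mpi4py.MPI API):
from mpi4py import MPI

comm = MPI.COMM_WORLD
# Report rank, size, and the node this rank is actually running on
print('%d of %d on %s' % (comm.Get_rank(), comm.Get_size(), MPI.Get_processor_name()))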
A simple command-line run and its output:
mpiexec -np 4 python mpi_test.py
0 of 1
0 of 1
0 of 1
0 of 1
Using this host file (kanagawa is my hostname):
kanagawa:4
I see this:
mpiexec -np 4 -f hostfile python mpi_test.py
0 of 1
0 of 1
0 of 1
0 of 1
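I'm not even sure the host:N syntax is right for whichever mpiexec I'm calling; as far as I know, MPICH's Hydra accepts hostname:4, while Open MPI expects a slots= form, so a hostfile for Open MPI would presumably look like this instead:
kanagawa slots=4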
And specifying the hosts/processes on the command line:
mpiexec -np 4 -hosts kanagawa,kanagawa,kanagawa,kanagawa python mpi_test.py
0 of 1
0 of 1
0 of 1
0 of 1
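One check I'm considering (not sure it's the right approach) is looking at what the launcher actually exports to the Python processes. As far as I understand, MPICH's Hydra sets PMI_RANK/PMI_SIZE while Open MPI sets OMPI_COMM_WORLD_RANK/OMPI_COMM_WORLD_SIZE, so something like this run under mpiexec should show whether the launcher and mpi4py come from the same MPI implementation:
import os

# Launcher-provided rank/size variables; which ones are set depends on the
# MPI implementation that provides mpiexec (PMI_* for MPICH/Hydra,
# OMPI_COMM_WORLD_* for Open MPI)
for name in ('PMI_RANK', 'PMI_SIZE', 'OMPI_COMM_WORLD_RANK', 'OMPI_COMM_WORLD_SIZE'):
    print(name, '=', os.environ.get(name))
It would be launched the same way, e.g. mpiexec -np 2 python env_check.py (env_check.py being whatever the snippet is saved as).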
Processor specs for this machine:
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 142
Model name: Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz
Stepping: 10
CPU MHz: 800.228
CPU max MHz: 4000.0000
CPU min MHz: 400.0000
BogoMIPS: 3999.93
Virtualization: VT-x
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 1 MiB
L3 cache: 8 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
EDIT:
I tested the same setup on a couple of Raspberry Pi systems and it works as expected:
mpiexec -n 6 --machinefile briar_hosts python -m mpi4py.bench helloworld
Hello, World! I am process 0 of 6 on briar.
Hello, World! I am process 1 of 6 on briar.
Hello, World! I am process 2 of 6 on berry00.
Hello, World! I am process 3 of 6 on berry00.
Hello, World! I am process 4 of 6 on berry00.
Hello, World! I am process 5 of 6 on berry00.
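If it's useful, I could also run something like this on both the Raspberry Pi and the Fedora machine to compare how each mpi4py was built: mpi4py.get_config() reports the build-time configuration (e.g. which mpicc was used), and Get_library_version(), where the underlying MPI supports it (it is an MPI-3 call), reports the library the module is actually linked against, which could then be compared with the output of mpiexec --version on each system:
import mpi4py
from mpi4py import MPI

# How this mpi4py was built (compiler/MPI used at build time)
print('mpi4py', mpi4py.__version__)
print(mpi4py.get_config())
# The MPI library the module is linked against at runtime
print(MPI.Get_library_version())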