Supposing you want to spark-submit to YARN a Python script located at /home/user/scripts/spark_streaming.py, the correct syntax is as follows:
spark-submit --master yarn --deploy-mode client /home/user/scripts/spark_streaming.py
You can interchange the ordering of the various flags, but the script itself must be at the end; if your script accepts arguments, they should follow the script name (e.g. see this example for calculating pi with 10 decimal digits).
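As a hedged sketch of that (the pi.py path and the trailing 10 are illustrative, standing in for the linked pi example and its number of decimal digits):

spark-submit --master yarn --deploy-mode client /home/user/scripts/pi.py 10

Here the 10 simply ends up in the script as sys.argv[1].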
For executing locally with, say, 2 cores, you should use --master local[2]; use --master local[*] for all available local cores (no deploy-mode flag in either case).
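For example (a minimal sketch, reusing the script path from above):

spark-submit --master local[2] /home/user/scripts/spark_streaming.py
spark-submit --master local[*] /home/user/scripts/spark_streaming.py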
Check the docs for more info (although, admittedly, they are rather thin on PySpark examples).
PS Both the mention of Jupyter and the path shown in your error message are extremely puzzling...
UPDATE: It seems that PYSPARK_DRIVER_PYTHON=jupyter messes everything up, funneling the execution through Jupyter (which is undesirable here, and may explain the weird error message). Try modifying the environment variables in your .bashrc as follows:
export SPARK_HOME="/usr/local/spark" # do not include /bin
export PYSPARK_PYTHON=python
export PYSPARK_DRIVER_PYTHON=python
export PYSPARK_DRIVER_PYTHON_OPTS=""
and then source your .bashrc.
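To apply the change in your current shell (assuming .bashrc lives in your home directory), something like:

source ~/.bashrc
echo $PYSPARK_DRIVER_PYTHON   # should now print 'python' rather than 'jupyter'

should do; then re-run the spark-submit command.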