Hi all, I'm working with Apache Spark version spark-3.0.1-bin-hadoop2.7. At the moment, if I want to start Spark from the command line, I have to navigate to my downloads directory, then into the Spark folder (cd spark-3.0.1-bin-hadoop2.7), and then run whatever I want to use, for example the Python shell for Spark.

My question is: how would I go about making a command that I can execute from anywhere on the command line, so that if I named it 'sparkpython' (since I want the Python version of Spark), running 'sparkpython' would launch the Python version of Spark that lives in the nested directory I just described? Essentially, I want a single command that starts Spark with Python.
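For reference, this is roughly my current workflow (a sketch; I'm assuming the download sits under ~/Downloads and that bin/pyspark is the Python entry point shipped with this distribution):

    cd ~/Downloads/spark-3.0.1-bin-hadoop2.7    # step into the extracted Spark folder
    ./bin/pyspark                               # start the interactive PySpark shell

What I'd like instead is to be able to type, from any directory:

    sparkpython    # should launch that same PySpark shell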