Software name: dimajix/docker-jupyter-spark
Project URL: https://github.com/dimajix/docker-jupyter-spark
Primary language: Shell (56.8%)

# Jupyter Spark Docker Container

This Docker image contains a Jupyter notebook with a PySpark kernel. By default, the kernel runs in Spark 'local' mode, which does not require any cluster, but the image also supports setting up a Spark standalone cluster that can be accessed from the notebook. The easiest way to start the Jupyter notebook is to run the container directly with Docker, as sketched below.
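A minimal invocation might look like the following. The image name `dimajix/jupyter-spark` and the 8888 port mapping are assumptions derived from the repository name and Jupyter's default port; check the project page for the exact values.

```sh
# Start the notebook container in Spark 'local' mode.
# Image name and port mapping are assumptions; verify against the repository.
docker run --rm -d \
    -p 8888:8888 \
    --name jupyter-spark \
    dimajix/jupyter-spark
```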
Then, when the container is running, point your web browser at the published notebook port (Jupyter listens on port 8888 by default). You can also specify your AWS credentials for accessing data inside S3 via environment variables, as sketched below.
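For example, the conventional AWS variable names could be passed through with `docker run -e`. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are the standard AWS names, but whether this image reads exactly these variables is an assumption to verify against the project documentation.

```sh
# Pass AWS credentials into the container as environment variables.
# The variable names follow the AWS convention; confirm the image honors them.
docker run --rm -d \
    -p 8888:8888 \
    -e AWS_ACCESS_KEY_ID="your-access-key" \
    -e AWS_SECRET_ACCESS_KEY="your-secret-key" \
    dimajix/jupyter-spark
```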
## Configuration

There are some configuration options which can be changed by setting environment variables on the Docker container. Details on all the options are listed below.

### Jupyter Configuration

There are only two Jupyter-specific configuration properties.
### Spark Kernel Configuration

The Jupyter notebook contains a special PySpark kernel, which also has some configuration options related to Spark itself. Unfortunately, these settings cannot be changed on a per-notebook basis, but they can at least be changed per Docker container, as sketched below.
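For illustration, per-container Spark settings would be supplied the same way as any other environment variable. The names SPARK_DRIVER_MEMORY and SPARK_EXECUTOR_MEMORY below are hypothetical placeholders, not confirmed settings of this image.

```sh
# Hypothetical per-container Spark kernel settings.
# SPARK_DRIVER_MEMORY and SPARK_EXECUTOR_MEMORY are placeholder names.
docker run --rm -d \
    -p 8888:8888 \
    -e SPARK_DRIVER_MEMORY="2g" \
    -e SPARK_EXECUTOR_MEMORY="2g" \
    dimajix/jupyter-spark
```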
### S3 Properties

Since many users want to access data stored on AWS S3, it is also possible to specify AWS credentials and general S3 settings, as shown in the example above.
### Spark Cluster Configuration

Aside from the Jupyter kernel / driver side, there are some more Spark-related configuration properties which are used to set up and connect to the Spark cluster. Note that all worker nodes require the same Python installation as the notebook server, so essentially the only deployment mode currently supported is a Spark standalone cluster that uses the same Docker image for both the Spark master and all Spark worker nodes. The following settings configure the Spark master and all workers; a sketch follows below.
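As an illustration of connecting the notebook to a standalone cluster, the master URL might be injected via an environment variable. The name SPARK_MASTER is a placeholder for illustration; 7077 is Spark's default standalone master port.

```sh
# Connect the notebook's PySpark kernel to an existing standalone master.
# SPARK_MASTER is a placeholder variable name; 7077 is Spark's default
# standalone master port.
docker run --rm -d \
    -p 8888:8888 \
    -e SPARK_MASTER="spark://spark-master:7077" \
    dimajix/jupyter-spark
```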
### Hadoop Properties

It is possible to access Hadoop resources (in HDFS) from Spark.
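One common way to make HDFS reachable is to hand the Hadoop client configuration to the container. The mount target /etc/hadoop/conf is an assumed, conventional location, not a documented path of this image.

```sh
# Mount the Hadoop client configuration (core-site.xml, hdfs-site.xml)
# into the container; the target path is an assumed conventional location.
docker run --rm -d \
    -p 8888:8888 \
    -v "$(pwd)/hadoop-conf:/etc/hadoop/conf:ro" \
    dimajix/jupyter-spark
```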
## Running a Spark Standalone Cluster

The container already contains all components for running a Spark standalone cluster. This can be achieved with two commands: one to start the master and one to start each worker, as sketched below.
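A hedged reconstruction in the style of typical Spark standalone images might look like the following; the `master` and `slave` command arguments and the SPARK_MASTER variable are assumptions, not confirmed by this extract.

```sh
# Start the Spark master (the 'master' command argument is an assumption).
docker run -d --name spark-master dimajix/jupyter-spark master

# Start a Spark worker attached to the master; 'slave' and SPARK_MASTER
# are placeholders modeled on common Spark standalone images.
docker run -d --name spark-worker \
    --link spark-master \
    -e SPARK_MASTER="spark://spark-master:7077" \
    dimajix/jupyter-spark slave
```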
The docker-compose file contains an example of a complete Spark standalone cluster with a Jupyter notebook as the frontend.
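Assuming the compose file sits in the repository root, the stack could be brought up like this; the worker service name `slave` is an assumption about the compose file.

```sh
# Bring up the full stack defined in docker-compose.yml.
docker-compose up -d

# Optionally scale the number of Spark workers; the service name
# 'slave' is an assumption about the compose file.
docker-compose up -d --scale slave=3
```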