Project name: spark-project
Repository: https://gitee.com/mirrors/spark-project
Project description:

# Apache Spark

Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.

## Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page. This README file only contains basic setup instructions.

## Building Spark

Spark is built using Apache Maven. To build Spark and its example programs, run:

```
./build/mvn -DskipTests clean package
```

(You do not need to do this if you downloaded a pre-built package.)

More detailed documentation is available from the project site, at "Building Spark". For general development tips, including info on developing Spark using an IDE, see "Useful Developer Tools".

## Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

```
./bin/spark-shell
```

Try the following command, which should return 1,000,000,000:

```
scala> spark.range(1000 * 1000 * 1000).count()
```

## Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

```
./bin/pyspark
```

And run the following command, which should also return 1,000,000,000:

```
>>> spark.range(1000 * 1000 * 1000).count()
```

## Example Programs

Spark also comes with several sample programs in the `examples` directory. To run one of them, use `./bin/run-example <class> [params]`. For example:

```
./bin/run-example SparkPi
```

will run the Pi example locally.

You can set the MASTER environment variable when running examples to submit examples to a cluster. This can be a mesos:// or spark:// URL, "yarn" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the `examples` package. For instance:

```
MASTER=spark://host:7077 ./bin/run-example SparkPi
```

Many of the example programs print usage help if no params are given.

## Running Tests

Testing first requires building Spark. Once Spark is built, tests can be run using:

```
./dev/run-tests
```

Please see the guidance on how to run tests for a module, or individual tests.

There is also a Kubernetes integration test; see resource-managers/kubernetes/integration-tests/README.md.

## A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs.

Please refer to the build documentation at "Specifying the Hadoop Version and Enabling YARN" for detailed guidance on building for a particular distribution of Hadoop, including building for particular Hive and Hive Thriftserver distributions.

## Configuration

Please refer to the Configuration Guide in the online documentation for an overview on how to configure Spark.

## Contributing

Please review the Contribution to Spark guide for information on how to get started contributing to the project.
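
## Standalone Application (Sketch)

Beyond the interactive shells and bundled examples above, applications are usually packaged as a jar and launched with `./bin/spark-submit`. The following is a minimal sketch, not part of the original README: the object name `SimpleCount`, the application name, and the jar path are placeholder assumptions, while `SparkSession` and `spark-submit` are standard Spark APIs. It repeats the billion-row count from the shell examples.

```scala
// SimpleCount.scala -- minimal sketch of a standalone Spark application.
// The object name, app name, and jar path used below are illustrative.
import org.apache.spark.sql.SparkSession

object SimpleCount {
  def main(args: Array[String]): Unit = {
    // The master URL is normally supplied via spark-submit --master,
    // not hard-coded in the application.
    val spark = SparkSession.builder
      .appName("SimpleCount")
      .getOrCreate()

    // Same computation as the shell examples: count one billion rows.
    val n = spark.range(1000L * 1000 * 1000).count()
    println(s"count = $n")

    spark.stop()
  }
}
```

After packaging the application into a jar (for example with sbt or Maven), it can be submitted to a local or cluster master; the jar path here is a placeholder:

```
./bin/spark-submit --class SimpleCount --master "local[4]" path/to/simple-count.jar
```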