
Python conf.SparkConf Class Code Examples


This article collects and summarizes typical usage examples of Python's pyspark.conf.SparkConf class. If you have been wondering exactly what the SparkConf class does, how to use it, or what real usage looks like, the curated class code examples below may help.



Below are 20 code examples of the SparkConf class, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
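
Before the examples, here is a minimal sketch (not taken from any of the projects below; the master URL, application name, and memory value are illustrative) of the basic SparkConf workflow the examples build on: create a conf, set a few properties, and pass it to a SparkContext.

from pyspark.conf import SparkConf
from pyspark.context import SparkContext

conf = SparkConf()
conf.setMaster("local[2]")                 # illustrative: run locally with 2 threads
conf.setAppName("sparkconf-demo")          # illustrative application name
conf.set("spark.executor.memory", "1g")    # illustrative memory setting

sc = SparkContext(conf=conf)
print(sc.getConf().get("spark.app.name"))  # prints: sparkconf-demo
sc.stop()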

Example 1: getOrCreate

        def getOrCreate(self):
            """Gets an existing :class:`SparkSession` or, if there is no existing one, creates a
            new one based on the options set in this builder.

            This method first checks whether there is a valid global default SparkSession, and if
            yes, return that one. If no valid global default SparkSession exists, the method
            creates a new SparkSession and assigns the newly created SparkSession as the global
            default.

            >>> s1 = SparkSession.builder.config("k1", "v1").getOrCreate()
            >>> s1.conf.get("k1") == "v1"
            True

            In case an existing SparkSession is returned, the config options specified
            in this builder will be applied to the existing SparkSession.

            >>> s2 = SparkSession.builder.config("k2", "v2").getOrCreate()
            >>> s1.conf.get("k1") == s2.conf.get("k1")
            True
            >>> s1.conf.get("k2") == s2.conf.get("k2")
            True
            """
            with self._lock:
                from pyspark.context import SparkContext
                from pyspark.conf import SparkConf
                session = SparkSession._instantiatedContext
                if session is None:
                    sparkConf = SparkConf()
                    for key, value in self._options.items():
                        sparkConf.set(key, value)
                    sc = SparkContext.getOrCreate(sparkConf)
                    session = SparkSession(sc)
                for key, value in self._options.items():
                    session.conf.set(key, value)
                return session
Developer: Hydrotoast, Project: spark, Lines of code: 35, Source file: session.py
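
As a hedged usage sketch of the behaviour described in the docstring above (a second getOrCreate() call returns the existing global session and applies the new builder options to it), assuming a working local PySpark installation:

from pyspark.sql import SparkSession

s1 = SparkSession.builder.config("k1", "v1").getOrCreate()
s2 = SparkSession.builder.config("k2", "v2").getOrCreate()  # reuses the existing session

print(s1.conf.get("k1"))  # v1
print(s2.conf.get("k1"))  # v1  (same global default session)
print(s1.conf.get("k2"))  # v2  (later builder options were applied to it)
s1.stop()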


Example 2: _create_shell_session

    def _create_shell_session():
        """
        Initialize a SparkSession for a pyspark shell session. This is called from shell.py
        to make error handling simpler without needing to declare local variables in that
        script, which would expose those to users.
        """
        import py4j
        from pyspark.conf import SparkConf
        from pyspark.context import SparkContext
        try:
            # Try to access HiveConf, it will raise exception if Hive is not added
            conf = SparkConf()
            if conf.get('spark.sql.catalogImplementation', 'hive').lower() == 'hive':
                SparkContext._jvm.org.apache.hadoop.hive.conf.HiveConf()
                return SparkSession.builder\
                    .enableHiveSupport()\
                    .getOrCreate()
            else:
                return SparkSession.builder.getOrCreate()
        except (py4j.protocol.Py4JError, TypeError):
            if conf.get('spark.sql.catalogImplementation', '').lower() == 'hive':
                warnings.warn("Fall back to non-hive support because failing to access HiveConf, "
                              "please make sure you build spark with hive")

        return SparkSession.builder.getOrCreate()
Developer: CodingCat, Project: spark, Lines of code: 25, Source file: session.py
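
For comparison, a short sketch (assuming a Spark build that actually includes the Hive classes) of explicitly requesting the Hive catalog that the helper above probes for before falling back:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.sql.catalogImplementation", "hive")
         .enableHiveSupport()
         .getOrCreate())

print(spark.conf.get("spark.sql.catalogImplementation"))  # 'hive' when Hive support is available
spark.stop()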


Example 3: getOrCreate

 def getOrCreate(self):
     """Gets an existing :class:`SparkSession` or, if there is no existing one, creates a new
     one based on the options set in this builder.
     """
     with self._lock:
         from pyspark.conf import SparkConf
         from pyspark.context import SparkContext
         from pyspark.sql.context import SQLContext
         sparkConf = SparkConf()
         for key, value in self._options.items():
             sparkConf.set(key, value)
         sparkContext = SparkContext.getOrCreate(sparkConf)
         return SQLContext.getOrCreate(sparkContext).sparkSession
Developer: GIladland, Project: spark, Lines of code: 13, Source file: session.py


Example 4: test_from_conf_with_settings

 def test_from_conf_with_settings(self):
     conf = SparkConf()
     conf.set("spark.cleaner.ttl", "10")
     conf.setMaster(self.master)
     conf.setAppName(self.appName)
     self.ssc = StreamingContext(conf=conf, duration=self.batachDuration)
     self.assertEqual(int(self.ssc.sparkContext._conf.get("spark.cleaner.ttl")), 10)
Developer: giworld, Project: spark, Lines of code: 7, Source file: tests.py
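
A sketch of the same idea with illustrative values, using the StreamingContext(sparkContext, batchDuration) form of current PySpark releases rather than the older keyword form used in the test:

from pyspark.conf import SparkConf
from pyspark.context import SparkContext
from pyspark.streaming import StreamingContext

conf = SparkConf().setMaster("local[2]").setAppName("ttl-demo")
conf.set("spark.cleaner.ttl", "10")

sc = SparkContext(conf=conf)
ssc = StreamingContext(sc, 1)  # 1-second batch duration
print(sc.getConf().get("spark.cleaner.ttl"))  # prints: 10
ssc.stop()  # also stops the underlying SparkContext by default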


Example 5: getOrCreate

        def getOrCreate(self):
            """Gets an existing :class:`SparkSession` or, if there is no existing one, creates a
            new one based on the options set in this builder.

            This method first checks whether there is a valid global default SparkSession, and if
            yes, return that one. If no valid global default SparkSession exists, the method
            creates a new SparkSession and assigns the newly created SparkSession as the global
            default.

            >>> s1 = SparkSession.builder.config("k1", "v1").getOrCreate()
            >>> s1.conf.get("k1") == s1.sparkContext.getConf().get("k1") == "v1"
            True

            In case an existing SparkSession is returned, the config options specified
            in this builder will be applied to the existing SparkSession.

            >>> s2 = SparkSession.builder.config("k2", "v2").getOrCreate()
            >>> s1.conf.get("k1") == s2.conf.get("k1")
            True
            >>> s1.conf.get("k2") == s2.conf.get("k2")
            True
            """
            with self._lock:
                from pyspark.context import SparkContext
                from pyspark.conf import SparkConf

                session = SparkSession._instantiatedContext
                if session is None:
                    sparkConf = SparkConf()
                    for key, value in self._options.items():
                        sparkConf.set(key, value)
                    sc = SparkContext.getOrCreate(sparkConf)
                    # This SparkContext may be an existing one.
                    for key, value in self._options.items():
                        # we need to propagate the confs
                        # before we create the SparkSession. Otherwise, confs like
                        # warehouse path and metastore url will not be set correctly (
                        # these confs cannot be changed once the SparkSession is created).
                        sc._conf.set(key, value)
                    session = SparkSession(sc)
                for key, value in self._options.items():
                    session.conf.set(key, value)
                for key, value in self._options.items():
                    session.sparkContext._conf.set(key, value)
                return session
Developer: ChrisYohann, Project: spark, Lines of code: 45, Source file: session.py


Example 6: __init__

  def __init__(self):
    # Setup PySpark. This is needed until PySpark becomes available on PyPI,
    # after which we can simply add it to requirements.txt.
    _setup_pyspark()
    from pyspark.conf import SparkConf
    from pyspark.context import SparkContext
    from pyspark.serializers import MarshalSerializer

    # Create a temporary .zip lib file for Metis, which will be copied over to
    # Spark workers so they can unpickle Metis functions and objects.
    metis_lib_file = tempfile.NamedTemporaryFile(suffix='.zip', delete=False)
    metis_lib_file.close()
    _copy_lib_for_spark_workers(metis_lib_file.name)

    # Also ship the Metis lib file so worker nodes can deserialize Metis
    # internal data structures.
    conf = SparkConf()
    conf.setMaster(app.config['SPARK_MASTER'])
    conf.setAppName('chronology:metis')
    parallelism = int(app.config.get('SPARK_PARALLELISM', 0))
    if parallelism:
      conf.set('spark.default.parallelism', parallelism)
    self.context = SparkContext(conf=conf,
                                pyFiles=[metis_lib_file.name],
                                serializer=MarshalSerializer())

    # Delete temporary Metis lib file.
    os.unlink(metis_lib_file.name)

    # We'll use this to parallelize fetching events in KronosSource.
    # The default of 8 is from:
    # https://spark.apache.org/docs/latest/configuration.html
    self.parallelism = parallelism or 8
Developer: Applied-Duality, Project: chronology, Lines of code: 33, Source file: executor.py
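
A reduced sketch of the same technique of shipping extra Python code to the workers through the pyFiles argument; 'my_lib.zip' and the other values are placeholders rather than values from the project above:

from pyspark.conf import SparkConf
from pyspark.context import SparkContext

conf = SparkConf().setMaster("local[2]").setAppName("pyfiles-demo")
conf.set("spark.default.parallelism", "8")

# 'my_lib.zip' is a placeholder for an archive of modules the workers need
# in order to unpickle the functions and objects sent to them.
sc = SparkContext(conf=conf, pyFiles=["my_lib.zip"])
sc.stop()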


Example 7: getOrCreate

        def getOrCreate(self):
            """Gets an existing :class:`SparkSession` or, if there is no existing one, creates a
            new one based on the options set in this builder.

            This method first checks whether there is a valid thread-local SparkSession,
            and if yes, return that one. It then checks whether there is a valid global
            default SparkSession, and if yes, return that one. If no valid global default
            SparkSession exists, the method creates a new SparkSession and assigns the
            newly created SparkSession as the global default.

            In case an existing SparkSession is returned, the config options specified
            in this builder will be applied to the existing SparkSession.
            """
            with self._lock:
                from pyspark.conf import SparkConf
                from pyspark.context import SparkContext
                from pyspark.sql.context import SQLContext
                sparkConf = SparkConf()
                for key, value in self._options.items():
                    sparkConf.set(key, value)
                sparkContext = SparkContext.getOrCreate(sparkConf)
                return SQLContext.getOrCreate(sparkContext).sparkSession
Developer: ArshiyanAlam, Project: spark, Lines of code: 22, Source file: session.py


Example 8: SparkConf

            port += 1

    # return the first available port
    return port


# this is the deprecated equivalent of ADD_JARS
add_files = None
if os.environ.get("ADD_FILES") is not None:
    add_files = os.environ.get("ADD_FILES").split(",")

if os.environ.get("SPARK_EXECUTOR_URI"):
    SparkContext.setSystemProperty("spark.executor.uri", os.environ["SPARK_EXECUTOR_URI"])

# setup mesos-based connection
conf = SparkConf().setMaster(os.environ["SPARK_MASTER"])

# optionally set memory limits
if os.environ.get("SPARK_RAM_DRIVER"):
    conf.set("spark.driver.memory", os.environ["SPARK_RAM_DRIVER"])
if os.environ.get("SPARK_RAM_WORKER"):
    conf.set("spark.executor_memory", os.environ["SPARK_RAM_WORKER"])

# set the UI port
conf.set("spark.ui.port", ui_get_available_port())

# optionally set the Spark binary
if os.environ.get("SPARK_BINARY"):
    conf.set("spark.executor.uri", os.environ["SPARK_BINARY"])

# establish config-based context
Developer: wenh123, Project: ipython-spark-docker, Lines of code: 31, Source file: shell.py


Example 9: f

from pyspark.sql import functions as F

import logging
#logging.config.fileConfig('dpa_logging.conf')
logger = logging.getLogger('dpa.pipeline.test')


if __name__ == "__main__":

    def f(x):
        x = random() * x
        return x

    sc = SparkContext(appName="PiPySpark")

    conf = SparkConf()

    print(conf.getAll())
    print(sc.version)
    print(sc)
    #print(sys.argv[1])
    #print(sys.argv[2])

    #sqlCtx = HiveContext(sc)

    print("Iniciando la tarea en spark")
    result = sc.parallelize(range(10000))\
               .map(f)\
               .reduce(add)

    print("{result} es nuestra cálculo".format(result=result))
Developer: radianv, Project: dpa, Lines of code: 31, Source file: suma.py


Example 10: getFileList

     '/home/dan/Desktop/IMN432-CW01/TF_IDF/',
     '/home/dan/Desktop/IMN432-CW01/Word_Freq/',
     '/home/dan/Desktop/IMN432-CW01/IDF/',
     '/home/dan/Desktop/IMN432-CW01/IDF/IDF-Pairs',
     '/home/dan/Desktop/IMN432-CW01/IDF',
     '/home/dan/Desktop/IMN432-CW01/TF_IDF/TF_IDF_File',
     '/home/dan/Desktop/IMN432-CW01/processXML/',
     '/home/dan/Desktop/IMN432-CW01/meta/',
     '/home/dan/Desktop/IMN432-CW01/TF_IDF',
     '/home/dan/Desktop/IMN432-CW01/processXML/Subject',
     '/home/dan/Spark_Files/Books/stopwords_en.txt']
 allFiles = getFileList(directory[0])
 # Find the Number of Files in the Directory
 numFiles = len(allFiles)
 # Create Spark Job Name and Configuration Settings
 config = SparkConf().setMaster("local[*]")
 config.set("spark.executor.memory", "5g")
 sc = SparkContext(conf=config, appName="ACKF415-Coursework-1")
 # Create a File Details List
 N = numFiles
 fileEbook = []
 print('################################################')
 print('###### Process Files > Word Freq to Pickle #####\n')
 # Start Timer
 WordFreq_Time = time()
 # Pickled Word Frequencies
 pickleWordF = getFileList(directory[2])
 # Ascertain if Section has already been completed
 if len(pickleWordF) < 1:
     print 'Creating Work Freq Pickles and RDDs \n'
     # Import the Stop and Save as a List
Developer: AkiraKane, Project: CityUniversity2014, Lines of code: 31, Source file: ackf415-Local-LR-Optimisation.py


Example 11: SparkConf

'''
Created on Oct 30, 2015

@author: dyerke
'''
from pyspark.context import SparkContext
from pyspark.conf import SparkConf

if __name__ == '__main__':
    m_hostname= "dyerke-Inspiron-7537"
    #
    conf= SparkConf()
    conf.setAppName("MyTestApp")
    conf.setMaster("spark://" + m_hostname + ":7077")
    conf.setSparkHome("/usr/local/spark")
    conf.set("spark.driver.host", m_hostname)
    logFile = "/usr/local/spark/README.md"  # Should be some file on your system
    #
    sc= SparkContext(conf=conf)
    logData= sc.textFile(logFile).cache()
    #
    countAs= logData.filter(lambda x: 'a' in x).count()
    countBs= logData.filter(lambda x: 'b' in x).count()
    #
    print("Lines with a: %i, lines with b: %i" % (countAs, countBs))
    sc.stop()
Developer: kkdyer, Project: dse_capstone, Lines of code: 26, Source file: MyFirstSparkApplicationMain.py


Example 12: printSparkConfigurations

 def printSparkConfigurations():
     c = SparkConf()
     print("Spark configurations: {}".format(c.getAll()))
Developer: wwken, Project: Misc_programs, Lines of code: 3, Source file: processTree.py


Example 13: main


#......... part of the code omitted here .........

    # {computeStatistic.id -> list[step_conf_tuple]}, where step_conf_tuple = (step_id, step_conf_dict)
    compute_prepares_config_active = dict(map(
        lambda computeStatistic_conf: (computeStatistic_conf[0],
                                       sorted(list_dict_merge(
                                           map(lambda step_conf: map_conf_properties(step_conf[1], 'step.id'),
                                               filter(
                                                   lambda step_conf: step_conf[1].get('step.enabled', False),
                                                   computeStatistic_conf[1].get('prepares.steps', {}).iteritems())
                                           )).iteritems())
        ), compute_computeStatistics_config_active))
    # print('= = ' * 30, compute_prepares_config_active2 == compute_prepares_config_active)

    print('= = ' * 20, type(compute_prepares_config_active), 'compute_prepares_config_active = ')
    pprint(compute_prepares_config_active)

    compute_computes_config_active = dict(map(
        lambda computeStatistic_conf: (computeStatistic_conf[0],
                                       sorted(list_dict_merge(
                                           map(lambda step_conf: map_conf_properties(step_conf[1], 'step.id'),
                                               filter(lambda step_conf: step_conf[1].get('step.enabled', False),
                                                      computeStatistic_conf[1].get('computes.steps', {}).iteritems())
                                           )).iteritems())
        ), compute_computeStatistics_config_active))
    print('= = ' * 20, type(compute_computes_config_active), 'compute_computes_config_active = ')
    pprint(compute_computes_config_active)

    test_flag = False
    if not test_flag:
        # Initialization
        # Test different serializers
        # serializer defaults to PickleSerializer()  # UnpicklingError: invalid load key, '{'.
        # serializer=MarshalSerializer()  # ValueError: bad marshal data
        # serializer=AutoSerializer()  # ValueError: invalid sevialization type: {
        # serializer=CompressedSerializer(PickleSerializer())  # error: Error -3 while decompressing data: incorrect header check

        # sc = SparkContext(master, app_name, sparkHome = spark_home, pyFiles=pyFiles)
        # sc = SparkContext(master, app_name, sparkHome = sparkHome, pyFiles=pyFiles, serializer=MarshalSerializer())
        # sc = SparkContext(master, app_name, sparkHome = sparkHome, pyFiles=pyFiles, serializer=AutoSerializer())
        # sc = SparkContext(master, app_name, sparkHome = sparkHome, pyFiles=pyFiles, serializer=CompressedSerializer(PickleSerializer()))

        spark_conf = SparkConf()
        spark_conf.setMaster(master).setAppName(app_name).setSparkHome(spark_home)

        # Spark Streaming tuning settings
        spark_streaming_blockInterval = str(app_conf.get('spark.streaming.blockInterval', '')).strip()
        if spark_streaming_blockInterval:
            spark_conf.set('spark.streaming.blockInterval', spark_streaming_blockInterval)

        spark_streaming_kafka_maxRatePerPartition = str(
            app_conf.get('spark.streaming.kafka.maxRatePerPartition', '')).strip()
        if spark_streaming_kafka_maxRatePerPartition:
            spark_conf.set('spark.streaming.kafka.maxRatePerPartition', spark_streaming_kafka_maxRatePerPartition)

        spark_streaming_receiver_maxRate = str(app_conf.get('spark.streaming.receiver.maxRate', '')).strip()
        if spark_streaming_receiver_maxRate:
            spark_conf.set('spark.streaming.receiver.maxRate', spark_streaming_receiver_maxRate)

        spark_streaming_concurrentJobs = str(app_conf.get('spark.streaming.concurrentJobs', '')).strip()
        if spark_streaming_concurrentJobs:
            spark_conf.set('spark.streaming.concurrentJobs', spark_streaming_concurrentJobs)

        # Spark SQL tuning settings
        spark_sql_shuffle_partitions = str(app_conf.get('spark.sql.shuffle.partitions', '')).strip()
        if spark_sql_shuffle_partitions:
            spark_conf.set('spark.sql.shuffle.partitions', spark_sql_shuffle_partitions)

        sc = SparkContext(conf=spark_conf)
        for path in (pyFiles or []):
            sc.addPyFile(path)

        # External cache optimization: distribute via broadcast
        cache_manager = CacheManager()
        cache_broadcast_list = \
            [(cache_id, cache_manager.cache_dataset(sc, cache_conf))
             for cache_id, cache_conf in cache_confs_with_ds_conf.iteritems()
             if cache_conf.get('broadcast.enabled', False)]

        for cache_id, cache_broadcast in cache_broadcast_list:
            cache_confs_with_ds_conf[cache_id]['broadcast'] = cache_broadcast

        batchDruationSeconds = app_conf['batchDuration.seconds']
        ssc = StreamingContext(sc, batchDruationSeconds)
        sqlc = SQLContext(sc)

        # Read the data source
        stream = StreamingReader.readSource(ssc, di_in_conf_with_ds_conf, app_conf)
        # Stream processing: 1) instantiate the handler class for the configured data interface, 2) call that instance's stream-processing method
        # Test kafka_wordcount
        # counts = stream.flatMap(lambda line: line.split(" ")) \
        # .map(lambda word: (word, 1)) \
        # .reduceByKey(lambda a, b: a+b)
        # counts.pprint()
        StreamingApp.process(
            stream, sc, sqlc,
            di_in_conf_with_ds_conf, di_out_confs_with_ds_conf, cache_confs_with_ds_conf,
            prepares_config_active_steps, compute_prepares_config_active, compute_computes_config_active)

        ssc.start()
        ssc.awaitTermination()
Developer: tsingfu, Project: xuetangx-streaming-app, Lines of code: 101, Source file: streaming_app_main.py
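
The tuning section above follows one simple pattern: copy each optional key from the application config onto the SparkConf only when it has a non-empty value. A compact sketch of that pattern, with illustrative keys and values:

from pyspark.conf import SparkConf

app_conf = {"spark.streaming.blockInterval": "200ms",   # illustrative values
            "spark.sql.shuffle.partitions": "64"}

tuning_keys = ["spark.streaming.blockInterval",
               "spark.streaming.kafka.maxRatePerPartition",
               "spark.streaming.receiver.maxRate",
               "spark.streaming.concurrentJobs",
               "spark.sql.shuffle.partitions"]

spark_conf = SparkConf()
for key in tuning_keys:
    value = str(app_conf.get(key, "")).strip()
    if value:
        spark_conf.set(key, value)

print(spark_conf.getAll())  # only the two keys with non-empty values were set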


Example 14: addPysparkPath

    # path for pyspark and py4j
    spark_pylib = os.path.join(spark_home, "python", "lib")
    py4jlib = [ziplib
               for ziplib in os.listdir(spark_pylib)
               if ziplib.startswith('py4j') and ziplib.endswith('.zip')][0]
    py4jlib = os.path.join(spark_pylib, py4jlib)
    sys.path.append(os.path.join(spark_home, "python"))
    sys.path.append(py4jlib)

addPysparkPath()

from pyspark.conf import SparkConf
from pyspark.context import SparkContext
from pyspark.storagelevel import StorageLevel

conf = SparkConf()
conf.setMaster('local[*]').setAppName('SparkLit test')
sc = SparkContext(conf=conf)
logger = sc._jvm.org.apache.log4j
logger.LogManager.getLogger("org"). setLevel( logger.Level.ERROR )
logger.LogManager.getLogger("akka").setLevel( logger.Level.ERROR )

import sparklit

suite = {}


def setUp():
    PATH = 'tests/grace_dubliners_james_joyce.txt'
    data = sc.textFile(PATH, 4)
    data.persist(StorageLevel.MEMORY_ONLY)
Developer: petrushev, Project: sparklit, Lines of code: 31, Source file: test.py


Example 15: _do_init

    def _do_init(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer,
                 conf, jsc, profiler_cls):
        self.environment = environment or {}
        # java gateway must have been launched at this point.
        if conf is not None and conf._jconf is not None:
            # conf has been initialized in JVM properly, so use conf directly. This represent the
            # scenario that JVM has been launched before SparkConf is created (e.g. SparkContext is
            # created and then stopped, and we create a new SparkConf and new SparkContext again)
            self._conf = conf
        else:
            self._conf = SparkConf(_jvm=SparkContext._jvm)
            if conf is not None:
                for k, v in conf.getAll():
                    self._conf.set(k, v)

        self._batchSize = batchSize  # -1 represents an unlimited batch size
        self._unbatched_serializer = serializer
        if batchSize == 0:
            self.serializer = AutoBatchedSerializer(self._unbatched_serializer)
        else:
            self.serializer = BatchedSerializer(self._unbatched_serializer,
                                                batchSize)

        # Set any parameters passed directly to us on the conf
        if master:
            self._conf.setMaster(master)
        if appName:
            self._conf.setAppName(appName)
        if sparkHome:
            self._conf.setSparkHome(sparkHome)
        if environment:
            for key, value in environment.items():
                self._conf.setExecutorEnv(key, value)
        for key, value in DEFAULT_CONFIGS.items():
            self._conf.setIfMissing(key, value)

        # Check that we have at least the required parameters
        if not self._conf.contains("spark.master"):
            raise Exception("A master URL must be set in your configuration")
        if not self._conf.contains("spark.app.name"):
            raise Exception("An application name must be set in your configuration")

        # Read back our properties from the conf in case we loaded some of them from
        # the classpath or an external config file
        self.master = self._conf.get("spark.master")
        self.appName = self._conf.get("spark.app.name")
        self.sparkHome = self._conf.get("spark.home", None)

        for (k, v) in self._conf.getAll():
            if k.startswith("spark.executorEnv."):
                varName = k[len("spark.executorEnv."):]
                self.environment[varName] = v

        self.environment["PYTHONHASHSEED"] = os.environ.get("PYTHONHASHSEED", "0")

        # Create the Java SparkContext through Py4J
        self._jsc = jsc or self._initialize_context(self._conf._jconf)
        # Reset the SparkConf to the one actually used by the SparkContext in JVM.
        self._conf = SparkConf(_jconf=self._jsc.sc().conf())

        # Create a single Accumulator in Java that we'll send all our updates through;
        # they will be passed back to us through a TCP server
        self._accumulatorServer = accumulators._start_update_server()
        (host, port) = self._accumulatorServer.server_address
        self._javaAccumulator = self._jvm.PythonAccumulatorV2(host, port)
        self._jsc.sc().register(self._javaAccumulator)

        self.pythonExec = os.environ.get("PYSPARK_PYTHON", 'python')
        self.pythonVer = "%d.%d" % sys.version_info[:2]

        # Broadcast's __reduce__ method stores Broadcast instances here.
        # This allows other code to determine which Broadcast instances have
        # been pickled, so it can determine which Java broadcast objects to
        # send.
        self._pickled_broadcast_vars = BroadcastPickleRegistry()

        SparkFiles._sc = self
        root_dir = SparkFiles.getRootDirectory()
        sys.path.insert(1, root_dir)

        # Deploy any code dependencies specified in the constructor
        self._python_includes = list()
        for path in (pyFiles or []):
            self.addPyFile(path)

        # Deploy code dependencies set by spark-submit; these will already have been added
        # with SparkContext.addFile, so we just need to add them to the PYTHONPATH
        for path in self._conf.get("spark.submit.pyFiles", "").split(","):
            if path != "":
                (dirname, filename) = os.path.split(path)
                if filename[-4:].lower() in self.PACKAGE_EXTENSIONS:
                    self._python_includes.append(filename)
                    sys.path.insert(1, os.path.join(SparkFiles.getRootDirectory(), filename))

        # Create a temporary directory inside spark.local.dir:
        local_dir = self._jvm.org.apache.spark.util.Utils.getLocalDir(self._jsc.sc().conf())
        self._temp_dir = \
            self._jvm.org.apache.spark.util.Utils.createTempDir(local_dir, "pyspark") \
                .getAbsolutePath()

#......... part of the code omitted here .........
Developer: AllenShi, Project: spark, Lines of code: 101, Source file: context.py
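
A small sketch of the precedence rules visible in _do_init above: explicitly set values win, setIfMissing() only fills gaps, and contains() is what the required-parameter checks rely on. Names and values here are illustrative:

from pyspark.conf import SparkConf

conf = SparkConf()
conf.setMaster("local[2]").setAppName("precedence-demo")
conf.setIfMissing("spark.app.name", "fallback-name")   # ignored: already set
conf.setIfMissing("spark.rdd.compress", "True")        # filled in: was missing

assert conf.contains("spark.master")
print(conf.get("spark.app.name"))       # prints: precedence-demo
print(conf.get("spark.rdd.compress"))   # prints: True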


Example 16: test_existing_spark_context_with_settings

 def test_existing_spark_context_with_settings(self):
     conf = SparkConf()
     conf.set("spark.cleaner.ttl", "10")
     self.sc = SparkContext(master=self.master, appName=self.appName, conf=conf)
     self.ssc = StreamingContext(sparkContext=self.sc, duration=self.batachDuration)
     self.assertEqual(int(self.ssc.sparkContext._conf.get("spark.cleaner.ttl")), 10)
Developer: giworld, Project: spark, Lines of code: 6, Source file: tests.py


Example 17: SparkConf

import datetime
from pytz import timezone
print "Last run @%s" % (datetime.datetime.now(timezone('US/Pacific')))


# In[2]:

from pyspark.context import SparkContext
print "Running Spark Version %s" % (sc.version)


# In[3]:

from pyspark.conf import SparkConf
conf = SparkConf()
print conf.toDebugString()


# In[4]:

# Read Orders
orders = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('NW/NW-Orders.csv')


# In[5]:

order_details = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('NW/NW-Order-Details.csv')


# In[6]:
Developer: xsankar, Project: fdps-v3, Lines of code: 30, Source file: Orders.py


Example 18: getConf

 def getConf(self):
     conf = SparkConf()
     conf.setAll(self._conf.getAll())
     return conf
Developer: AllenShi, Project: spark, Lines of code: 4, Source file: context.py
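
getConf() above hands out a copy of the context's configuration rather than the live object. A sketch of the same copy idiom in isolation (names and values are illustrative):

from pyspark.conf import SparkConf

original = SparkConf().setAppName("clone-demo").set("spark.ui.port", "4050")

copy = SparkConf()
copy.setAll(original.getAll())       # copy every key/value pair
copy.set("spark.ui.port", "4051")    # changing the copy...

print(original.get("spark.ui.port"))  # ...leaves the original at 4050
print(copy.get("spark.ui.port"))      # prints: 4051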


Example 19: SparkConf

    # return the first available port
    return port


# this is the deprecated equivalent of ADD_JARS
add_files = None
if os.environ.get("ADD_FILES") is not None:
    add_files = os.environ.get("ADD_FILES").split(",")

if os.environ.get("SPARK_EXECUTOR_URI"):
    SparkContext.setSystemProperty("spark.executor.uri", os.environ["SPARK_EXECUTOR_URI"])


# setup mesos-based connection
conf = SparkConf().setMaster(os.environ["SPARK_MASTER"])


# set the UI port
conf.set("spark.ui.port", ui_get_available_port())


# optionally set the Spark binary
if os.environ.get("SPARK_BINARY"):
    conf.set("spark.executor.uri", os.environ["SPARK_BINARY"])


# establish config-based context
sc = SparkContext(appName="DockerIPythonShell", pyFiles=add_files, conf=conf)
atexit.register(lambda: sc.stop())
Developer: codeaudit, Project: ipython-spark-docker, Lines of code: 29, Source file: shell.py


Example 20: SparkContext

class SparkContext(object):

    """
    Main entry point for Spark functionality. A SparkContext represents the
    connection to a Spark cluster, and can be used to create L{RDD} and
    broadcast variables on that cluster.
    """

    _gateway = None
    _jvm = None
    _next_accum_id = 0
    _active_spark_context = None
    _lock = RLock()
    _python_includes = None  # zip and egg files that need to be added to PYTHONPATH

    PACKAGE_EXTENSIONS = ('.zip', '.egg', '.jar')

    def __init__(self, master=None, appName=None, sparkHome=None, pyFiles=None,
                 environment=None, batchSize=0, serializer=PickleSerializer(), conf=None,
                 gateway=None, jsc=None, profiler_cls=BasicProfiler):
        """
        Create a new SparkContext. At least the master and app name should be set,
        either through the named parameters here or through C{conf}.

        :param master: Cluster URL to connect to
               (e.g. mesos://host:port, spark://host:port, local[4]).
        :param appName: A name for your job, to display on the cluster web UI.
        :param sparkHome: Location where Spark is installed on cluster nodes.
        :param pyFiles: Collection of .zip or .py files to send to the cluster
               and add to PYTHONPATH.  These can be paths on the local file
               system or HDFS, HTTP, HTTPS, or FTP URLs.
        :param environment: A dictionary of environment variables to set on
               worker nodes.
        :param batchSize: The number of Python objects represented as a single
               Java object. Set 1 to disable batching, 0 to automatically choose
               the batch size based on object sizes, or -1 to use an unlimited
               batch size
        :param serializer: The serializer for RDDs.
        :param conf: A L{SparkConf} object setting Spark properties.
        :param gateway: Use an existing gateway and JVM, otherwise a new JVM
               will be instantiated.
        :param jsc: The JavaSparkContext instance (optional).
        :param profiler_cls: A class of custom Profiler used to do profiling
               (default is pyspark.profiler.BasicProfiler).


        >>> from pyspark.context import SparkContext
        >>> sc = SparkContext('local', 'test')

        >>> sc2 = SparkContext('local', 'test2') # doctest: +IGNORE_EXCEPTION_DETAIL
        Traceback (most recent call last):
            ...
        ValueError:...
        """
        self._callsite = first_spark_call() or CallSite(None, None, None)
        SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
        try:
            self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,
                          conf, jsc, profiler_cls)
        except:
            # If an error occurs, clean up in order to allow future SparkContext creation:
            self.stop()
            raise

    def _do_init(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer,
                 conf, jsc, profiler_cls):
        self.environment = environment or {}
        # java gateway must have been launched at this point.
        if conf is not None and conf._jconf is not None:
            # conf has been initialized in JVM properly, so use conf directly. This represent the
            # scenario that JVM has been launched before SparkConf is created (e.g. SparkContext is
            # created and then stopped, and we create a new SparkConf and new SparkContext again)
            self._conf = conf
        else:
            self._conf = SparkConf(_jvm=SparkContext._jvm)
            if conf is not None:
                for k, v in conf.getAll():
                    self._conf.set(k, v)

        self._batchSize = batchSize  # -1 represents an unlimited batch size
        self._unbatched_serializer = serializer
        if batchSize == 0:
            self.serializer = AutoBatchedSerializer(self._unbatched_serializer)
        else:
            self.serializer = BatchedSerializer(self._unbatched_serializer,
                                                batchSize)

        # Set any parameters passed directly to us on the conf
        if master:
            self._conf.setMaster(master)
        if appName:
            self._conf.setAppName(appName)
        if sparkHome:
            self._conf.setSparkHome(sparkHome)
        if environment:
            for key, value in environment.items():
                self._conf.setExecutorEnv(key, value)
        for key, value in DEFAULT_CONFIGS.items():
            self._conf.setIfMissing(key, value)

#......... part of the code omitted here .........
Developer: AllenShi, Project: spark, Lines of code: 101, Source file: context.py



Note: The pyspark.conf.SparkConf class examples in this article were compiled by 纯净天空 from GitHub, MSDocs, and other source-code and documentation platforms. The code snippets were selected from open-source projects contributed by many developers; copyright remains with the original authors, and distribution or use should follow the corresponding project's license. Do not reproduce without permission.

