Python setup_util.get_fwroot Function Code Examples


This article collects typical usage examples of the Python function setup.linux.setup_util.get_fwroot. If you are wondering how get_fwroot is used, how to call it, or what real-world examples of it look like, the curated code examples below may help.



The following presents 9 code examples of the get_fwroot function, sorted by popularity by default.
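
get_fwroot itself is never shown on this page. For orientation, here is a minimal sketch of what such a helper might look like, assuming it resolves the framework root from an FWROOT environment variable and falls back to the current working directory; the actual implementation in the FrameworkBenchmarks toolset may differ.

import os

def get_fwroot():
    # Sketch (assumption): prefer the FWROOT environment variable,
    # otherwise fall back to the current working directory.
    return os.environ.get('FWROOT', os.getcwd())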

Example 1: gather_langauges

# Module-level imports assumed by this snippet
import glob
import os

def gather_langauges():
    '''
    Gathers all the known languages in the suite via the folder names
    beneath FWROOT.
    '''
    # Avoid setting up a circular import
    from setup.linux import setup_util

    lang_dir = os.path.join(setup_util.get_fwroot(), "frameworks")
    langs = []
    for dir in glob.glob(os.path.join(lang_dir, "*")):
        langs.append(dir.replace(lang_dir,"")[1:])
    return langs
Author: abastardi | Project: FrameworkBenchmarks | Lines: 13 | Source: utils.py
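
A hypothetical use of the function above; the language names in the comment are purely illustrative and depend on the checkout.

# Hypothetical usage: list every language folder under $FWROOT/frameworks
langs = gather_langauges()
print(langs)  # e.g. ['Go', 'Java', 'Python', ...]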


Example 2: gather_tests

# Module-level imports assumed by this snippet (Python 2 codebase)
import glob
import json
import os
import ConfigParser
from ast import literal_eval

def gather_tests(include = [], exclude=[], benchmarker=None):
  '''
  Given test names as strings, returns a list of FrameworkTest objects. 
  For example, 'aspnet-mysql-raw' turns into a FrameworkTest object with
  variables for checking the test directory, the test database os, and 
  other useful items. 

  With no arguments, every test in this framework will be returned.  
  With include, only tests with this exact name will be returned. 
  With exclude, all tests but those excluded will be returned. 

  A benchmarker is needed to construct full FrameworkTest objects. If
  one is not provided, a default Benchmarker will be created. 
  '''

  # Avoid setting up a circular import
  from benchmark import framework_test
  from benchmark.benchmarker import Benchmarker
  from setup.linux import setup_util

  # Help callers out a bit
  if include is None:
    include = []
  if exclude is None:
    exclude = []
  
  # Setup default Benchmarker using example configuration
  if benchmarker is None:
    print "Creating Benchmarker from benchmark.cfg.example"
    default_config = setup_util.get_fwroot() + "/benchmark.cfg.example"
    config = ConfigParser.SafeConfigParser()
    config.readfp(open(default_config))
    defaults = dict(config.items("Defaults"))
    
    # Convert strings into proper python types
    for k,v in defaults.iteritems():
      try:
        defaults[k] = literal_eval(v)
      except Exception:
        pass

    # Ensure we only run the __init__ method of Benchmarker
    defaults['install'] = None
    
    benchmarker = Benchmarker(defaults)

  
  # Search in both old and new directories
  fwroot = setup_util.get_fwroot() 
  config_files = glob.glob("%s/*/benchmark_config" % fwroot) 
  config_files.extend(glob.glob("%s/frameworks/*/*/benchmark_config" % fwroot))
  
  tests = []
  for config_file_name in config_files:
    config = None
    with open(config_file_name, 'r') as config_file:
      try:
        config = json.load(config_file)
      except ValueError:
        # User-friendly errors
        print("Error loading '%s'." % config_file_name)
        raise

    # Find all tests in the config file
    config_tests = framework_test.parse_config(config, 
      os.path.dirname(config_file_name), benchmarker)
    
    # Filter
    for test in config_tests:
      if test.name in exclude:
        continue
      elif len(include) == 0 or test.name in include:
        tests.append(test)

  tests.sort(key=lambda x: x.name)
  return tests
Author: cuijiaxing | Project: FrameworkBenchmarks | Lines: 76 | Source: utils.py
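
A hypothetical call to the function above, using an illustrative test name; with no benchmarker argument it builds a default one from benchmark.cfg.example exactly as the snippet describes.

# Hypothetical usage: resolve one named test into FrameworkTest objects
tests = gather_tests(include=['aspnet-mysql-raw'])
for test in tests:
    print(test.name)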


Example 3: __init__


#......... (part of the code omitted here) .........
      # does the automerge, this is the second parent of FETCH_HEAD, and 
      # therefore we use FETCH_HEAD^2 below
      #
      # This may not work perfectly in situations where the user had advanced 
      # merging happening in their PR. We correctly handle them merging in 
      # from upstream, but if they do wild stuff then this will likely break
      # on that. However, it will also likely break by seeing a change in 
      # toolset and triggering a full run when a partial run would be 
      # acceptable
      #
      # ==== CURRENT SOLUTION FOR OWNED BRANCHES (e.g. master) ====
      #
      # This one is fairly simple. Find the commit or commit range, and 
      # examine the log of files changes. If you encounter any merges, 
      # then fully explode the two parent commits that made the merge
      # and look for the files changed there. This is an aggressive 
      # strategy to ensure that commits to master are always tested 
      # well
      log.debug("TRAVIS_COMMIT_RANGE: %s", os.environ['TRAVIS_COMMIT_RANGE'])
      log.debug("TRAVIS_COMMIT      : %s", os.environ['TRAVIS_COMMIT'])

      is_PR = (os.environ['TRAVIS_PULL_REQUEST'] != "false")
      if is_PR:
        log.debug('I am testing a pull request')
        first_commit = os.environ['TRAVIS_COMMIT_RANGE'].split('...')[0]
        last_commit = subprocess.check_output("git rev-list -n 1 FETCH_HEAD^2", shell=True).rstrip('\n')
        log.debug("Guessing that first commit in PR is : %s", first_commit)
        log.debug("Guessing that final commit in PR is : %s", last_commit)

        if first_commit == "":
          # Travis-CI is not yet passing a commit range for pull requests
          # so we must use the automerge's changed file list. This has the 
          # negative effect that new pushes to the PR will immediately 
          # start affecting any new jobs, regardless of the build they are on
          log.debug("No first commit, using Github's automerge commit")
          self.commit_range = "--first-parent -1 -m FETCH_HEAD"
        elif first_commit == last_commit:
          # There is only one commit in the pull request so far, 
          # or Travis-CI is not yet passing the commit range properly 
          # for pull requests. We examine just the one commit using -1
          #
          # On the oddball chance that it's a merge commit, we pray  
          # it's a merge from upstream and also pass --first-parent 
          log.debug("Only one commit in range, examining %s", last_commit)
          self.commit_range = "-m --first-parent -1 %s" % last_commit
        else: 
          # In case they merged in upstream, we only care about the first 
          # parent. For crazier merges, we hope
          self.commit_range = "--first-parent %s...%s" % (first_commit, last_commit)

      if not is_PR:
        log.debug('I am not testing a pull request')
        # If more than one commit was pushed, examine everything including 
        # all details on all merges
        self.commit_range = "-m %s" % os.environ['TRAVIS_COMMIT_RANGE']
        
        # If only one commit was pushed, examine that one. If it was a 
        # merge be sure to show all details
        if os.environ['TRAVIS_COMMIT_RANGE'] == "":
          self.commit_range = "-m -1 %s" % os.environ['TRAVIS_COMMIT']

    except KeyError:
      log.warning("I should only be used for automated integration tests e.g. Travis-CI")
      log.warning("Were you looking for run-tests.py?")
      self.commit_range = "-m HEAD^...HEAD"

    #
    # Find the one test from benchmark_config that we are going to run
    #

    tests = gather_tests()
    self.fwroot = setup_util.get_fwroot()
    target_dir = self.fwroot + '/frameworks/' + testdir
    log.debug("Target directory is %s", target_dir)
    dirtests = [t for t in tests if t.directory == target_dir]
    
    # Travis-CI is linux only
    osvalidtests = [t for t in dirtests if t.os.lower() == "linux"
                  and (t.database_os.lower() == "linux" or t.database_os.lower() == "none")]
    
    # Our Travis-CI only has some databases supported
    validtests = [t for t in osvalidtests if t.database.lower() == "mysql"
                  or t.database.lower() == "postgres"
                  or t.database.lower() == "mongodb"
                  or t.database.lower() == "none"]
    log.info("Found %s usable tests (%s valid for linux, %s valid for linux and {mysql,postgres,mongodb,none}) in directory '%s'", 
      len(dirtests), len(osvalidtests), len(validtests), '$FWROOT/frameworks/' + testdir)
    if len(validtests) == 0:
      log.critical("Found no test that is possible to run in Travis-CI! Aborting!")
      if len(osvalidtests) != 0:
        log.critical("Note: Found these tests that could run in Travis-CI if more databases were supported")
        log.critical("Note: %s", osvalidtests)
        databases_needed = [t.database for t in osvalidtests]
        databases_needed = list(set(databases_needed))
        log.critical("Note: Here are the needed databases:")
        log.critical("Note: %s", databases_needed)
      sys.exit(1)

    self.names = [t.name for t in validtests]
    log.info("Using tests %s to verify directory %s", self.names, '$FWROOT/frameworks/' + testdir)
Author: mweibel | Project: FrameworkBenchmarks | Lines: 101 | Source: run-ci.py
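
The omitted part of this snippet presumably feeds self.commit_range into git to discover which files the build touched. A sketch of that idea, assuming the option string is handed directly to git log, is shown below; the helper name changed_files is hypothetical.

import subprocess

def changed_files(commit_range):
    # Sketch (assumption): list the files touched in the selected commit
    # range by passing the pre-built option string straight to git log.
    cmd = 'git log --name-only --pretty=format: %s' % commit_range
    out = subprocess.check_output(cmd, shell=True)
    return [line for line in out.splitlines() if line.strip()]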


Example 4: gather_tests

def gather_tests(include = [], exclude=[], benchmarker=None):
    '''
    Given test names as strings, returns a list of FrameworkTest objects.
    For example, 'aspnet-mysql-raw' turns into a FrameworkTest object with
    variables for checking the test directory, the test database os, and
    other useful items.

    With no arguments, every test in this framework will be returned.
    With include, only tests with this exact name will be returned.
    With exclude, all tests but those excluded will be returned.

    A benchmarker is needed to construct full FrameworkTest objects. If
    one is not provided, a default Benchmarker will be created.
    '''

    # Avoid setting up a circular import
    from benchmark import framework_test
    from benchmark.benchmarker import Benchmarker
    from setup.linux import setup_util

    # Help callers out a bit
    if include is None:
        include = []
    if exclude is None:
        exclude = []

    # Old, hacky method to exclude all tests was to
    # request a test known to not exist, such as ''.
    # If test '' was requested, short-circuit and return
    # nothing immediately
    if len(include) == 1 and '' in include:
        return []

    # Setup default Benchmarker using example configuration
    if benchmarker is None:
        default_config = setup_util.get_fwroot() + "/benchmark.cfg"
        config = ConfigParser.SafeConfigParser()
        config.readfp(open(default_config))
        defaults = dict(config.items("Defaults"))

        # Convert strings into proper python types
        for k,v in defaults.iteritems():
            try:
                defaults[k] = literal_eval(v)
            except Exception:
                pass

        # Ensure we only run the __init__ method of Benchmarker
        defaults['install'] = None
        defaults['results_name'] = "(unspecified, datetime = %Y-%m-%d %H:%M:%S)"
        defaults['results_environment'] = "My Server Environment"
        defaults['test_dir'] = None
        defaults['quiet'] = True

        benchmarker = Benchmarker(defaults)


    # Search for configuration files
    fwroot = setup_util.get_fwroot()
    config_files = []
    if benchmarker.test_dir:
        for test_dir in benchmarker.test_dir:
            dir_config_files = glob.glob("{!s}/frameworks/{!s}/benchmark_config.json".format(fwroot, test_dir))
            if len(dir_config_files):
                config_files.extend(dir_config_files)
            else:
                raise Exception("Unable to locate tests in test-dir: {!s}".format(test_dir))
    else:
        config_files.extend(glob.glob("{!s}/frameworks/*/*/benchmark_config.json".format(fwroot)))

    tests = []
    for config_file_name in config_files:
        config = None
        with open(config_file_name, 'r') as config_file:
            try:
                config = json.load(config_file)
            except ValueError:
                # User-friendly errors
                print("Error loading '{!s}'.".format(config_file_name))
                raise

        # Find all tests in the config file
        config_tests = framework_test.parse_config(config,
                                                   os.path.dirname(config_file_name), benchmarker)

        # Filter
        for test in config_tests:
            if len(include) == 0 and len(exclude) == 0:
                # No filters, we are running everything
                tests.append(test)
            elif test.name in exclude:
                continue
            elif test.name in include:
                tests.append(test)
            else:
                # An include list exists, but this test is
                # not listed there, so we ignore it
                pass

    # Ensure we were able to locate everything that was
#......... (part of the code omitted here) .........
Author: abastardi | Project: FrameworkBenchmarks | Lines: 101 | Source: utils.py


Example 5: __init__

    def __init__(self, args):

        # Map type strings to their objects
        types = dict()
        types['json'] = JsonTestType()
        types['db'] = DBTestType()
        types['query'] = QueryTestType()
        types['fortune'] = FortuneTestType()
        types['update'] = UpdateTestType()
        types['plaintext'] = PlaintextTestType()

        # Turn type into a map instead of a string
        if args['type'] == 'all':
            args['types'] = types
        else:
            args['types'] = { args['type'] : types[args['type']] }
        del args['type']


        args['max_threads'] = args['threads']
        args['max_concurrency'] = max(args['concurrency_levels'])

        self.__dict__.update(args)
        # pprint(self.__dict__)

        self.quiet_out = QuietOutputStream(self.quiet)

        self.start_time = time.time()
        self.run_test_timeout_seconds = 7200

        # setup logging
        logging.basicConfig(stream=self.quiet_out, level=logging.INFO)

        # setup some additional variables
        if self.database_user == None: self.database_user = self.client_user
        if self.database_host == None: self.database_host = self.client_host
        if self.database_identity_file == None: self.database_identity_file = self.client_identity_file

        # Remember root directory
        self.fwroot = setup_util.get_fwroot()

        # setup current_benchmark.txt location
        self.current_benchmark = "/tmp/current_benchmark.txt"

        if hasattr(self, 'parse') and self.parse != None:
            self.timestamp = self.parse
        else:
            self.timestamp = time.strftime("%Y%m%d%H%M%S", time.localtime())

        # setup results and latest_results directories
        self.result_directory = os.path.join(self.fwroot, "results")
        if (args['clean'] or args['clean_all']) and os.path.exists(os.path.join(self.fwroot, "results")):
            shutil.rmtree(os.path.join(self.fwroot, "results"))

        # remove installs directories if --clean-all provided
        self.install_root = "%s/%s" % (self.fwroot, "installs")
        if args['clean_all']:
            os.system("sudo rm -rf " + self.install_root)
            os.mkdir(self.install_root)

        self.results = None
        try:
            with open(os.path.join(self.full_results_directory(), 'results.json'), 'r') as f:
                #Load json file into results object
                self.results = json.load(f)
        except IOError:
            logging.warn("results.json for test not found.")

        if self.results == None:
            self.results = dict()
            self.results['uuid'] = str(uuid.uuid4())
            self.results['name'] = datetime.now().strftime(self.results_name)
            self.results['environmentDescription'] = self.results_environment
            self.results['startTime'] = int(round(time.time() * 1000))
            self.results['completionTime'] = None
            self.results['concurrencyLevels'] = self.concurrency_levels
            self.results['queryIntervals'] = self.query_levels
            self.results['frameworks'] = [t.name for t in self.__gather_tests]
            self.results['duration'] = self.duration
            self.results['rawData'] = dict()
            self.results['rawData']['json'] = dict()
            self.results['rawData']['db'] = dict()
            self.results['rawData']['query'] = dict()
            self.results['rawData']['fortune'] = dict()
            self.results['rawData']['update'] = dict()
            self.results['rawData']['plaintext'] = dict()
            self.results['completed'] = dict()
            self.results['succeeded'] = dict()
            self.results['succeeded']['json'] = []
            self.results['succeeded']['db'] = []
            self.results['succeeded']['query'] = []
            self.results['succeeded']['fortune'] = []
            self.results['succeeded']['update'] = []
            self.results['succeeded']['plaintext'] = []
            self.results['failed'] = dict()
            self.results['failed']['json'] = []
            self.results['failed']['db'] = []
            self.results['failed']['query'] = []
            self.results['failed']['fortune'] = []
            self.results['failed']['update'] = []
#......... (part of the code omitted here) .........
Author: kbrock | Project: FrameworkBenchmarks | Lines: 101 | Source: benchmarker.py


Example 6: main

def main(argv=None):
    ''' Runs the program. There are three ways to pass arguments 
    1) environment variables TFB_*
    2) configuration file benchmark.cfg
    3) command line flags
    In terms of precedence, 3 > 2 > 1, so config file trumps environment variables
    but command line flags have the final say
    '''
    # Do argv default this way, as doing it in the function declaration sets it at compile time
    if argv is None:
        argv = sys.argv

    # Enable unbuffered output so messages will appear in the proper order with subprocess output.
    sys.stdout=Unbuffered(sys.stdout)

    # Update python environment
    # 1) Ensure the current directory (which should be the benchmark home directory) is in the path so that the tests can be imported.
    sys.path.append('.')
    # 2) Ensure toolset/setup/linux is in the path so that the tests can "import setup_util".
    sys.path.append('toolset/setup/linux')

    # Update environment for shell scripts
    fwroot = setup_util.get_fwroot()
    if not fwroot: 
        fwroot = os.getcwd()
    setup_util.replace_environ(config='config/benchmark_profile', root=fwroot)
    print "FWROOT is %s"%setup_util.get_fwroot()

    conf_parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter,
        add_help=False)
    conf_parser.add_argument('--conf_file', default='benchmark.cfg', metavar='FILE', help='Optional configuration file to provide argument defaults. All config options can be overridden using the command line.')
    args, remaining_argv = conf_parser.parse_known_args()

    try:
        with open (args.conf_file):
            config = ConfigParser.SafeConfigParser()
            config.read([os.getcwd() + '/' + args.conf_file])
            defaults = dict(config.items("Defaults"))
            # Convert strings into proper python types
            for k,v in defaults.iteritems():
                try:
                    defaults[k] = literal_eval(v)
                except Exception:
                    pass
    except IOError:
        if args.conf_file != 'benchmark.cfg':
            print 'Configuration file not found!'
        defaults = { "client-host":"localhost"}

    ##########################################################
    # Set up default values
    ##########################################################        
    serverHost = os.environ.get('TFB_SERVER_HOST')
    clientHost = os.environ.get('TFB_CLIENT_HOST')
    clientUser = os.environ.get('TFB_CLIENT_USER')
    clientIden = os.environ.get('TFB_CLIENT_IDENTITY_FILE')
    databaHost = os.getenv('TFB_DATABASE_HOST', clientHost)
    databaUser = os.getenv('TFB_DATABASE_USER', clientUser)
    dbIdenFile = os.getenv('TFB_DATABASE_IDENTITY_FILE', clientIden)
    maxThreads = 8
    try:
        maxThreads = multiprocessing.cpu_count()
    except Exception:
        pass

    ##########################################################
    # Set up argument parser
    ##########################################################
    parser = argparse.ArgumentParser(description="Install or run the Framework Benchmarks test suite.",
        parents=[conf_parser],
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
        epilog='''If an argument includes (type int-sequence), then it accepts integer lists in multiple forms. 
        Using a single number e.g. 5 will create a list [5]. Using commas will create a list containing those 
        values e.g. 1,3,6 creates [1, 3, 6]. Using three colon-separated numbers of start:step:end will create a 
        list, using the semantics of python's range function, e.g. 1:3:15 creates [1, 4, 7, 10, 13] while 
        0:1:5 creates [0, 1, 2, 3, 4]
        ''')

    # SSH options
    parser.add_argument('-s', '--server-host', default=serverHost, help='The application server.')
    parser.add_argument('-c', '--client-host', default=clientHost, help='The client / load generation server.')
    parser.add_argument('-u', '--client-user', default=clientUser, help='The username to use for SSH to the client instance.')
    parser.add_argument('-i', '--client-identity-file', dest='client_identity_file', default=clientIden,
                        help='The key to use for SSH to the client instance.')
    parser.add_argument('-d', '--database-host', default=databaHost,
                        help='The database server.  If not provided, defaults to the value of --client-host.')
    parser.add_argument('--database-user', default=databaUser,
                        help='The username to use for SSH to the database instance.  If not provided, defaults to the value of --client-user.')
    parser.add_argument('--database-identity-file', default=dbIdenFile, dest='database_identity_file',
                        help='The key to use for SSH to the database instance.  If not provided, defaults to the value of --client-identity-file.')
    parser.add_argument('-p', dest='password_prompt', action='store_true', help='Prompt for password')
    
    
    # Install options
    parser.add_argument('--install', choices=['client', 'database', 'server', 'all'], default=None,
                        help='Runs installation script(s) before continuing on to execute the tests.')
    parser.add_argument('--install-error-action', choices=['abort', 'continue'], default='continue', help='action to take in case of error during installation')
    parser.add_argument('--install-strategy', choices=['unified', 'pertest'], default='unified', 
#......... (part of the code omitted here) .........
Author: mnjstwins | Project: FrameworkBenchmarks | Lines: 101 | Source: run-tests.py
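
The epilog above documents the int-sequence argument forms ('5', '1,3,6', '1:3:15'). A small parser matching that description might look like the sketch below; parse_int_sequence is a hypothetical helper, not the toolset's actual implementation.

def parse_int_sequence(value):
    # Sketch (assumption): '5' -> [5]; '1,3,6' -> [1, 3, 6];
    # 'start:step:end' uses range semantics, e.g. '1:3:15' -> [1, 4, 7, 10, 13]
    if ':' in value:
        start, step, end = [int(v) for v in value.split(':')]
        return list(range(start, end, step))
    if ',' in value:
        return [int(v) for v in value.split(',')]
    return [int(value)]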


Example 7: main

def main(argv=None):
    ''' Runs the program. There are three ways to pass arguments
    1) environment variables TFB_*
    2) configuration file benchmark.cfg
    3) command line flags
    In terms of precedence, 3 > 2 > 1, so config file trumps environment variables
    but command line flags have the final say
    '''
    # Do argv default this way, as doing it in the function declaration sets it at compile time
    if argv is None:
        argv = sys.argv

    # Enable unbuffered output so messages will appear in the proper order with subprocess output.
    sys.stdout=Unbuffered(sys.stdout)

    # Update python environment
    # 1) Ensure the current directory (which should be the benchmark home directory) is in the path so that the tests can be imported.
    sys.path.append('.')
    # 2) Ensure toolset/setup/linux is in the path so that the tests can "import setup_util".
    sys.path.append('toolset/setup/linux')

    # Update environment for shell scripts
    os.environ['FWROOT'] = setup_util.get_fwroot()
    os.environ['IROOT'] = os.environ['FWROOT'] + '/installs'
    # 'Ubuntu', '14.04', 'trusty' respectively
    os.environ['TFB_DISTRIB_ID'], os.environ['TFB_DISTRIB_RELEASE'], os.environ['TFB_DISTRIB_CODENAME'] = platform.linux_distribution()
    # App server cpu count
    os.environ['CPU_COUNT'] = str(multiprocessing.cpu_count())

    print("FWROOT is {!s}.".format(os.environ['FWROOT']))

    conf_parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter,
        add_help=False)
    conf_parser.add_argument(
        '--conf_file', default='benchmark.cfg', metavar='FILE',
        help='Optional configuration file to provide argument defaults. All config options can be overridden using the command line.')
    args, remaining_argv = conf_parser.parse_known_args()

    defaults = {}
    try:
        if not os.path.exists(os.path.join(os.environ['FWROOT'], args.conf_file)) and not os.path.exists(os.path.join(os.environ['FWROOT'], 'benchmark.cfg')):
            print("No config file found. Aborting!")
            exit(1)
        with open (os.path.join(os.environ['FWROOT'], args.conf_file)):
            config = ConfigParser.SafeConfigParser()
            config.read([os.path.join(os.environ['FWROOT'], args.conf_file)])
            defaults.update(dict(config.items("Defaults")))
            # Convert strings into proper python types
            for k, v in defaults.iteritems():
                try:
                    defaults[k] = literal_eval(v)
                except Exception:
                    pass
    except IOError:
        print("Configuration file not found!")
        exit(1)

    ##########################################################
    # Set up default values
    ##########################################################

    # Verify and massage options
    if defaults['client_user'] is None or defaults['client_host'] is None:
        print("client_user and client_host are required!")
        print("Please check your configuration file.")
        print("Aborting!")
        exit(1)

    if defaults['database_user'] is None:
        defaults['database_user'] = defaults['client_user']
    if defaults['database_host'] is None:
        defaults['database_host'] = defaults['client_host']
    if defaults['server_host'] is None:
        defaults['server_host'] = defaults['client_host']
    if defaults['ulimit'] is None:
        defaults['ulimit'] = 200000

    os.environ['ULIMIT'] = str(defaults['ulimit'])

    ##########################################################
    # Set up argument parser
    ##########################################################
    parser = argparse.ArgumentParser(description="Install or run the Framework Benchmarks test suite.",
                                     parents=[conf_parser],
                                     formatter_class=argparse.ArgumentDefaultsHelpFormatter,
                                     epilog='''If an argument includes (type int-sequence), then it accepts integer lists in multiple forms.
        Using a single number e.g. 5 will create a list [5]. Using commas will create a list containing those
        values e.g. 1,3,6 creates [1, 3, 6]. Using three colon-separated numbers of start:step:end will create a
        list, using the semantics of python's range function, e.g. 1:3:15 creates [1, 4, 7, 10, 13] while
        0:1:5 creates [0, 1, 2, 3, 4]
        ''')

    # Install options
    parser.add_argument('--clean', action='store_true', default=False, help='Removes the results directory')
    parser.add_argument('--clean-all', action='store_true', dest='clean_all', default=False, help='Removes the results and installs directories')

    # Test options
    parser.add_argument('--test', nargs='+', help='names of tests to run')
#......... (part of the code omitted here) .........
Author: Jesterovskiy | Project: FrameworkBenchmarks | Lines: 101 | Source: run-tests.py
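
Both main() variants implement the precedence described in their docstrings (command-line flags > config file > environment defaults). The usual argparse idiom for that, sketched here on the assumption that the omitted code follows it, is to promote the config values to parser defaults and let explicit flags override them.

# Sketch (assumption): config-file values become parser defaults, and any
# explicit flags left in remaining_argv override them.
parser.set_defaults(**defaults)
args = parser.parse_args(remaining_argv)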


Example 8: open

          for line in out:
            log.info(line.rstrip('\n'))
      except IOError:
        log.error("No OUT file found")

    log.error("Running inside Travis-CI, so I will print a copy of the verification summary")

    results = None
    try:
      with open('results/ec2/latest/results.json', 'r') as f:
        results = json.load(f)
    except IOError:
      log.critical("No results.json found, unable to print verification summary") 
      sys.exit(retcode)

    target_dir = setup_util.get_fwroot() + '/frameworks/' + testdir
    dirtests = [t for t in gather_tests() if t.directory == target_dir]

    # Normally you don't have to use Fore.* before each line, but 
    # Travis-CI seems to reset color codes on newline (see travis-ci/travis-ci#2692)
    # or stream flush, so we have to ensure that the color code is printed repeatedly
    prefix = Fore.CYAN
    for line in header("Verification Summary", top='=', bottom='').split('\n'):
      print prefix + line

    for test in dirtests:
      print prefix + "| Test: %s" % test.name
      if test.name not in runner.names:
        print prefix + "|      " + Fore.YELLOW + "Unable to verify in Travis-CI"
      elif test.name in results['verify'].keys():
        for test_type, result in results['verify'][test.name].iteritems():
Author: luizmineo | Project: FrameworkBenchmarks | Lines: 31 | Source: run-ci.py


Example 9: __init__

    def __init__(self, args):

        # Map type strings to their objects
        types = dict()
        types["json"] = JsonTestType()
        types["db"] = DBTestType()
        types["query"] = QueryTestType()
        types["fortune"] = FortuneTestType()
        types["update"] = UpdateTestType()
        types["plaintext"] = PlaintextTestType()

        # Turn type into a map instead of a string
        if args["type"] == "all":
            args["types"] = types
        else:
            args["types"] = {args["type"]: types[args["type"]]}
        del args["type"]

        args["max_threads"] = args["threads"]
        args["max_concurrency"] = max(args["concurrency_levels"])

        self.__dict__.update(args)
        # pprint(self.__dict__)

        self.start_time = time.time()
        self.run_test_timeout_seconds = 7200

        # setup logging
        logging.basicConfig(stream=sys.stderr, level=logging.INFO)

        # setup some additional variables
        if self.database_user == None:
            self.database_user = self.client_user
        if self.database_host == None:
            self.database_host = self.client_host
        if self.database_identity_file == None:
            self.database_identity_file = self.client_identity_file

        # Remember root directory
        self.fwroot = setup_util.get_fwroot()

        # setup results and latest_results directories
        self.result_directory = os.path.join("results", self.name)
        if args["clean"] or args["clean_all"]:
            shutil.rmtree(os.path.join(self.fwroot, "results"))
        self.latest_results_directory = self.latest_results_directory()

        # remove installs directories if --clean-all provided
        self.install_root = "%s/%s" % (self.fwroot, "installs")
        if args["clean_all"]:
            os.system("rm -rf " + self.install_root)
            os.mkdir(self.install_root)

        if hasattr(self, "parse") and self.parse != None:
            self.timestamp = self.parse
        else:
            self.timestamp = time.strftime("%Y%m%d%H%M%S", time.localtime())

        self.results = None
        try:
            with open(os.path.join(self.latest_results_directory, "results.json"), "r") as f:
                # Load json file into results object
                self.results = json.load(f)
        except IOError:
            logging.warn("results.json for test %s not found.", self.name)

        if self.results == None:
            self.results = dict()
            self.results["name"] = self.name
            self.results["concurrencyLevels"] = self.concurrency_levels
            self.results["queryIntervals"] = self.query_levels
            self.results["frameworks"] = [t.name for t in self.__gather_tests]
            self.results["duration"] = self.duration
            self.results["rawData"] = dict()
            self.results["rawData"]["json"] = dict()
            self.results["rawData"]["db"] = dict()
            self.results["rawData"]["query"] = dict()
            self.results["rawData"]["fortune"] = dict()
            self.results["rawData"]["update"] = dict()
            self.results["rawData"]["plaintext"] = dict()
            self.results["completed"] = dict()
            self.results["succeeded"] = dict()
            self.results["succeeded"]["json"] = []
            self.results["succeeded"]["db"] = []
            self.results["succeeded"]["query"] = []
            self.results["succeeded"]["fortune"] = []
            self.results["succeeded"]["update"] = []
            self.results["succeeded"]["plaintext"] = []
            self.results["failed"] = dict()
            self.results["failed"]["json"] = []
            self.results["failed"]["db"] = []
            self.results["failed"]["query"] = []
            self.results["failed"]["fortune"] = []
            self.results["failed"]["update"] = []
            self.results["failed"]["plaintext"] = []
            self.results["verify"] = dict()
        else:
            # for x in self.__gather_tests():
            #  if x.name not in self.results['frameworks']:
            #    self.results['frameworks'] = self.results['frameworks'] + [x.name]
#......... (part of the code omitted here) .........
Author: fredrikwidlund | Project: FrameworkBenchmarks | Lines: 101 | Source: benchmarker.py



Note: The setup.linux.setup_util.get_fwroot examples in this article were compiled by 纯净天空 from source-code and documentation platforms such as GitHub/MSDocs. The snippets are drawn from open-source projects contributed by their respective authors; copyright remains with the original authors, and distribution and use should follow each project's license. Do not republish without permission.

