Python script.Script Class Code Examples


This article compiles typical usage examples of the Script class from resource_management.libraries.script.script in Python. If you are wondering what the Script class is, how to use it, or want to see it in real code, the curated class examples below should help.



Twenty code examples of the Script class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code samples.

Example 1: get_hadoop_dir

def get_hadoop_dir(target, force_latest_on_upgrade=False):
  """
  Return the hadoop shared directory in the following override order
  1. Use default for 2.1 and lower
  2. If 2.2 and higher, use <stack-root>/current/hadoop-client/{target}
  3. If 2.2 and higher AND for an upgrade, use <stack-root>/<version>/hadoop/{target}.
  However, if the upgrade has not yet invoked <stack-selector-tool>, return the current
  version of the component.
  :target: the target directory
  :force_latest_on_upgrade: if True, then this will return the "current" directory
  without the stack version built into the path, such as <stack-root>/current/hadoop-client
  """
  stack_root = Script.get_stack_root()
  stack_version = Script.get_stack_version()

  if target not in HADOOP_DIR_DEFAULTS:
    raise Fail("Target {0} not defined".format(target))

  hadoop_dir = HADOOP_DIR_DEFAULTS[target]

  formatted_stack_version = format_stack_version(stack_version)
  if formatted_stack_version and check_stack_feature(StackFeature.ROLLING_UPGRADE, formatted_stack_version):
    # home uses a different template
    if target == "home":
      hadoop_dir = HADOOP_HOME_DIR_TEMPLATE.format(stack_root, "current", "hadoop-client")
    else:
      hadoop_dir = HADOOP_DIR_TEMPLATE.format(stack_root, "current", "hadoop-client", target)

    # if we are not forcing "current" for HDP 2.2, then attempt to determine
    # if the exact version needs to be returned in the directory
    if not force_latest_on_upgrade:
      stack_info = _get_upgrade_stack()

      if stack_info is not None:
        stack_version = stack_info[1]

        # determine if <stack-selector-tool> has been run and if not, then use the current
        # hdp version until this component is upgraded
        current_stack_version = get_role_component_current_stack_version()
        if current_stack_version is not None and stack_version != current_stack_version:
          stack_version = current_stack_version

        if target == "home":
          # home uses a different template
          hadoop_dir = HADOOP_HOME_DIR_TEMPLATE.format(stack_root, stack_version, "hadoop")
        else:
          hadoop_dir = HADOOP_DIR_TEMPLATE.format(stack_root, stack_version, "hadoop", target)

  return hadoop_dir
Developer: maduhu | Project: HDP2.5-ambari | Lines: 49 | Source: stack_select.py
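
A quick usage sketch (mine, not from the project; "bin" is assumed to be a key in HADOOP_DIR_DEFAULTS):

hadoop_bin_dir = get_hadoop_dir("bin")  # e.g. <stack-root>/current/hadoop-client/bin
hadoop_home_dir = get_hadoop_dir("home", force_latest_on_upgrade=True)  # version-agnostic "current" path
try:
  get_hadoop_dir("no-such-target")  # unknown targets raise Fail rather than returning a bogus path
except Fail:
  pass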


Example 2: setup_ranger_plugin_keystore

def setup_ranger_plugin_keystore(service_name, audit_db_is_enabled, stack_version, credential_file, xa_audit_db_password,
                                ssl_truststore_password, ssl_keystore_password, component_user, component_group, java_home):

  stack_root = Script.get_stack_root()
  service_name = str(service_name).lower()
  cred_lib_path = format('{stack_root}/{stack_version}/ranger-{service_name}-plugin/install/lib/*')
  cred_setup_prefix = (format('{stack_root}/{stack_version}/ranger-{service_name}-plugin/ranger_credential_helper.py'), '-l', cred_lib_path)

  if service_name == 'nifi':
    cred_lib_path = format('{stack_root}/{stack_version}/{service_name}/ext/ranger/install/lib/*')
    cred_setup_prefix = (format('{stack_root}/{stack_version}/{service_name}/ext/ranger/scripts/ranger_credential_helper.py'), '-l', cred_lib_path)

  if audit_db_is_enabled:
    cred_setup = cred_setup_prefix + ('-f', credential_file, '-k', 'auditDBCred', '-v', PasswordString(xa_audit_db_password), '-c', '1')
    Execute(cred_setup, environment={'JAVA_HOME': java_home}, logoutput=True, sudo=True)

  cred_setup = cred_setup_prefix + ('-f', credential_file, '-k', 'sslKeyStore', '-v', PasswordString(ssl_keystore_password), '-c', '1')
  Execute(cred_setup, environment={'JAVA_HOME': java_home}, logoutput=True, sudo=True)

  cred_setup = cred_setup_prefix + ('-f', credential_file, '-k', 'sslTrustStore', '-v', PasswordString(ssl_truststore_password), '-c', '1')
  Execute(cred_setup, environment={'JAVA_HOME': java_home}, logoutput=True, sudo=True)

  File(credential_file,
    owner = component_user,
    group = component_group,
    mode = 0640
  )
Developer: maduhu | Project: HDP2.5-ambari | Lines: 27 | Source: setup_ranger_plugin_xml.py
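
A call sketch with purely hypothetical values, to show the expected argument shapes:

setup_ranger_plugin_keystore(
  service_name='hdfs',
  audit_db_is_enabled=False,  # skips the auditDBCred entry
  stack_version='2.5.0.0-1234',
  credential_file='/etc/ranger/hdfs/cred.jceks',
  xa_audit_db_password=None,  # unused when the audit DB is disabled
  ssl_truststore_password='changeit',
  ssl_keystore_password='changeit',
  component_user='hdfs',
  component_group='hadoop',
  java_home='/usr/jdk64/jdk1.8.0_77')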


Example 3: get_tarball_paths

def get_tarball_paths(name, use_upgrading_version_during_upgrade=True, custom_source_file=None, custom_dest_file=None):
  """
  For a given tarball name, get the source and destination paths to use.
  :param name: Tarball name
  :param use_upgrading_version_during_upgrade: If True and a stack upgrade is in progress, resolve paths using the version being upgraded to.
  :param custom_source_file: If specified, use this source path instead of the default one from the map.
  :param custom_dest_file: If specified, use this destination path instead of the default one from the map.
  :return: A tuple of (success status, source path, destination path)
  """
  stack_name = Script.get_stack_name()

  if not stack_name:
    Logger.error("Cannot copy {0} tarball to HDFS because stack name could not be determined.".format(str(name)))
    return (False, None, None)

  stack_version = get_current_version(use_upgrading_version_during_upgrade)
  if not stack_version:
    Logger.error("Cannot copy {0} tarball to HDFS because stack version could be be determined.".format(str(name)))
    return (False, None, None)

  stack_root = Script.get_stack_root()
  if not stack_root:
    Logger.error("Cannot copy {0} tarball to HDFS because stack root could be be determined.".format(str(name)))
    return (False, None, None)

  if name is None or name.lower() not in TARBALL_MAP:
    Logger.error("Cannot copy tarball to HDFS because {0} is not supported in stack {1} for this operation.".format(str(name), str(stack_name)))
    return (False, None, None)
  (source_file, dest_file) = TARBALL_MAP[name.lower()]

  if custom_source_file is not None:
    source_file = custom_source_file

  if custom_dest_file is not None:
    dest_file = custom_dest_file

  source_file = source_file.replace(STACK_NAME_PATTERN, stack_name.lower())
  dest_file = dest_file.replace(STACK_NAME_PATTERN, stack_name.lower())

  source_file = source_file.replace(STACK_ROOT_PATTERN, stack_root.lower())
  dest_file = dest_file.replace(STACK_ROOT_PATTERN, stack_root.lower())

  source_file = source_file.replace(STACK_VERSION_PATTERN, stack_version)
  dest_file = dest_file.replace(STACK_VERSION_PATTERN, stack_version)

  return (True, source_file, dest_file)
Developer: maduhu | Project: HDP2.5-ambari | Lines: 46 | Source: copy_tarball.py
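
Callers are expected to check the success flag before using the paths; a hedged sketch ("tez" is assumed to be a key in TARBALL_MAP):

(success, source_file, dest_file) = get_tarball_paths("tez")
if success:
  Logger.info("Will copy {0} to {1}".format(source_file, dest_file))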


Example 4: setup_ranger_plugin_jar_symblink

def setup_ranger_plugin_jar_symblink(stack_version, service_name, component_list):

  stack_root = Script.get_stack_root()
  jar_files = os.listdir(format('{stack_root}/{stack_version}/ranger-{service_name}-plugin/lib'))

  for jar_file in jar_files:
    for component in component_list:
      Execute(('ln', '-sf',
               format('{stack_root}/{stack_version}/ranger-{service_name}-plugin/lib/{jar_file}'),
               format('{stack_root}/current/{component}/lib/{jar_file}')),
              not_if=format('ls {stack_root}/current/{component}/lib/{jar_file}'),
              only_if=format('ls {stack_root}/{stack_version}/ranger-{service_name}-plugin/lib/{jar_file}'),
              sudo=True)
Developer: maduhu | Project: HDP2.5-ambari | Lines: 11 | Source: setup_ranger_plugin_xml.py
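
A call sketch with made-up names; every jar in the plugin lib directory is symlinked into each listed component's lib directory:

setup_ranger_plugin_jar_symblink("2.5.0.0-1234", "storm", ["storm-client", "storm-nimbus"])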


Example 5: hive

def hive(name=None):
  import params

  XmlConfig("hive-site.xml",
            conf_dir = params.hive_conf_dir,
            configurations = params.config['configurations']['hive-site'],
            owner=params.hive_user,
            configuration_attributes=params.config['configuration_attributes']['hive-site']
  )

  if name in ["hiveserver2","metastore"]:
    # Manually overriding service logon user & password set by the installation package
    service_name = params.service_map[name]
    ServiceConfig(service_name,
                  action="change_user",
                  username = params.hive_user,
                  password = Script.get_password(params.hive_user))
    Execute(format("cmd /c hadoop fs -mkdir -p {hive_warehouse_dir}"), logoutput=True, user=params.hadoop_user)

  if name == 'metastore':
    if params.init_metastore_schema:
      check_schema_created_cmd = format('cmd /c "{hive_bin}\\hive.cmd --service schematool -info '
                                        '-dbType {hive_metastore_db_type} '
                                        '-userName {hive_metastore_user_name} '
                                        '-passWord {hive_metastore_user_passwd!p}'
                                        '&set EXITCODE=%ERRORLEVEL%&exit /B %EXITCODE%"', #cmd "feature", propagate the process exit code manually
                                        hive_bin=params.hive_bin,
                                        hive_metastore_db_type=params.hive_metastore_db_type,
                                        hive_metastore_user_name=params.hive_metastore_user_name,
                                        hive_metastore_user_passwd=params.hive_metastore_user_passwd)
      try:
        Execute(check_schema_created_cmd)
      except Fail:
        create_schema_cmd = format('cmd /c {hive_bin}\\hive.cmd --service schematool -initSchema '
                                   '-dbType {hive_metastore_db_type} '
                                   '-userName {hive_metastore_user_name} '
                                   '-passWord {hive_metastore_user_passwd!p}',
                                   hive_bin=params.hive_bin,
                                   hive_metastore_db_type=params.hive_metastore_db_type,
                                   hive_metastore_user_name=params.hive_metastore_user_name,
                                   hive_metastore_user_passwd=params.hive_metastore_user_passwd)
        Execute(create_schema_cmd,
                user = params.hive_user,
                logoutput=True
        )

  if name == "hiveserver2":
    if params.hive_execution_engine == "tez":
      # Init the tez app dir in hadoop
      script_file = __file__.replace('/', os.sep)
      cmd_file = os.path.normpath(os.path.join(os.path.dirname(script_file), "..", "files", "hiveTezSetup.cmd"))

      Execute("cmd /c " + cmd_file, logoutput=True, user=params.hadoop_user)
Developer: OpenPOWER-BigData | Project: HDP-ambari | Lines: 53 | Source: hive.py
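
Typical invocations from a Windows service script might look like this (assuming params is populated as above):

hive(name="metastore")    # changes the service logon user and initializes the schema if configured
hive(name="hiveserver2")  # additionally runs hiveTezSetup.cmd when the execution engine is tez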


Example 6: pre_rolling_restart

  def pre_rolling_restart(self, env):
    import params
    env.set_params(params)

    # this function should not execute if the version can't be determined or
    # is not at least HDP 2.2.0.0
    if Script.is_hdp_stack_less_than("2.2"):
      return

    Logger.info("Executing Accumulo Client Rolling Upgrade pre-restart")
    conf_select.select(params.stack_name, "accumulo", params.version)
    hdp_select.select("accumulo-client", params.version)
Developer: andreysabitov | Project: ambari-mantl | Lines: 12 | Source: accumulo_client.py


Example 7: get_package_dirs

def get_package_dirs():
  """
  Get package dir mappings
  :return:
  """
  stack_root = Script.get_stack_root()
  package_dirs = copy.deepcopy(_PACKAGE_DIRS)
  for package_name, directories in package_dirs.iteritems():
    for dir in directories:
      current_dir = dir['current_dir']
      current_dir = current_dir.replace(STACK_ROOT_PATTERN, stack_root)
      dir['current_dir'] = current_dir
  return package_dirs
Developer: maduhu | Project: HDP2.5-ambari | Lines: 13 | Source: conf_select.py
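
A usage sketch; the output shape is illustrative, since the real keys come from _PACKAGE_DIRS:

package_dirs = get_package_dirs()
for package_name, directories in package_dirs.iteritems():  # Python 2, as in the excerpt
  for directory in directories:
    Logger.info("{0} -> {1}".format(package_name, directory['current_dir']))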


Example 8: storm

def storm(name=None):
  import params
  yaml_config("storm.yaml",
              conf_dir=params.conf_dir,
              configurations=params.config['configurations']['storm-site'],
              owner=params.storm_user
  )

  if name in params.service_map:
    service_name = params.service_map[name]
    ServiceConfig(service_name,
                  action="change_user",
                  username = params.storm_user,
                  password = Script.get_password(params.storm_user))
Developer: zouzhberk | Project: ambaridemo | Lines: 14 | Source: storm.py


Example 9: get_hdfs_binary

def get_hdfs_binary(distro_component_name):
  """
  Get the hdfs binary to use depending on the stack and version.
  :param distro_component_name: e.g., hadoop-hdfs-namenode, hadoop-hdfs-datanode
  :return: The hdfs binary to use
  """
  import params
  hdfs_binary = "hdfs"
  if params.stack_name == "HDP":
    # This was used in HDP 2.1 and earlier
    hdfs_binary = "hdfs"
    if Script.is_hdp_stack_greater_or_equal("2.2"):
      hdfs_binary = "/usr/hdp/current/{0}/bin/hdfs".format(distro_component_name)

  return hdfs_binary
Developer: OpenPOWER-BigData | Project: HDP-ambari | Lines: 15 | Source: utils.py
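
A usage sketch (the component name and the follow-up command are my assumptions):

hdfs_binary = get_hdfs_binary("hadoop-hdfs-namenode")
# "/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs" on HDP >= 2.2, plain "hdfs" otherwise
Execute(format("{hdfs_binary} dfsadmin -report"), user=params.hdfs_user)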


Example 10: _install_lzo_support_if_needed

  def _install_lzo_support_if_needed(self, params):
    hadoop_classpath_prefix = self._expand_hadoop_classpath_prefix(params.hadoop_classpath_prefix_template, params.config['configurations']['tez-site'])

    hadoop_lzo_dest_path = extract_path_component(hadoop_classpath_prefix, "hadoop-lzo-")
    if hadoop_lzo_dest_path:
      hadoop_lzo_file = os.path.split(hadoop_lzo_dest_path)[1]

      config = Script.get_config()
      file_url = urlparse.urljoin(config['hostLevelParams']['jdk_location'], hadoop_lzo_file)
      hadoop_lzo_dl_path = os.path.join(config["hostLevelParams"]["agentCacheDir"], hadoop_lzo_file)
      download_file(file_url, hadoop_lzo_dl_path)
      #This is for protection against configuration changes. It will infect every new destination with the lzo jar,
      # but since the classpath points to the jar directly we're getting away with it.
      if not os.path.exists(hadoop_lzo_dest_path):
        copy_file(hadoop_lzo_dl_path, hadoop_lzo_dest_path)
Developer: OpenPOWER-BigData | Project: HDP-ambari | Lines: 15 | Source: tez_client.py


Example 11: get_hadoop_dir

def get_hadoop_dir(target, force_latest_on_upgrade=False):
  """
  Return the hadoop shared directory in the following override order
  1. Use default for 2.1 and lower
  2. If 2.2 and higher, use /usr/hdp/current/hadoop-client/{target}
  3. If 2.2 and higher AND for an upgrade, use /usr/hdp/<version>/hadoop/{target}.
  However, if the upgrade has not yet invoked hdp-select, return the current
  version of the component.
  :target: the target directory
  :force_latest_on_upgrade: if True, then this will return the "current" directory
  without the HDP version built into the path, such as /usr/hdp/current/hadoop-client
  """

  if target not in HADOOP_DIR_DEFAULTS:
    raise Fail("Target {0} not defined".format(target))

  hadoop_dir = HADOOP_DIR_DEFAULTS[target]

  if Script.is_hdp_stack_greater_or_equal("2.2"):
    # home uses a different template
    if target == "home":
      hadoop_dir = HADOOP_HOME_DIR_TEMPLATE.format("current", "hadoop-client")
    else:
      hadoop_dir = HADOOP_DIR_TEMPLATE.format("current", "hadoop-client", target)

    # if we are not forcing "current" for HDP 2.2, then attempt to determine
    # if the exact version needs to be returned in the directory
    if not force_latest_on_upgrade:
      stack_info = _get_upgrade_stack()

      if stack_info is not None:
        stack_version = stack_info[1]

        # determine if hdp-select has been run and if not, then use the current
        # hdp version until this component is upgraded
        current_hdp_version = get_role_component_current_hdp_version()
        if current_hdp_version is not None and stack_version != current_hdp_version:
          stack_version = current_hdp_version

        if target == "home":
          # home uses a different template
          hadoop_dir = HADOOP_HOME_DIR_TEMPLATE.format(stack_version, "hadoop")
        else:
          hadoop_dir = HADOOP_DIR_TEMPLATE.format(stack_version, "hadoop", target)

  return hadoop_dir
Developer: OpenPOWER-BigData | Project: HDP-ambari | Lines: 46 | Source: hdp_select.py


Example 12: oozie

def oozie(is_server=False):
  import params

  from status_params import oozie_server_win_service_name

  XmlConfig("oozie-site.xml",
            conf_dir=params.oozie_conf_dir,
            configurations=params.config['configurations']['oozie-site'],
            owner=params.oozie_user,
            mode='f',
            configuration_attributes=params.config['configuration_attributes']['oozie-site']
  )

  File(os.path.join(params.oozie_conf_dir, "oozie-env.cmd"),
       owner=params.oozie_user,
       content=InlineTemplate(params.oozie_env_cmd_template)
  )

  Directory(params.oozie_tmp_dir,
            owner=params.oozie_user,
            recursive = True,
  )

  if is_server:
    # Manually overriding service logon user & password set by the installation package
    ServiceConfig(oozie_server_win_service_name,
                  action="change_user",
                  username = params.oozie_user,
                  password = Script.get_password(params.oozie_user))

  download_file(os.path.join(params.config['hostLevelParams']['jdk_location'], "sqljdbc4.jar"),
                      os.path.join(params.oozie_root, "extra_libs", "sqljdbc4.jar")
  )
  webapps_sqljdbc_path = os.path.join(params.oozie_home, "oozie-server", "webapps", "oozie", "WEB-INF", "lib", "sqljdbc4.jar")
  if os.path.isfile(webapps_sqljdbc_path):
    download_file(os.path.join(params.config['hostLevelParams']['jdk_location'], "sqljdbc4.jar"),
                        webapps_sqljdbc_path
    )
  download_file(os.path.join(params.config['hostLevelParams']['jdk_location'], "sqljdbc4.jar"),
                      os.path.join(params.oozie_home, "share", "lib", "oozie", "sqljdbc4.jar")
  )
  download_file(os.path.join(params.config['hostLevelParams']['jdk_location'], "sqljdbc4.jar"),
                      os.path.join(params.oozie_home, "temp", "WEB-INF", "lib", "sqljdbc4.jar")
  )
Developer: OpenPOWER-BigData | Project: HDP-ambari | Lines: 44 | Source: oozie.py


Example 13: _get_single_version_from_hdp_select

def _get_single_version_from_hdp_select():
  """
  Call "hdp-select versions" and return the version string if only one version is available.
  :return: Returns a version string if successful, and None otherwise.
  """
  # Ubuntu returns: "stdin: is not a tty", as subprocess output, so must use a temporary file to store the output.
  tmpfile = tempfile.NamedTemporaryFile()
  tmp_dir = Script.get_tmp_dir()
  tmp_file = os.path.join(tmp_dir, "copy_tarball_out.txt")
  hdp_version = None

  out = None
  get_hdp_versions_cmd = "/usr/bin/hdp-select versions > {0}".format(tmp_file)
  try:
    code, stdoutdata = shell.call(get_hdp_versions_cmd, logoutput=True)
    with open(tmp_file, 'r+') as file:
      out = file.read()
  except Exception, e:
    Logger.logger.exception("Could not parse output of {0}. Error: {1}".format(str(tmp_file), str(e)))
Developer: OpenPOWER-BigData | Project: HDP-ambari | Lines: 19 | Source: copy_tarball.py
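
The excerpt ends at the except clause; per the docstring, the full source goes on to parse out and return the version string only when exactly one version is installed. A hedged caller-side sketch:

hdp_version = _get_single_version_from_hdp_select()
if hdp_version is None:
  Logger.error("Could not determine a unique version from hdp-select")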


Example 14: get_stack_version_before_install

def get_stack_version_before_install(component_name):
  """
  Works similarly to '<stack-selector-tool> status <component>',
  but also works for packages that are not yet installed.

  Note: won't work when doing an initial install.
  """
  stack_root = Script.get_stack_root()
  component_dir = HADOOP_HOME_DIR_TEMPLATE.format(stack_root, "current", component_name)
  stack_selector_name = stack_tools.get_stack_tool_name(stack_tools.STACK_SELECTOR_NAME)
  if os.path.islink(component_dir):
    stack_version = os.path.basename(os.path.dirname(os.readlink(component_dir)))
    match = re.match('[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+-[0-9]+', stack_version)
    if match is None:
      Logger.info('Failed to get extracted version with {0} in method get_stack_version_before_install'.format(stack_selector_name))
      return None # lazy fail
    return stack_version
  else:
    return None
Developer: maduhu | Project: HDP2.5-ambari | Lines: 19 | Source: stack_select.py
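
Usage sketch ("hadoop-client" is an assumed component name):

stack_version = get_stack_version_before_install("hadoop-client")
# e.g. "2.5.0.0-1234" if <stack-root>/current/hadoop-client already links into a versioned dir, else None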


Example 15: select_all

def select_all(version_to_select):
  """
  Executes <stack-selector-tool> on every component for the specified version. If the value passed in is a
  stack version such as "2.3", then this will find the latest installed version which
  could be "2.3.0.0-9999". If a version is specified instead, such as 2.3.0.0-1234, it will use
  that exact version.
  :param version_to_select: the version to <stack-selector-tool> on, such as "2.3" or "2.3.0.0-1234"
  """
  stack_root = Script.get_stack_root()
  (stack_selector_name, stack_selector_path, stack_selector_package) = stack_tools.get_stack_tool(stack_tools.STACK_SELECTOR_NAME)
  # it's an error, but it shouldn't really stop anything from working
  if version_to_select is None:
    Logger.error(format("Unable to execute {stack_selector_name} after installing because there was no version specified"))
    return

  Logger.info("Executing {0} set all on {1}".format(stack_selector_name, version_to_select))

  command = format('{sudo} {stack_selector_path} set all `ambari-python-wrap {stack_selector_path} versions | grep ^{version_to_select} | tail -1`')
  only_if_command = format('ls -d {stack_root}/{version_to_select}*')
  Execute(command, only_if = only_if_command)
Developer: maduhu | Project: HDP2.5-ambari | Lines: 20 | Source: stack_select.py
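
Both call styles described in the docstring, with made-up versions:

select_all("2.5")           # resolves to the latest installed 2.5 build, e.g. 2.5.0.0-9999
select_all("2.5.0.0-1234")  # pins that exact build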


Example 16: get_lzo_packages

def get_lzo_packages(stack_version_unformatted):
  lzo_packages = []
  script_instance = Script.get_instance()
  if OSCheck.is_suse_family() and int(OSCheck.get_os_major_version()) >= 12:
    lzo_packages += ["liblzo2-2", "hadoop-lzo-native"]
  elif OSCheck.is_redhat_family() or OSCheck.is_suse_family():
    lzo_packages += ["lzo", "hadoop-lzo-native"]
  elif OSCheck.is_ubuntu_family():
    lzo_packages += ["liblzo2-2"]

  if stack_version_unformatted and check_stack_feature(StackFeature.ROLLING_UPGRADE, stack_version_unformatted):
    if OSCheck.is_ubuntu_family():
      lzo_packages += [script_instance.format_package_name("hadooplzo-${stack_version}") ,
                       script_instance.format_package_name("hadooplzo-${stack_version}-native")]
    else:
      lzo_packages += [script_instance.format_package_name("hadooplzo_${stack_version}"),
                       script_instance.format_package_name("hadooplzo_${stack_version}-native")]
  else:
    lzo_packages += ["hadoop-lzo"]

  return lzo_packages
Developer: maduhu | Project: HDP2.5-ambari | Lines: 21 | Source: get_lzo_packages.py
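
A sketch of how a caller might install the returned packages; Package is the standard resource_management resource, and params.stack_version_unformatted is an assumed variable:

lzo_packages = get_lzo_packages(params.stack_version_unformatted)
for pkg in lzo_packages:
  Package(pkg)  # e.g. "lzo", "hadoop-lzo-native", plus the stack-versioned hadooplzo packages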


Example 17: pre_rolling_restart

  def pre_rolling_restart(self, env):
    import params
    env.set_params(params)

    # this function should not execute if the version can't be determined or
    # is not at least HDP 2.2.0.0
    if Script.is_hdp_stack_less_than("2.2"):
      return

    if self.component not in self.COMPONENT_TO_HDP_SELECT_MAPPING:
      Logger.info("Unable to execute an upgrade for unknown component {0}".format(self.component))
      raise Fail("Unable to execute an upgrade for unknown component {0}".format(self.component))

    hdp_component = self.COMPONENT_TO_HDP_SELECT_MAPPING[self.component]

    Logger.info("Executing Accumulo Rolling Upgrade pre-restart for {0}".format(hdp_component))
    conf_select.select(params.stack_name, "accumulo", params.version)
    hdp_select.select(hdp_component, params.version)

    # some accumulo components depend on the client, so update that too
    hdp_select.select("accumulo-client", params.version)
Developer: andreysabitov | Project: ambari-mantl | Lines: 21 | Source: accumulo_script.py


Example 18: get_hadoop_dir_for_stack_version

def get_hadoop_dir_for_stack_version(target, stack_version):
  """
  Return the hadoop shared directory for the provided stack version. This is necessary
  when folder paths of the downgrade-source stack version are needed after hdp-select has run.
  :target: the target directory
  :stack_version: stack version to get hadoop dir for
  """

  if target not in HADOOP_DIR_DEFAULTS:
    raise Fail("Target {0} not defined".format(target))

  hadoop_dir = HADOOP_DIR_DEFAULTS[target]

  formatted_stack_version = format_hdp_stack_version(stack_version)
  if Script.is_hdp_stack_greater_or_equal_to(formatted_stack_version, "2.2"):
    # home uses a different template
    if target == "home":
      hadoop_dir = HADOOP_HOME_DIR_TEMPLATE.format(stack_version, "hadoop")
    else:
      hadoop_dir = HADOOP_DIR_TEMPLATE.format(stack_version, "hadoop", target)

  return hadoop_dir
Developer: OpenPOWER-BigData | Project: HDP-ambari | Lines: 22 | Source: hdp_select.py
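
Usage sketch during a downgrade ("libexec" is assumed to be a valid target), which would likely resolve to /usr/hdp/2.2.0.0-2041/hadoop/libexec:

hadoop_libexec_dir = get_hadoop_dir_for_stack_version("libexec", "2.2.0.0-2041")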


Example 19: knox

def knox():
  import params

  XmlConfig("gateway-site.xml",
            conf_dir=params.knox_conf_dir,
            configurations=params.config['configurations']['gateway-site'],
            configuration_attributes=params.config['configuration_attributes']['gateway-site'],
            owner=params.knox_user
  )

  # Manually overriding service logon user & password set by the installation package
  ServiceConfig(params.knox_gateway_win_service_name,
                action="change_user",
                username = params.knox_user,
                password = Script.get_password(params.knox_user))

  File(os.path.join(params.knox_conf_dir, "gateway-log4j.properties"),
       owner=params.knox_user,
       content=params.gateway_log4j
  )

  File(os.path.join(params.knox_conf_dir, "topologies", "default.xml"),
       group=params.knox_group,
       owner=params.knox_user,
       content=InlineTemplate(params.topology_template)
  )

  if params.security_enabled:
    TemplateConfig( os.path.join(params.knox_conf_dir, "krb5JAASLogin.conf"),
        owner = params.knox_user,
        template_tag = None
    )

  if not os.path.isfile(params.knox_master_secret_path):
    cmd = format('cmd /C {knox_client_bin} create-master --master {knox_master_secret!p}')
    Execute(cmd)
    cmd = format('cmd /C {knox_client_bin} create-cert --hostname {knox_host_name_in_cluster}')
    Execute(cmd)
Developer: andreysabitov | Project: ambari-mantl | Lines: 38 | Source: knox.py


Example 20: get_hadoop_dir_for_stack_version

def get_hadoop_dir_for_stack_version(target, stack_version):
  """
  Return the hadoop shared directory for the provided stack version. This is necessary
  when folder paths of the downgrade-source stack version are needed after <stack-selector-tool> has run.
  :target: the target directory
  :stack_version: stack version to get hadoop dir for
  """

  stack_root = Script.get_stack_root()
  if target not in HADOOP_DIR_DEFAULTS:
    raise Fail("Target {0} not defined".format(target))

  hadoop_dir = HADOOP_DIR_DEFAULTS[target]

  formatted_stack_version = format_stack_version(stack_version)
  if formatted_stack_version and check_stack_feature(StackFeature.ROLLING_UPGRADE, formatted_stack_version):
    # home uses a different template
    if target == "home":
      hadoop_dir = HADOOP_HOME_DIR_TEMPLATE.format(stack_root, stack_version, "hadoop")
    else:
      hadoop_dir = HADOOP_DIR_TEMPLATE.format(stack_root, stack_version, "hadoop", target)

  return hadoop_dir
Developer: maduhu | Project: HDP2.5-ambari | Lines: 23 | Source: stack_select.py



Note: The resource_management.libraries.script.script.Script class examples in this article were compiled by 纯净天空 from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects; copyright remains with the original authors, and any use or redistribution must follow the corresponding project's License. Please do not repost without permission.

