
Java HttpFSServerWebApp Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.fs.http.server.HttpFSServerWebApp. If you are wondering what HttpFSServerWebApp is for, or how to use it in practice, the curated examples below should help.



The HttpFSServerWebApp class belongs to the org.apache.hadoop.fs.http.server package. Eight code examples are shown below, ordered by popularity.
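Before the examples: most call sites below reach the running server through the static HttpFSServerWebApp.get() accessor, typically to obtain the host:port authority via getAuthority(). A minimal, self-contained sketch of that singleton-accessor pattern (class and field names here are illustrative stand-ins, not Hadoop's actual implementation, which also involves the Server lifecycle and configuration loading):

```java
// Illustrative sketch of the singleton-accessor pattern behind
// HttpFSServerWebApp.get(); names are hypothetical, not Hadoop's code.
public class ExampleServerWebApp {
    private static ExampleServerWebApp SERVER;
    private final String authority; // host:port the server is bound to

    private ExampleServerWebApp(String authority) {
        this.authority = authority;
    }

    // init() publishes the instance; in HttpFS the equivalent happens
    // when the web application context is initialized.
    public static synchronized void init(String authority) {
        SERVER = new ExampleServerWebApp(authority);
    }

    // get() is how services (e.g. the delegation-token manager in the
    // examples below) reach the running server instance.
    public static synchronized ExampleServerWebApp get() {
        if (SERVER == null) {
            throw new IllegalStateException("server not initialized");
        }
        return SERVER;
    }

    public String getAuthority() {
        return authority;
    }

    public static void main(String[] args) {
        ExampleServerWebApp.init("localhost:14000");
        System.out.println(ExampleServerWebApp.get().getAuthority());
    }
}
```

The real examples below follow this shape: a service never constructs the web app itself, it asks the static accessor for the already-initialized instance.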

Example 1: createToken

import org.apache.hadoop.fs.http.server.HttpFSServerWebApp; // import the required class
/**
 * Creates a delegation token.
 *
 * @param ugi UGI creating the token.
 * @param renewer token renewer.
 * @return new delegation token.
 * @throws DelegationTokenManagerException thrown if the token could not be
 * created.
 */
@Override
public Token<DelegationTokenIdentifier> createToken(UserGroupInformation ugi,
                                                    String renewer)
  throws DelegationTokenManagerException {
  renewer = (renewer == null) ? ugi.getShortUserName() : renewer;
  String user = ugi.getUserName();
  Text owner = new Text(user);
  Text realUser = null;
  if (ugi.getRealUser() != null) {
    realUser = new Text(ugi.getRealUser().getUserName());
  }
  DelegationTokenIdentifier tokenIdentifier =
    new DelegationTokenIdentifier(owner, new Text(renewer), realUser);
  Token<DelegationTokenIdentifier> token =
    new Token<DelegationTokenIdentifier>(tokenIdentifier, secretManager);
  try {
    SecurityUtil.setTokenService(token,
                                 HttpFSServerWebApp.get().getAuthority());
  } catch (ServerException ex) {
    throw new DelegationTokenManagerException(
      DelegationTokenManagerException.ERROR.DT04, ex.toString(), ex);
  }
  return token;
}
 
Source: ict-carch/hadoop-plus, DelegationTokenManagerService.java (34 lines)


Example 2: createToken

import org.apache.hadoop.fs.http.server.HttpFSServerWebApp; // import the required class
/**
 * Creates a delegation token.
 *
 * @param ugi
 *     UGI creating the token.
 * @param renewer
 *     token renewer.
 * @return new delegation token.
 * @throws DelegationTokenManagerException
 *     thrown if the token could not be
 *     created.
 */
@Override
public Token<DelegationTokenIdentifier> createToken(UserGroupInformation ugi,
    String renewer) throws DelegationTokenManagerException {
  renewer = (renewer == null) ? ugi.getShortUserName() : renewer;
  String user = ugi.getUserName();
  Text owner = new Text(user);
  Text realUser = null;
  if (ugi.getRealUser() != null) {
    realUser = new Text(ugi.getRealUser().getUserName());
  }
  DelegationTokenIdentifier tokenIdentifier =
      new DelegationTokenIdentifier(owner, new Text(renewer), realUser);
  Token<DelegationTokenIdentifier> token =
      new Token<>(tokenIdentifier, secretManager);
  try {
    SecurityUtil
        .setTokenService(token, HttpFSServerWebApp.get().getAuthority());
  } catch (ServerException ex) {
    throw new DelegationTokenManagerException(
        DelegationTokenManagerException.ERROR.DT04, ex.toString(), ex);
  }
  return token;
}
 
Source: hopshadoop/hops, DelegationTokenManagerService.java (36 lines)


Example 3: init

import org.apache.hadoop.fs.http.server.HttpFSServerWebApp; // import the required class
/**
 * Initializes the service.
 *
 * @throws ServiceException thrown if the service could not be initialized.
 */
@Override
protected void init() throws ServiceException {

  long updateInterval = getServiceConfig().getLong(UPDATE_INTERVAL, DAY);
  long maxLifetime = getServiceConfig().getLong(MAX_LIFETIME, 7 * DAY);
  long renewInterval = getServiceConfig().getLong(RENEW_INTERVAL, DAY);
  tokenKind = (HttpFSServerWebApp.get().isSslEnabled())
              ? SWebHdfsFileSystem.TOKEN_KIND : WebHdfsFileSystem.TOKEN_KIND;
  secretManager = new DelegationTokenSecretManager(tokenKind, updateInterval,
                                                   maxLifetime,
                                                   renewInterval, HOUR);
  try {
    secretManager.startThreads();
  } catch (IOException ex) {
    throw new ServiceException(ServiceException.ERROR.S12,
                               DelegationTokenManager.class.getSimpleName(),
                               ex.toString(), ex);
  }
}
 
Source: Seagate/hadoop-on-lustre2, DelegationTokenManagerService.java (25 lines)


Example 4: createToken

import org.apache.hadoop.fs.http.server.HttpFSServerWebApp; // import the required class
/**
 * Creates a delegation token.
 *
 * @param ugi UGI creating the token.
 * @param renewer token renewer.
 * @return new delegation token.
 * @throws DelegationTokenManagerException thrown if the token could not be
 * created.
 */
@Override
public Token<DelegationTokenIdentifier> createToken(UserGroupInformation ugi,
                                                    String renewer)
  throws DelegationTokenManagerException {
  renewer = (renewer == null) ? ugi.getShortUserName() : renewer;
  String user = ugi.getUserName();
  Text owner = new Text(user);
  Text realUser = null;
  if (ugi.getRealUser() != null) {
    realUser = new Text(ugi.getRealUser().getUserName());
  }
  DelegationTokenIdentifier tokenIdentifier =
    new DelegationTokenIdentifier(tokenKind, owner, new Text(renewer), realUser);
  Token<DelegationTokenIdentifier> token =
    new Token<DelegationTokenIdentifier>(tokenIdentifier, secretManager);
  try {
    SecurityUtil.setTokenService(token,
                                 HttpFSServerWebApp.get().getAuthority());
  } catch (ServerException ex) {
    throw new DelegationTokenManagerException(
      DelegationTokenManagerException.ERROR.DT04, ex.toString(), ex);
  }
  return token;
}
 
Source: Seagate/hadoop-on-lustre2, DelegationTokenManagerService.java (34 lines)


Example 5: service

import org.apache.hadoop.fs.http.server.HttpFSServerWebApp; // import the required class
@Test
@TestDir
public void service() throws Exception {
  String dir = TestDirHelper.getTestDir().getAbsolutePath();
  Configuration conf = new Configuration(false);
  conf.set("httpfs.services", StringUtils.join(",",
    Arrays.asList(InstrumentationService.class.getName(),
        SchedulerService.class.getName(),
        FileSystemAccessService.class.getName(),
        DelegationTokenManagerService.class.getName())));
  Server server = new HttpFSServerWebApp(dir, dir, dir, dir, conf);
  server.init();
  DelegationTokenManager tm = server.get(DelegationTokenManager.class);
  Assert.assertNotNull(tm);
  server.destroy();
}
 
Source: Seagate/hadoop-on-lustre2, TestDelegationTokenManagerService.java (17 lines)


Example 6: createHttpFSServer

import org.apache.hadoop.fs.http.server.HttpFSServerWebApp; // import the required class
private void createHttpFSServer() throws Exception {
  File homeDir = TestDirHelper.getTestDir();
  Assert.assertTrue(new File(homeDir, "conf").mkdir());
  Assert.assertTrue(new File(homeDir, "log").mkdir());
  Assert.assertTrue(new File(homeDir, "temp").mkdir());
  HttpFSServerWebApp.setHomeDirForCurrentThread(homeDir.getAbsolutePath());

  File secretFile = new File(new File(homeDir, "conf"), "secret");
  Writer w = new FileWriter(secretFile);
  w.write("secret");
  w.close();

  //FileSystem being served by HttpFS
  String fsDefaultName = getProxiedFSURI();
  Configuration conf = new Configuration(false);
  conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, fsDefaultName);
  File hdfsSite = new File(new File(homeDir, "conf"), "hdfs-site.xml");
  OutputStream os = new FileOutputStream(hdfsSite);
  conf.writeXml(os);
  os.close();

  //HTTPFS configuration
  conf = new Configuration(false);
  conf.set("httpfs.proxyuser." + HadoopUsersConfTestHelper.getHadoopProxyUser() + ".groups",
           HadoopUsersConfTestHelper.getHadoopProxyUserGroups());
  conf.set("httpfs.proxyuser." + HadoopUsersConfTestHelper.getHadoopProxyUser() + ".hosts",
           HadoopUsersConfTestHelper.getHadoopProxyUserHosts());
  conf.set("httpfs.authentication.signature.secret.file", secretFile.getAbsolutePath());
  File httpfsSite = new File(new File(homeDir, "conf"), "httpfs-site.xml");
  os = new FileOutputStream(httpfsSite);
  conf.writeXml(os);
  os.close();

  ClassLoader cl = Thread.currentThread().getContextClassLoader();
  URL url = cl.getResource("webapp");
  WebAppContext context = new WebAppContext(url.getPath(), "/webhdfs");
  Server server = TestJettyHelper.getJettyServer();
  server.addHandler(context);
  server.start();
}
 
Source: ict-carch/hadoop-plus, BaseTestHttpFSWith.java (41 lines)


Example 7: createHttpFSServer

import org.apache.hadoop.fs.http.server.HttpFSServerWebApp; // import the required class
private void createHttpFSServer() throws Exception {
  File homeDir = TestDirHelper.getTestDir();
  Assert.assertTrue(new File(homeDir, "conf").mkdir());
  Assert.assertTrue(new File(homeDir, "log").mkdir());
  Assert.assertTrue(new File(homeDir, "temp").mkdir());
  HttpFSServerWebApp.setHomeDirForCurrentThread(homeDir.getAbsolutePath());

  File secretFile = new File(new File(homeDir, "conf"), "secret");
  Writer w = new FileWriter(secretFile);
  w.write("secret");
  w.close();

  //FileSystem being served by HttpFS
  String fsDefaultName = getProxiedFSURI();
  Configuration conf = new Configuration(false);
  conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, fsDefaultName);
  conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, true);
  conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_XATTRS_ENABLED_KEY, true);
  File hdfsSite = new File(new File(homeDir, "conf"), "hdfs-site.xml");
  OutputStream os = new FileOutputStream(hdfsSite);
  conf.writeXml(os);
  os.close();

  //HTTPFS configuration
  conf = new Configuration(false);
  conf.set("httpfs.proxyuser." + HadoopUsersConfTestHelper.getHadoopProxyUser() + ".groups",
           HadoopUsersConfTestHelper.getHadoopProxyUserGroups());
  conf.set("httpfs.proxyuser." + HadoopUsersConfTestHelper.getHadoopProxyUser() + ".hosts",
           HadoopUsersConfTestHelper.getHadoopProxyUserHosts());
  conf.set("httpfs.authentication.signature.secret.file", secretFile.getAbsolutePath());
  File httpfsSite = new File(new File(homeDir, "conf"), "httpfs-site.xml");
  os = new FileOutputStream(httpfsSite);
  conf.writeXml(os);
  os.close();

  ClassLoader cl = Thread.currentThread().getContextClassLoader();
  URL url = cl.getResource("webapp");
  WebAppContext context = new WebAppContext(url.getPath(), "/webhdfs");
  Server server = TestJettyHelper.getJettyServer();
  server.addHandler(context);
  server.start();
}
 
Source: aliyun-beta/aliyun-oss-hadoop-fs, BaseTestHttpFSWith.java (43 lines)


Example 8: createHttpFSServer

import org.apache.hadoop.fs.http.server.HttpFSServerWebApp; // import the required class
private void createHttpFSServer() throws Exception {
  File homeDir = TestDirHelper.getTestDir();
  Assert.assertTrue(new File(homeDir, "conf").mkdir());
  Assert.assertTrue(new File(homeDir, "log").mkdir());
  Assert.assertTrue(new File(homeDir, "temp").mkdir());
  HttpFSServerWebApp.setHomeDirForCurrentThread(homeDir.getAbsolutePath());

  File secretFile = new File(new File(homeDir, "conf"), "secret");
  Writer w = new FileWriter(secretFile);
  w.write("secret");
  w.close();

  //FileSystem being served by HttpFS
  String fsDefaultName = getProxiedFSURI();
  Configuration conf = new Configuration(false);
  conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, fsDefaultName);
  File hdfsSite = new File(new File(homeDir, "conf"), "hdfs-site.xml");
  OutputStream os = new FileOutputStream(hdfsSite);
  conf.writeXml(os);
  os.close();

  //HTTPFS configuration
  conf = new Configuration(false);
  conf.set(
      "httpfs.proxyuser." + HadoopUsersConfTestHelper.getHadoopProxyUser() +
          ".groups", HadoopUsersConfTestHelper.getHadoopProxyUserGroups());
  conf.set(
      "httpfs.proxyuser." + HadoopUsersConfTestHelper.getHadoopProxyUser() +
          ".hosts", HadoopUsersConfTestHelper.getHadoopProxyUserHosts());
  conf.set("httpfs.authentication.signature.secret.file",
      secretFile.getAbsolutePath());
  File httpfsSite = new File(new File(homeDir, "conf"), "httpfs-site.xml");
  os = new FileOutputStream(httpfsSite);
  conf.writeXml(os);
  os.close();

  ClassLoader cl = Thread.currentThread().getContextClassLoader();
  URL url = cl.getResource("webapp");
  WebAppContext context = new WebAppContext(url.getPath(), "/webhdfs");
  Server server = TestJettyHelper.getJettyServer();
  server.addHandler(context);
  server.start();
}
 
Source: hopshadoop/hops, BaseTestHttpFSWith.java (44 lines)



Note: the org.apache.hadoop.fs.http.server.HttpFSServerWebApp examples in this article were collected from open-source projects hosted on GitHub and similar platforms. Copyright remains with the original authors; consult each project's license before redistributing or reusing the code.

