
Java HttpServer2 Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.http.HttpServer2. If you have been wondering what HttpServer2 is for, or how to use it, the curated class examples below should help.



The HttpServer2 class belongs to the org.apache.hadoop.http package. Twelve code examples for the class are shown below, sorted by popularity.
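
As a primer, here is a minimal, hedged sketch of the basic HttpServer2 lifecycle — build, register a servlet, start, read the bound port, stop — distilled from Examples 1, 3 and 4 below. The servlet HelloServlet and the server name "demo" are illustrative assumptions, not part of Hadoop.

import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.URI;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hadoop.http.HttpServer2;

public class HttpServer2Demo {

  // Illustrative servlet, not part of Hadoop.
  public static class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException {
      PrintWriter out = resp.getWriter();
      out.println("hello");
    }
  }

  public static void main(String[] args) throws Exception {
    // HttpServer2 expects a webapp directory matching its name; the tests
    // below create build/webapps/test the same way.
    new File(System.getProperty("build.webapps", "build/webapps")
        + "/demo").mkdirs();

    // Port 0 plus setFindPort(true) makes the server bind to any free port.
    HttpServer2 server = new HttpServer2.Builder().setName("demo")
        .addEndpoint(URI.create("http://localhost:0"))
        .setFindPort(true).build();
    server.addServlet("hello", "/hello", HelloServlet.class);
    server.start();

    int port = server.getConnectorAddress(0).getPort();
    System.out.println("Listening on http://localhost:" + port + "/hello");
    server.stop();
  }
}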

Example 1: testNotificationOnLastRetryNormalShutdown

import org.apache.hadoop.http.HttpServer2; // import the required package/class
@Test
public void testNotificationOnLastRetryNormalShutdown() throws Exception {
  HttpServer2 server = startHttpServer();
  // Act like it is the second attempt. Default max attempts is 2
  MRApp app = spy(new MRAppWithCustomContainerAllocator(
      2, 2, true, this.getClass().getName(), true, 2, true));
  doNothing().when(app).sysexit();
  JobConf conf = new JobConf();
  conf.set(JobContext.MR_JOB_END_NOTIFICATION_URL,
      JobEndServlet.baseUrl + "jobend?jobid=$jobId&status=$jobStatus");
  JobImpl job = (JobImpl)app.submit(conf);
  app.waitForInternalState(job, JobStateInternal.SUCCEEDED);
  // Unregistration succeeds: successfullyUnregistered is set
  app.shutDownJob();
  Assert.assertTrue(app.isLastAMRetry());
  Assert.assertEquals(1, JobEndServlet.calledTimes);
  Assert.assertEquals("jobid=" + job.getID() + "&status=SUCCEEDED",
      JobEndServlet.requestUri.getQuery());
  Assert.assertEquals(JobState.SUCCEEDED.toString(),
      JobEndServlet.foundJobState);
  server.stop();
}
 
Developer: naver | Project: hadoop | Lines: 23 | Source: TestJobEndNotifier.java


Example 2: testAbsentNotificationOnNotLastRetryUnregistrationFailure

import org.apache.hadoop.http.HttpServer2; // import the required package/class
@Test
public void testAbsentNotificationOnNotLastRetryUnregistrationFailure()
    throws Exception {
  HttpServer2 server = startHttpServer();
  MRApp app = spy(new MRAppWithCustomContainerAllocator(2, 2, false,
      this.getClass().getName(), true, 1, false));
  doNothing().when(app).sysexit();
  JobConf conf = new JobConf();
  conf.set(JobContext.MR_JOB_END_NOTIFICATION_URL,
      JobEndServlet.baseUrl + "jobend?jobid=$jobId&status=$jobStatus");
  JobImpl job = (JobImpl)app.submit(conf);
  app.waitForState(job, JobState.RUNNING);
  app.getContext().getEventHandler()
    .handle(new JobEvent(app.getJobId(), JobEventType.JOB_AM_REBOOT));
  app.waitForInternalState(job, JobStateInternal.REBOOT);
  // Now shutdown.
  // Unregistration fails: isLastAMRetry is recalculated; this is not
  // the last retry.
  app.shutDownJob();
  // Not the last AM attempt, so the user should see that the job is
  // still running.
  app.waitForState(job, JobState.RUNNING);
  Assert.assertFalse(app.isLastAMRetry());
  Assert.assertEquals(0, JobEndServlet.calledTimes);
  Assert.assertNull(JobEndServlet.requestUri);
  Assert.assertNull(JobEndServlet.foundJobState);
  server.stop();
}
 
Developer: naver | Project: hadoop | Lines: 27 | Source: TestJobEndNotifier.java


Example 3: startHttpServer

import org.apache.hadoop.http.HttpServer2; // import the required package/class
private static HttpServer2 startHttpServer() throws Exception {
  new File(System.getProperty(
      "build.webapps", "build/webapps") + "/test").mkdirs();
  HttpServer2 server = new HttpServer2.Builder().setName("test")
      .addEndpoint(URI.create("http://localhost:0"))
      .setFindPort(true).build();
  server.addServlet("jobend", "/jobend", JobEndServlet.class);
  server.start();

  JobEndServlet.calledTimes = 0;
  JobEndServlet.requestUri = null;
  JobEndServlet.baseUrl = "http://localhost:"
      + server.getConnectorAddress(0).getPort() + "/";
  JobEndServlet.foundJobState = null;
  return server;
}
 
Developer: naver | Project: hadoop | Lines: 17 | Source: TestJobEndNotifier.java
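
As a hedged usage sketch (not part of the original test class): once startHttpServer() returns, the registered servlet is reachable over plain HTTP at the dynamically chosen port recorded in JobEndServlet.baseUrl. The query string mirrors the notification URL pattern from Example 1, with a placeholder job id; java.net.URL and java.net.HttpURLConnection are assumed to be imported.

HttpServer2 server = startHttpServer();
URL url = new URL(JobEndServlet.baseUrl
    + "jobend?jobid=job_0001&status=SUCCEEDED");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
// A 200 response means JobEndServlet handled the notification request.
System.out.println("HTTP status: " + conn.getResponseCode());
conn.disconnect();
server.stop();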


Example 4: setUp

import org.apache.hadoop.http.HttpServer2; // import the required package/class
public void setUp() throws Exception {
  new File(System.getProperty("build.webapps", "build/webapps") + "/test"
      ).mkdirs();
  server = new HttpServer2.Builder().setName("test")
      .addEndpoint(URI.create("http://localhost:0"))
      .setFindPort(true).build();
  server.addServlet("delay", "/delay", DelayServlet.class);
  server.addServlet("jobend", "/jobend", JobEndServlet.class);
  server.addServlet("fail", "/fail", FailServlet.class);
  server.start();
  int port = server.getConnectorAddress(0).getPort();
  baseUrl = new URL("http://localhost:" + port + "/");

  JobEndServlet.calledTimes = 0;
  JobEndServlet.requestUri = null;
  DelayServlet.calledTimes = 0;
  FailServlet.calledTimes = 0;
}
 
Developer: naver | Project: hadoop | Lines: 19 | Source: TestJobEndNotifier.java


Example 5: start

import org.apache.hadoop.http.HttpServer2; // import the required package/class
void start() throws IOException {
  final InetSocketAddress httpAddr = getHttpAddress(conf);

  final String httpsAddrString = conf.get(
      NfsConfigKeys.NFS_HTTPS_ADDRESS_KEY,
      NfsConfigKeys.NFS_HTTPS_ADDRESS_DEFAULT);
  InetSocketAddress httpsAddr = NetUtils.createSocketAddr(httpsAddrString);

  HttpServer2.Builder builder = DFSUtil.httpServerTemplateForNNAndJN(conf,
      httpAddr, httpsAddr, "nfs3",
      NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY,
      NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY);

  this.httpServer = builder.build();
  this.httpServer.start();
  
  HttpConfig.Policy policy = DFSUtil.getHttpPolicy(conf);
  int connIdx = 0;
  if (policy.isHttpEnabled()) {
    infoPort = httpServer.getConnectorAddress(connIdx++).getPort();
  }

  if (policy.isHttpsEnabled()) {
    infoSecurePort = httpServer.getConnectorAddress(connIdx).getPort();
  }
}
 
Developer: naver | Project: hadoop | Lines: 27 | Source: Nfs3HttpServer.java


Example 6: initWebHdfs

import org.apache.hadoop.http.HttpServer2; // import the required package/class
private void initWebHdfs(Configuration conf) throws IOException {
  if (WebHdfsFileSystem.isEnabled(conf, HttpServer2.LOG)) {
    // set user pattern based on configuration file
    UserParam.setUserPattern(conf.get(
        DFSConfigKeys.DFS_WEBHDFS_USER_PATTERN_KEY,
        DFSConfigKeys.DFS_WEBHDFS_USER_PATTERN_DEFAULT));

    // add authentication filter for webhdfs
    final String className = conf.get(
        DFSConfigKeys.DFS_WEBHDFS_AUTHENTICATION_FILTER_KEY,
        DFSConfigKeys.DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT);
    final String name = className;

    final String pathSpec = WebHdfsFileSystem.PATH_PREFIX + "/*";
    Map<String, String> params = getAuthFilterParams(conf);
    HttpServer2.defineFilter(httpServer.getWebAppContext(), name, className,
        params, new String[] { pathSpec });
    HttpServer2.LOG.info("Added filter '" + name + "' (class=" + className
        + ")");

    // add webhdfs packages
    httpServer.addJerseyResourcePackage(NamenodeWebHdfsMethods.class
        .getPackage().getName() + ";" + Param.class.getPackage().getName(),
        pathSpec);
  }
}
 
Developer: naver | Project: hadoop | Lines: 27 | Source: NameNodeHttpServer.java


Example 7: setupServlets

import org.apache.hadoop.http.HttpServer2; // import the required package/class
private static void setupServlets(HttpServer2 httpServer, Configuration conf) {
  httpServer.addInternalServlet("startupProgress",
      StartupProgressServlet.PATH_SPEC, StartupProgressServlet.class);
  httpServer.addInternalServlet("getDelegationToken",
      GetDelegationTokenServlet.PATH_SPEC, 
      GetDelegationTokenServlet.class, true);
  httpServer.addInternalServlet("renewDelegationToken", 
      RenewDelegationTokenServlet.PATH_SPEC, 
      RenewDelegationTokenServlet.class, true);
  httpServer.addInternalServlet("cancelDelegationToken", 
      CancelDelegationTokenServlet.PATH_SPEC, 
      CancelDelegationTokenServlet.class, true);
  httpServer.addInternalServlet("fsck", "/fsck", FsckServlet.class,
      true);
  httpServer.addInternalServlet("imagetransfer", ImageServlet.PATH_SPEC,
      ImageServlet.class, true);
  httpServer.addInternalServlet("listPaths", "/listPaths/*",
      ListPathsServlet.class, false);
  httpServer.addInternalServlet("data", "/data/*",
      FileDataServlet.class, false);
  httpServer.addInternalServlet("checksum", "/fileChecksum/*",
      FileChecksumServlets.RedirectServlet.class, false);
  httpServer.addInternalServlet("contentSummary", "/contentSummary/*",
      ContentSummaryServlet.class, false);
}
 
Developer: naver | Project: hadoop | Lines: 26 | Source: NameNodeHttpServer.java


Example 8: start

import org.apache.hadoop.http.HttpServer2; // import the required package/class
void start() throws IOException {
  final InetSocketAddress httpAddr = getAddress(conf);

  final String httpsAddrString = conf.get(
      DFSConfigKeys.DFS_JOURNALNODE_HTTPS_ADDRESS_KEY,
      DFSConfigKeys.DFS_JOURNALNODE_HTTPS_ADDRESS_DEFAULT);
  InetSocketAddress httpsAddr = NetUtils.createSocketAddr(httpsAddrString);

  HttpServer2.Builder builder = DFSUtil.httpServerTemplateForNNAndJN(conf,
      httpAddr, httpsAddr, "journal",
      DFSConfigKeys.DFS_JOURNALNODE_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY,
      DFSConfigKeys.DFS_JOURNALNODE_KEYTAB_FILE_KEY);

  httpServer = builder.build();
  httpServer.setAttribute(JN_ATTRIBUTE_KEY, localJournalNode);
  httpServer.setAttribute(JspHelper.CURRENT_CONF, conf);
  httpServer.addInternalServlet("getJournal", "/getJournal",
      GetJournalEditServlet.class, true);
  httpServer.start();
}
 
Developer: naver | Project: hadoop | Lines: 21 | Source: JournalNodeHttpServer.java


Example 9: testGetImageTimeout

import org.apache.hadoop.http.HttpServer2; // import the required package/class
/**
 * Test to verify the read timeout
 */
@Test(timeout = 5000)
public void testGetImageTimeout() throws Exception {
  HttpServer2 testServer = HttpServerFunctionalTest.createServer("hdfs");
  try {
    testServer.addServlet("ImageTransfer", ImageServlet.PATH_SPEC,
        TestImageTransferServlet.class);
    testServer.start();
    URL serverURL = HttpServerFunctionalTest.getServerURL(testServer);
    TransferFsImage.timeout = 2000;
    try {
      TransferFsImage.getFileClient(serverURL, "txid=1", null,
          null, false);
      fail("TransferImage Should fail with timeout");
    } catch (SocketTimeoutException e) {
      assertEquals("Read should timeout", "Read timed out", e.getMessage());
    }
  } finally {
    if (testServer != null) {
      testServer.stop();
    }
  }
}
 
Developer: naver | Project: hadoop | Lines: 26 | Source: TestTransferFsImage.java


Example 10: getFilterConfigMap

import org.apache.hadoop.http.HttpServer2; // import the required package/class
public static Map<String, String> getFilterConfigMap(Configuration conf,
    String prefix) {
  Map<String, String> filterConfig = new HashMap<String, String>();

  //setting the cookie path to root '/' so it is used for all resources.
  filterConfig.put(AuthenticationFilter.COOKIE_PATH, "/");

  for (Map.Entry<String, String> entry : conf) {
    String name = entry.getKey();
    if (name.startsWith(prefix)) {
      String value = conf.get(name);
      name = name.substring(prefix.length());
      filterConfig.put(name, value);
    }
  }

  //Resolve _HOST into bind address
  String bindAddress = conf.get(HttpServer2.BIND_ADDRESS);
  String principal = filterConfig.get(KerberosAuthenticationHandler.PRINCIPAL);
  if (principal != null) {
    try {
      principal = SecurityUtil.getServerPrincipal(principal, bindAddress);
    } catch (IOException ex) {
      throw new RuntimeException(
          "Could not resolve Kerberos principal name: " + ex.toString(), ex);
    }
    filterConfig.put(KerberosAuthenticationHandler.PRINCIPAL, principal);
  }
  return filterConfig;
}
 
Developer: naver | Project: hadoop | Lines: 31 | Source: AuthenticationFilterInitializer.java
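
A hedged usage sketch of the method above: it copies every key that starts with the given prefix into the returned map with the prefix stripped, and always adds the root cookie path. The prefix "hadoop.http.authentication." matches Hadoop's convention for this initializer, but treat the exact key names below as illustrative assumptions.

Configuration conf = new Configuration();
conf.set("hadoop.http.authentication.type", "simple");
conf.set("hadoop.http.authentication.token.validity", "36000");

// Keys lose the prefix: "type" -> "simple", "token.validity" -> "36000",
// plus the cookie path "/" that the method always adds.
Map<String, String> filterConfig =
    AuthenticationFilterInitializer.getFilterConfigMap(
        conf, "hadoop.http.authentication.");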


Example 11: loadSslConfiguration

import org.apache.hadoop.http.HttpServer2; // import the required package/class
/**
 * Load the SSL keystore / truststore into the HttpServer builder.
 * @param builder the HttpServer2.Builder to populate with ssl config
 * @param sslConf the Configuration instance to use during loading of SSL conf
 */
public static HttpServer2.Builder loadSslConfiguration(
    HttpServer2.Builder builder, Configuration sslConf) {
  if (sslConf == null) {
    sslConf = new Configuration(false);
  }
  boolean needsClientAuth = YarnConfiguration.YARN_SSL_CLIENT_HTTPS_NEED_AUTH_DEFAULT;
  sslConf.addResource(YarnConfiguration.YARN_SSL_SERVER_RESOURCE_DEFAULT);

  return builder
      .needsClientAuth(needsClientAuth)
      .keyPassword(getPassword(sslConf, WEB_APP_KEY_PASSWORD_KEY))
      .keyStore(sslConf.get("ssl.server.keystore.location"),
          getPassword(sslConf, WEB_APP_KEYSTORE_PASSWORD_KEY),
          sslConf.get("ssl.server.keystore.type", "jks"))
      .trustStore(sslConf.get("ssl.server.truststore.location"),
          getPassword(sslConf, WEB_APP_TRUSTSTORE_PASSWORD_KEY),
          sslConf.get("ssl.server.truststore.type", "jks"));
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 24 | Source: WebAppUtils.java
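
A hedged caller sketch for the helper above: it is designed to be applied to a builder before build(), layering keystore and truststore settings from the YARN ssl-server resource onto an HTTPS endpoint. The server name and endpoint URI are illustrative.

Configuration sslConf = new Configuration(false);
HttpServer2.Builder builder = new HttpServer2.Builder().setName("webapp")
    .addEndpoint(URI.create("https://localhost:0"))
    .setFindPort(true);
// Fills in needsClientAuth, keyStore, keyPassword and trustStore.
WebAppUtils.loadSslConfiguration(builder, sslConf);
HttpServer2 server = builder.build();
server.start();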


Example 12: initWebHdfs

import org.apache.hadoop.http.HttpServer2; // import the required package/class
private void initWebHdfs(Configuration conf) throws IOException {
  // set user pattern based on configuration file
  UserParam.setUserPattern(conf.get(
      HdfsClientConfigKeys.DFS_WEBHDFS_USER_PATTERN_KEY,
      HdfsClientConfigKeys.DFS_WEBHDFS_USER_PATTERN_DEFAULT));

  // add authentication filter for webhdfs
  final String className = conf.get(
      DFSConfigKeys.DFS_WEBHDFS_AUTHENTICATION_FILTER_KEY,
      DFSConfigKeys.DFS_WEBHDFS_AUTHENTICATION_FILTER_DEFAULT);
  final String name = className;

  final String pathSpec = WebHdfsFileSystem.PATH_PREFIX + "/*";
  Map<String, String> params = getAuthFilterParams(conf);
  HttpServer2.defineFilter(httpServer.getWebAppContext(), name, className,
      params, new String[] { pathSpec });
  HttpServer2.LOG.info("Added filter '" + name + "' (class=" + className
      + ")");

  // add webhdfs packages
  httpServer.addJerseyResourcePackage(NamenodeWebHdfsMethods.class
      .getPackage().getName() + ";" + Param.class.getPackage().getName(),
      pathSpec);
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 25 | Source: NameNodeHttpServer.java



Note: The org.apache.hadoop.http.HttpServer2 examples in this article were compiled from GitHub, MSDocs and other source-code and documentation platforms. The snippets come from open-source projects contributed by their developers, and copyright remains with the original authors; follow the corresponding project's license when using or redistributing them, and do not republish without permission.

