
Java HftpFileSystem Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.web.HftpFileSystem. If you have been wondering what HftpFileSystem is for, how to use it, or what working examples look like, the curated class examples below should help.



The HftpFileSystem class belongs to the org.apache.hadoop.hdfs.web package. The sections below present 18 code examples of the class, sorted by popularity by default.
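Before the individual examples, a minimal end-to-end sketch may help orient readers. HftpFileSystem backs the hftp:// URI scheme in Hadoop 2.x (HFTP was removed in Hadoop 3), so instances are normally obtained through the FileSystem.get factory rather than constructed directly, as most of the examples below also do. In this sketch, localhost:50070 stands in for the cluster's dfs.namenode.http-address and /tmp/example.txt is a placeholder path; both are assumptions, not values taken from the examples.

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HftpReadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The hftp:// scheme maps to HftpFileSystem, so the generic factory
    // returns an HftpFileSystem instance for this URI.
    FileSystem fs = FileSystem.get(URI.create("hftp://localhost:50070"), conf);
    // HFTP is read-only: open and listStatus work, create and delete do not.
    InputStream in = fs.open(new Path("/tmp/example.txt"));
    try {
      IOUtils.copyBytes(in, System.out, 4096, false);
    } finally {
      in.close();
    }
  }
}

Because HFTP serves file data over the namenode and datanode HTTP ports, it supports only read operations; this is why the examples below focus on reading, port resolution, and delegation tokens.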

Example 1: createToken

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
private static Token<DelegationTokenIdentifier> createToken(URI serviceUri) {
  byte[] pw = "hadoop".getBytes();
  byte[] ident = new DelegationTokenIdentifier(new Text("owner"), new Text(
      "renewer"), new Text("realuser")).getBytes();
  Text service = new Text(serviceUri.toString());
  return new Token<DelegationTokenIdentifier>(ident, pw,
      HftpFileSystem.TOKEN_KIND, service);
}
 
Developer: naver, Project: hadoop, Lines: 9, Source: TestDelegationTokenRemoteFetcher.java


Example 2: getHftpFileSystem

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
/**
 * @return a {@link HftpFileSystem} object.
 */
public HftpFileSystem getHftpFileSystem(int nnIndex) throws IOException {
  String uri = "hftp://"
      + nameNodes[nnIndex].conf
          .get(DFS_NAMENODE_HTTP_ADDRESS_KEY);
  try {
    return (HftpFileSystem)FileSystem.get(new URI(uri), conf);
  } catch (URISyntaxException e) {
    throw new IOException(e);
  }
}
 
Developer: naver, Project: hadoop, Lines: 14, Source: MiniDFSCluster.java


Example 3: getHftpFileSystemAs

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
/**
 * @return a {@link HftpFileSystem} object as the specified user.
 */
public HftpFileSystem getHftpFileSystemAs(final String username,
    final Configuration conf, final int nnIndex, final String... groups)
    throws IOException, InterruptedException {
  final UserGroupInformation ugi = UserGroupInformation.createUserForTesting(
      username, groups);
  return ugi.doAs(new PrivilegedExceptionAction<HftpFileSystem>() {
    @Override
    public HftpFileSystem run() throws Exception {
      return getHftpFileSystem(nnIndex);
    }
  });
}
 
Developer: naver, Project: hadoop, Lines: 16, Source: MiniDFSCluster.java


Example 4: getDTfromRemote

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
static public Credentials getDTfromRemote(String nnAddr, String renewer)
    throws IOException {
  DataInputStream dis = null;
  InetSocketAddress serviceAddr = NetUtils.createSocketAddr(nnAddr);
  
  try {
    StringBuffer url = new StringBuffer();
    if (renewer != null) {
      url.append(nnAddr).append(GetDelegationTokenServlet.PATH_SPEC)
          .append("?").append(GetDelegationTokenServlet.RENEWER).append("=")
          .append(renewer);
    } else {
      url.append(nnAddr).append(GetDelegationTokenServlet.PATH_SPEC);
    }
    URL remoteURL = new URL(url.toString());
    URLConnection connection =
        SecurityUtil2.openSecureHttpConnection(remoteURL);
    InputStream in = connection.getInputStream();
    Credentials ts = new Credentials();
    dis = new DataInputStream(in);
    ts.readFields(dis);
    for (Token<?> token : ts.getAllTokens()) {
      token.setKind(HftpFileSystem.TOKEN_KIND);
      SecurityUtil2.setTokenService(token, serviceAddr);
    }
    return ts;
  } catch (Exception e) {
    throw new IOException("Unable to obtain remote token", e);
  } finally {
    if (dis != null) {
      dis.close();
    }
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 35, Source: DelegationTokenFetcher.java
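The Credentials object returned by getDTfromRemote is typically merged into the caller's UserGroupInformation so that later HFTP requests can authenticate with the fetched token. A minimal usage sketch, assuming the method above is in scope, a namenode HTTP address of namenode:50070, and a renewer named mapred (all placeholders, not values from the source):

// Hypothetical call site: fetch delegation tokens from the namenode's HTTP
// interface and make them available to the current user. Requires
// org.apache.hadoop.security.Credentials and
// org.apache.hadoop.security.UserGroupInformation.
Credentials creds = getDTfromRemote("http://namenode:50070", "mapred");
UserGroupInformation.getCurrentUser().addCredentials(creds);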


Example 5: setUp

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
@BeforeClass
public static void setUp() throws IOException {
  ((Log4JLogger) HftpFileSystem.LOG).getLogger().setLevel(Level.ALL);

  final long seed = RAN.nextLong();
  System.out.println("seed=" + seed);
  RAN.setSeed(seed);

  config = new Configuration();
  cluster = new MiniDFSCluster.Builder(config).numDataNodes(2).build();
  blockPoolId = cluster.getNamesystem().getBlockPoolId();
  hftpUri =
      "hftp://" + config.get(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY);
}
 
Developer: hopshadoop, Project: hops, Lines: 15, Source: TestHftpFileSystem.java


Example 6: initFileSystems

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
@Before
public void initFileSystems() throws IOException {
  hdfs = cluster.getFileSystem();
  hftpFs = (HftpFileSystem) new Path(hftpUri).getFileSystem(config);
  // clear out the namespace
  for (FileStatus stat : hdfs.listStatus(new Path("/"))) {
    hdfs.delete(stat.getPath(), true);
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 10, Source: TestHftpFileSystem.java


Example 7: testHftpDefaultPorts

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
@Test
public void testHftpDefaultPorts() throws IOException {
  Configuration conf = new Configuration();
  URI uri = URI.create("hftp://localhost");
  HftpFileSystem fs = (HftpFileSystem) FileSystem.get(uri, conf);

  assertEquals(DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_DEFAULT,
      fs.getDefaultPort());
  assertEquals(DFSConfigKeys.DFS_NAMENODE_HTTPS_PORT_DEFAULT,
      fs.getDefaultSecurePort());

  assertEquals(uri, fs.getUri());
  assertEquals("127.0.0.1:" + DFSConfigKeys.DFS_NAMENODE_HTTPS_PORT_DEFAULT,
      fs.getCanonicalServiceName());
}
 
Developer: hopshadoop, Project: hops, Lines: 16, Source: TestHftpFileSystem.java


Example 8: testHftpCustomDefaultPorts

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
@Test
public void testHftpCustomDefaultPorts() throws IOException {
  Configuration conf = new Configuration();
  conf.setInt("dfs.http.port", 123);
  conf.setInt("dfs.https.port", 456);

  URI uri = URI.create("hftp://localhost");
  HftpFileSystem fs = (HftpFileSystem) FileSystem.get(uri, conf);

  assertEquals(123, fs.getDefaultPort());
  assertEquals(456, fs.getDefaultSecurePort());
  
  assertEquals(uri, fs.getUri());
  assertEquals("127.0.0.1:456", fs.getCanonicalServiceName());
}
 
Developer: hopshadoop, Project: hops, Lines: 16, Source: TestHftpFileSystem.java


Example 9: testHftpCustomUriPortWithDefaultPorts

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
@Test
public void testHftpCustomUriPortWithDefaultPorts() throws IOException {
  Configuration conf = new Configuration();
  URI uri = URI.create("hftp://localhost:123");
  HftpFileSystem fs = (HftpFileSystem) FileSystem.get(uri, conf);

  assertEquals(DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_DEFAULT,
      fs.getDefaultPort());
  assertEquals(DFSConfigKeys.DFS_NAMENODE_HTTPS_PORT_DEFAULT,
      fs.getDefaultSecurePort());

  assertEquals(uri, fs.getUri());
  assertEquals("127.0.0.1:" + DFSConfigKeys.DFS_NAMENODE_HTTPS_PORT_DEFAULT,
      fs.getCanonicalServiceName());
}
 
Developer: hopshadoop, Project: hops, Lines: 16, Source: TestHftpFileSystem.java


Example 10: testHftpCustomUriPortWithCustomDefaultPorts

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
@Test
public void testHftpCustomUriPortWithCustomDefaultPorts() throws IOException {
  Configuration conf = new Configuration();
  conf.setInt("dfs.http.port", 123);
  conf.setInt("dfs.https.port", 456);

  URI uri = URI.create("hftp://localhost:789");
  HftpFileSystem fs = (HftpFileSystem) FileSystem.get(uri, conf);

  assertEquals(123, fs.getDefaultPort());
  assertEquals(456, fs.getDefaultSecurePort());

  assertEquals(uri, fs.getUri());
  assertEquals("127.0.0.1:456", fs.getCanonicalServiceName());
}
 
Developer: hopshadoop, Project: hops, Lines: 16, Source: TestHftpFileSystem.java


Example 11: testHdfsDelegationToken

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
@Test
public void testHdfsDelegationToken() throws Exception {
  SecurityUtilTestHelper.setTokenServiceUseIp(true);

  final Configuration conf = new Configuration();
  conf.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
  UserGroupInformation.setConfiguration(conf);
  UserGroupInformation user = UserGroupInformation
      .createUserForTesting("oom", new String[]{"memory"});
  Token<?> token = new Token<>(new byte[0], new byte[0],
      DelegationTokenIdentifier.HDFS_DELEGATION_KIND,
      new Text("127.0.0.1:8020"));
  user.addToken(token);
  Token<?> token2 =
      new Token<>(null, null, new Text("other token"),
          new Text("127.0.0.1:8021"));
  user.addToken(token2);
  assertEquals("wrong tokens in user", 2, user.getTokens().size());
  FileSystem fs = user.doAs(new PrivilegedExceptionAction<FileSystem>() {
    @Override
    public FileSystem run() throws Exception {
      return FileSystem.get(new URI("hftp://localhost:50470/"), conf);
    }
  });
  assertSame("wrong kind of file system", HftpFileSystem.class,
      fs.getClass());
  Field renewToken = HftpFileSystem.class.getDeclaredField("renewToken");
  renewToken.setAccessible(true);
  assertSame("wrong token", token, renewToken.get(fs));
}
 
Developer: hopshadoop, Project: hops, Lines: 31, Source: TestHftpDelegationToken.java


Example 12: getHftpFileSystem

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
/**
 * @return a {@link HftpFileSystem} object.
 */
public HftpFileSystem getHftpFileSystem(int nnIndex) throws IOException {
  String uri =
      "hftp://" + nameNodes[nnIndex].conf.get(DFS_NAMENODE_HTTP_ADDRESS_KEY);
  try {
    return (HftpFileSystem) FileSystem.get(new URI(uri), conf);
  } catch (URISyntaxException e) {
    throw new IOException(e);
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 13, Source: MiniDFSCluster.java


Example 13: getHftpFileSystemAs

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
/**
 * @return a {@link HftpFileSystem} object as the specified user.
 */
public HftpFileSystem getHftpFileSystemAs(final String username,
    final Configuration conf, final int nnIndex, final String... groups)
    throws IOException, InterruptedException {
  final UserGroupInformation ugi =
      UserGroupInformation.createUserForTesting(username, groups);
  return ugi.doAs(new PrivilegedExceptionAction<HftpFileSystem>() {
    @Override
    public HftpFileSystem run() throws Exception {
      return getHftpFileSystem(nnIndex);
    }
  });
}
 
Developer: hopshadoop, Project: hops, Lines: 16, Source: MiniDFSCluster.java


Example 14: initialValue

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
@Override
protected SimpleDateFormat initialValue() {
  return HftpFileSystem.getDateFormat();
}
 
Developer: naver, Project: hadoop, Lines: 5, Source: ListPathsServlet.java


Example 15: getDTfromRemote

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
static public Credentials getDTfromRemote(URLConnectionFactory factory,
    URI nnUri, String renewer, String proxyUser) throws IOException {
  StringBuilder buf = new StringBuilder(nnUri.toString())
      .append(GetDelegationTokenServlet.PATH_SPEC);
  String separator = "?";
  if (renewer != null) {
    buf.append("?").append(GetDelegationTokenServlet.RENEWER).append("=")
        .append(renewer);
    separator = "&";
  }
  if (proxyUser != null) {
    buf.append(separator).append("doas=").append(proxyUser);
  }

  boolean isHttps = nnUri.getScheme().equals("https");

  HttpURLConnection conn = null;
  DataInputStream dis = null;
  InetSocketAddress serviceAddr = NetUtils.createSocketAddr(nnUri
      .getAuthority());

  try {
    if(LOG.isDebugEnabled()) {
      LOG.debug("Retrieving token from: " + buf);
    }

    conn = run(factory, new URL(buf.toString()));
    InputStream in = conn.getInputStream();
    Credentials ts = new Credentials();
    dis = new DataInputStream(in);
    ts.readFields(dis);
    for (Token<?> token : ts.getAllTokens()) {
      token.setKind(isHttps ? HsftpFileSystem.TOKEN_KIND : HftpFileSystem.TOKEN_KIND);
      SecurityUtil.setTokenService(token, serviceAddr);
    }
    return ts;
  } catch (Exception e) {
    throw new IOException("Unable to obtain remote token", e);
  } finally {
    IOUtils.cleanup(LOG, dis);
    if (conn != null) {
      conn.disconnect();
    }
  }
}
 
Developer: yncxcw, Project: big-c, Lines: 46, Source: DelegationTokenFetcher.java


Example 16: checkTokenSelection

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
private void checkTokenSelection(HftpFileSystem fs, int port,
    Configuration conf) throws IOException {
  UserGroupInformation ugi = UserGroupInformation
      .createUserForTesting(fs.getUri().getAuthority(), new String[]{});

  // use ip-based tokens
  SecurityUtilTestHelper.setTokenServiceUseIp(true);

  // test fallback to hdfs token
  Token<?> hdfsToken = new Token<>(new byte[0], new byte[0],
      DelegationTokenIdentifier.HDFS_DELEGATION_KIND,
      new Text("127.0.0.1:8020"));
  ugi.addToken(hdfsToken);

  // test fallback to hdfs token
  Token<?> token = fs.selectDelegationToken(ugi);
  assertNotNull(token);
  assertEquals(hdfsToken, token);

  // test hftp is favored over hdfs
  Token<?> hftpToken = new Token<>(new byte[0], new byte[0],
      HftpFileSystem.TOKEN_KIND, new Text("127.0.0.1:" + port));
  ugi.addToken(hftpToken);
  token = fs.selectDelegationToken(ugi);
  assertNotNull(token);
  assertEquals(hftpToken, token);
  
  // switch to using host-based tokens, no token should match
  SecurityUtilTestHelper.setTokenServiceUseIp(false);
  token = fs.selectDelegationToken(ugi);
  assertNull(token);
  
  // test fallback to hdfs token
  hdfsToken = new Token<>(new byte[0], new byte[0],
      DelegationTokenIdentifier.HDFS_DELEGATION_KIND,
      new Text("localhost:8020"));
  ugi.addToken(hdfsToken);
  token = fs.selectDelegationToken(ugi);
  assertNotNull(token);
  assertEquals(hdfsToken, token);

  // test hftp is favored over hdfs
  hftpToken = new Token<>(new byte[0], new byte[0],
      HftpFileSystem.TOKEN_KIND, new Text("localhost:" + port));
  ugi.addToken(hftpToken);
  token = fs.selectDelegationToken(ugi);
  assertNotNull(token);
  assertEquals(hftpToken, token);
}
 
Developer: hopshadoop, Project: hops, Lines: 50, Source: TestHftpDelegationToken.java


Example 17: testPropagatedClose

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
@Test
public void testPropagatedClose() throws IOException {
  ByteRangeInputStream brs =
      spy(new HftpFileSystem.RangeHeaderInputStream(new URL("http://test/")));
  
  InputStream mockStream = mock(InputStream.class);
  doReturn(mockStream).when(brs).openInputStream();

  int brisOpens = 0;
  int brisCloses = 0;
  int isCloses = 0;
  
  // first open, shouldn't close underlying stream
  brs.getInputStream();
  verify(brs, times(++brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(isCloses)).close();
  
  // stream is open, shouldn't close underlying stream
  brs.getInputStream();
  verify(brs, times(brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(isCloses)).close();
  
  // seek forces a reopen, should close underlying stream
  brs.seek(1);
  brs.getInputStream();
  verify(brs, times(++brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(++isCloses)).close();

  // verify that the underlying stream isn't closed after a seek
  // ie. the state was correctly updated
  brs.getInputStream();
  verify(brs, times(brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(isCloses)).close();

  // seeking to same location should be a no-op
  brs.seek(1);
  brs.getInputStream();
  verify(brs, times(brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(isCloses)).close();

  // close should of course close
  brs.close();
  verify(brs, times(++brisCloses)).close();
  verify(mockStream, times(++isCloses)).close();
  
  // it's already closed, underlying stream should not close
  brs.close();
  verify(brs, times(++brisCloses)).close();
  verify(mockStream, times(isCloses)).close();
  
  // it's closed, don't reopen it
  boolean errored = false;
  try {
    brs.getInputStream();
  } catch (IOException e) {
    errored = true;
    assertEquals("Stream closed", e.getMessage());
  } finally {
    assertTrue("Read a closed steam", errored);
  }
  verify(brs, times(brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(isCloses)).close();
}
 
Developer: hopshadoop, Project: hops, Lines: 70, Source: TestByteRangeInputStream.java


Example 18: testPropagatedClose

import org.apache.hadoop.hdfs.web.HftpFileSystem; // import the required package/class
@Test
public void testPropagatedClose() throws IOException {
  URLConnectionFactory factory = mock(URLConnectionFactory.class);

  ByteRangeInputStream brs = spy(new HftpFileSystem.RangeHeaderInputStream(
      factory, new URL("http://test/")));

  InputStream mockStream = mock(InputStream.class);
  doReturn(mockStream).when(brs).openInputStream();

  int brisOpens = 0;
  int brisCloses = 0;
  int isCloses = 0;

  // first open, shouldn't close underlying stream
  brs.getInputStream();
  verify(brs, times(++brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(isCloses)).close();

  // stream is open, shouldn't close underlying stream
  brs.getInputStream();
  verify(brs, times(brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(isCloses)).close();

  // seek forces a reopen, should close underlying stream
  brs.seek(1);
  brs.getInputStream();
  verify(brs, times(++brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(++isCloses)).close();

  // verify that the underlying stream isn't closed after a seek
  // ie. the state was correctly updated
  brs.getInputStream();
  verify(brs, times(brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(isCloses)).close();

  // seeking to same location should be a no-op
  brs.seek(1);
  brs.getInputStream();
  verify(brs, times(brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(isCloses)).close();

  // close should of course close
  brs.close();
  verify(brs, times(++brisCloses)).close();
  verify(mockStream, times(++isCloses)).close();

  // it's already closed, underlying stream should not close
  brs.close();
  verify(brs, times(++brisCloses)).close();
  verify(mockStream, times(isCloses)).close();

  // it's closed, don't reopen it
  boolean errored = false;
  try {
    brs.getInputStream();
  } catch (IOException e) {
    errored = true;
    assertEquals("Stream closed", e.getMessage());
  } finally {
    assertTrue("Read a closed steam", errored);
  }
  verify(brs, times(brisOpens)).openInputStream();
  verify(brs, times(brisCloses)).close();
  verify(mockStream, times(isCloses)).close();
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 72, Source: TestByteRangeInputStream.java



Note: the org.apache.hadoop.hdfs.web.HftpFileSystem class examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. The snippets come from open-source projects contributed by their respective authors; copyright remains with the original authors, and any redistribution or use should follow the corresponding project's license. Please do not reproduce without permission.

