Java MountEntry Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.mount.MountEntry. If you have been wondering what exactly the MountEntry class does, how to use it, or what real-world code that uses it looks like, the curated examples below should help.



The MountEntry class belongs to the org.apache.hadoop.mount package. The sections below show 15 code examples of the class, sorted by popularity by default.
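
Before diving into the examples, here is a minimal, self-contained sketch (not taken from any of the projects below) of the pattern the snippets share: a MountEntry is built from a client host name and an export path, kept in a synchronized list while the export is mounted, and removed by value on unmount. It assumes the hadoop-nfs artifact is on the classpath; the class name, host, and path values are placeholders chosen for illustration.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.mount.MountEntry;

public class MountEntryListSketch {
  public static void main(String[] args) {
    // Same bookkeeping structure used by RpcProgramMountd in the examples below.
    List<MountEntry> mounts =
        Collections.synchronizedList(new ArrayList<MountEntry>());

    // MNT request: record that a client host mounted an export path.
    // "client-01.example.com" and "/export/data" are placeholder values.
    mounts.add(new MountEntry("client-01.example.com", "/export/data"));

    // DUMP request: snapshot the list before serializing it, as the dump() examples do.
    List<MountEntry> copy = new ArrayList<MountEntry>(mounts);
    System.out.println("Active mounts: " + copy.size());

    // UMNT request: remove the entry by value, mirroring
    // mounts.remove(new MountEntry(host, path)) in the umnt() examples.
    mounts.remove(new MountEntry("client-01.example.com", "/export/data"));
    System.out.println("Active mounts after unmount: " + mounts.size());
  }
}

Removing by value in this way only works because MountEntry compares entries by host and path rather than by reference; that behavior is implied by the umnt() examples below, although the equals() implementation itself is not shown in this article.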

Example 1: RpcProgramMountd

import org.apache.hadoop.mount.MountEntry; // import the required package/class
public RpcProgramMountd(NfsConfiguration config,
    DatagramSocket registrationSocket, boolean allowInsecurePorts)
    throws IOException {
  // Note that RPC cache is not enabled
  super("mountd", "localhost", config.getInt(
      NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
      NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
      VERSION_3, registrationSocket, allowInsecurePorts);
  exports = new ArrayList<String>();
  exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
      NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
  this.hostsMatcher = NfsExports.getInstance(config);
  this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
  UserGroupInformation.setConfiguration(config);
  SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
      NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
  this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
}
 
Developer: naver | Project: hadoop | Lines: 19 | Source: RpcProgramMountd.java


Example 2: RpcProgramMountd

import org.apache.hadoop.mount.MountEntry; // import the required package/class
public RpcProgramMountd(NfsConfiguration config,
    DatagramSocket registrationSocket, boolean allowInsecurePorts)
    throws IOException {
  // Note that RPC cache is not enabled
  super("mountd", "localhost", config.getInt(
      NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
      NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
      VERSION_3, registrationSocket, allowInsecurePorts);
  exports = new ArrayList<String>();
  exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
      NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
  this.hostsMatcher = NfsExports.getInstance(config);
  this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
  UserGroupInformation.setConfiguration(config);
  SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
      NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
  this.dfsClient = new DFSClient(DFSUtilClient.getNNAddress(config), config);
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 19 | Source: RpcProgramMountd.java


Example 3: dump

import org.apache.hadoop.mount.MountEntry; // import the required package/class
@Override
public XDR dump(XDR out, int xid, InetAddress client) {
  if (LOG.isDebugEnabled()) {
    LOG.debug("MOUNT NULLOP : " + " client: " + client);
  }

  List<MountEntry> copy = new ArrayList<MountEntry>(mounts);
  MountResponse.writeMountList(out, xid, copy);
  return out;
}
 
Developer: naver | Project: hadoop | Lines: 11 | Source: RpcProgramMountd.java


Example 4: umnt

import org.apache.hadoop.mount.MountEntry; // import the required package/class
@Override
public XDR umnt(XDR xdr, XDR out, int xid, InetAddress client) {
  String path = xdr.readString();
  if (LOG.isDebugEnabled()) {
    LOG.debug("MOUNT UMNT path: " + path + " client: " + client);
  }
  
  String host = client.getHostName();
  mounts.remove(new MountEntry(host, path));
  RpcAcceptedReply.getAcceptInstance(xid, new VerifierNone()).write(out);
  return out;
}
 
Developer: naver | Project: hadoop | Lines: 13 | Source: RpcProgramMountd.java


Example 5: RpcProgramMountd

import org.apache.hadoop.mount.MountEntry; // import the required package/class
public RpcProgramMountd(List<String> exports, Configuration config)
    throws IOException {
  // Note that RPC cache is not enabled
  super("mountd", "localhost", PORT, PROGRAM, VERSION_1, VERSION_3, 0);
  this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
  this.exports = Collections.unmodifiableList(exports);
  this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 9 | Source: RpcProgramMountd.java


Example 6: mnt

import org.apache.hadoop.mount.MountEntry; // import the required package/class
public XDR mnt(XDR xdr, XDR out, int xid, InetAddress client) {
  String path = xdr.readString();
  if (LOG.isDebugEnabled()) {
    LOG.debug("MOUNT MNT path: " + path + " client: " + client);
  }

  String host = client.getHostName();
  if (LOG.isDebugEnabled()) {
    LOG.debug("Got host: " + host + " path: " + path);
  }
  if (!exports.contains(path)) {
    LOG.info("Path " + path + " is not shared.");
    MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_NOENT, out, xid, null);
    return out;
  }

  FileHandle handle = null;
  try {
    HdfsFileStatus exFileStatus = dfsClient.getFileInfo(path);
    
    handle = new FileHandle(exFileStatus.getFileId());
  } catch (IOException e) {
    LOG.error("Can't get handle for export:" + path + ", exception:" + e);
    MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_NOENT, out, xid, null);
    return out;
  }

  assert (handle != null);
  LOG.info("Giving handle (fileId:" + handle.getFileId()
      + ") to client for export " + path);
  mounts.add(new MountEntry(host, path));

  MountResponse.writeMNTResponse(Nfs3Status.NFS3_OK, out, xid,
      handle.getContent());
  return out;
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 37 | Source: RpcProgramMountd.java


Example 7: dump

import org.apache.hadoop.mount.MountEntry; // import the required package/class
public XDR dump(XDR out, int xid, InetAddress client) {
  if (LOG.isDebugEnabled()) {
    LOG.debug("MOUNT NULLOP : " + " client: " + client);
  }

  List<MountEntry> copy = new ArrayList<MountEntry>(mounts);
  MountResponse.writeMountList(out, xid, copy);
  return out;
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 10 | Source: RpcProgramMountd.java


Example 8: umnt

import org.apache.hadoop.mount.MountEntry; // import the required package/class
public XDR umnt(XDR xdr, XDR out, int xid, InetAddress client) {
  String path = xdr.readString();
  if (LOG.isDebugEnabled()) {
    LOG.debug("MOUNT UMNT path: " + path + " client: " + client);
  }
  
  String host = client.getHostName();
  mounts.remove(new MountEntry(host, path));
  RpcAcceptedReply.voidReply(out, xid);
  return out;
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 12 | Source: RpcProgramMountd.java


Example 9: RpcProgramMountd

import org.apache.hadoop.mount.MountEntry; // import the required package/class
public RpcProgramMountd(Configuration config) throws IOException {
  // Note that RPC cache is not enabled
  super("mountd", "localhost", config.getInt("nfs3.mountd.port", PORT),
      PROGRAM, VERSION_1, VERSION_3);
  exports = new ArrayList<String>();
  exports.add(config
      .get(Nfs3Constant.EXPORT_POINT, Nfs3Constant.EXPORT_POINT_DEFAULT));
  this.hostsMatcher = NfsExports.getInstance(config);
  this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
  UserGroupInformation.setConfiguration(config);
  SecurityUtil.login(config, DFS_NFS_KEYTAB_FILE_KEY, DFS_NFS_USER_NAME_KEY);
  this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
}
 
Developer: hopshadoop | Project: hops | Lines: 14 | Source: RpcProgramMountd.java


Example 10: RpcProgramMountd

import org.apache.hadoop.mount.MountEntry; // import the required package/class
public RpcProgramMountd(List<String> exports, Configuration config)
    throws IOException {
  // Note that RPC cache is not enabled
  super("mountd", "localhost", config.getInt("nfs3.mountd.port", PORT),
      PROGRAM, VERSION_1, VERSION_3);
  
  this.hostsMatcher = NfsExports.getInstance(config);
  this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
  this.exports = Collections.unmodifiableList(exports);
  this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
}
 
Developer: chendave | Project: hadoop-TCP | Lines: 12 | Source: RpcProgramMountd.java


Example 11: RpcProgramMountd

import org.apache.hadoop.mount.MountEntry; // import the required package/class
public RpcProgramMountd(Configuration config) throws IOException {
  // Note that RPC cache is not enabled
  super("mountd", "localhost", config.getInt("nfs3.mountd.port", PORT),
      PROGRAM, VERSION_1, VERSION_3);
  exports = new ArrayList<String>();
  exports.add(config.get(Nfs3Constant.EXPORT_POINT,
      Nfs3Constant.EXPORT_POINT_DEFAULT));
  this.hostsMatcher = NfsExports.getInstance(config);
  this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
  UserGroupInformation.setConfiguration(config);
  SecurityUtil.login(config, DFS_NFS_KEYTAB_FILE_KEY,
          DFS_NFS_USER_NAME_KEY);
  this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
}
 
Developer: Seagate | Project: hadoop-on-lustre2 | Lines: 15 | Source: RpcProgramMountd.java


Example 12: mnt

import org.apache.hadoop.mount.MountEntry; // import the required package/class
@Override
public XDR mnt(XDR xdr, XDR out, int xid, InetAddress client) {
  if (hostsMatcher == null) {
    return MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_ACCES, out, xid,
        null);
  }
  AccessPrivilege accessPrivilege = hostsMatcher.getAccessPrivilege(client);
  if (accessPrivilege == AccessPrivilege.NONE) {
    return MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_ACCES, out, xid,
        null);
  }

  String path = xdr.readString();
  if (LOG.isDebugEnabled()) {
    LOG.debug("MOUNT MNT path: " + path + " client: " + client);
  }

  String host = client.getHostName();
  if (LOG.isDebugEnabled()) {
    LOG.debug("Got host: " + host + " path: " + path);
  }
  if (!exports.contains(path)) {
    LOG.info("Path " + path + " is not shared.");
    MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_NOENT, out, xid, null);
    return out;
  }

  FileHandle handle = null;
  try {
    HdfsFileStatus exFileStatus = dfsClient.getFileInfo(path);
    
    handle = new FileHandle(exFileStatus.getFileId());
  } catch (IOException e) {
    LOG.error("Can't get handle for export:" + path, e);
    MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_NOENT, out, xid, null);
    return out;
  }

  assert (handle != null);
  LOG.info("Giving handle (fileId:" + handle.getFileId()
      + ") to client for export " + path);
  mounts.add(new MountEntry(host, path));

  MountResponse.writeMNTResponse(Nfs3Status.NFS3_OK, out, xid,
      handle.getContent());
  return out;
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 48 | Source: RpcProgramMountd.java


Example 13: mnt

import org.apache.hadoop.mount.MountEntry; // import the required package/class
@Override
public XDR mnt(XDR xdr, XDR out, int xid, InetAddress client) {
  AccessPrivilege accessPrivilege = hostsMatcher.getAccessPrivilege(client);
  if (accessPrivilege == AccessPrivilege.NONE) {
    return MountResponse
        .writeMNTResponse(Nfs3Status.NFS3ERR_ACCES, out, xid, null);
  }

  String path = xdr.readString();
  if (LOG.isDebugEnabled()) {
    LOG.debug("MOUNT MNT path: " + path + " client: " + client);
  }

  String host = client.getHostName();
  if (LOG.isDebugEnabled()) {
    LOG.debug("Got host: " + host + " path: " + path);
  }
  if (!exports.contains(path)) {
    LOG.info("Path " + path + " is not shared.");
    MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_NOENT, out, xid, null);
    return out;
  }

  FileHandle handle = null;
  try {
    HdfsFileStatus exFileStatus = dfsClient.getFileInfo(path);
    
    handle = new FileHandle(exFileStatus.getFileId());
  } catch (IOException e) {
    LOG.error("Can't get handle for export:" + path, e);
    MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_NOENT, out, xid, null);
    return out;
  }

  assert (handle != null);
  LOG.info("Giving handle (fileId:" + handle.getFileId() +
      ") to client for export " + path);
  mounts.add(new MountEntry(host, path));

  MountResponse
      .writeMNTResponse(Nfs3Status.NFS3_OK, out, xid, handle.getContent());
  return out;
}
 
Developer: hopshadoop | Project: hops | Lines: 44 | Source: RpcProgramMountd.java


Example 14: mnt

import org.apache.hadoop.mount.MountEntry; // import the required package/class
@Override
public XDR mnt(XDR xdr, XDR out, int xid, InetAddress client) {
  AccessPrivilege accessPrivilege = hostsMatcher.getAccessPrivilege(client);
  if (accessPrivilege == AccessPrivilege.NONE) {
    return MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_ACCES, out, xid,
        null);
  }

  String path = xdr.readString();
  if (LOG.isDebugEnabled()) {
    LOG.debug("MOUNT MNT path: " + path + " client: " + client);
  }

  String host = client.getHostName();
  if (LOG.isDebugEnabled()) {
    LOG.debug("Got host: " + host + " path: " + path);
  }
  if (!exports.contains(path)) {
    LOG.info("Path " + path + " is not shared.");
    MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_NOENT, out, xid, null);
    return out;
  }

  FileHandle handle = null;
  try {
    HdfsFileStatus exFileStatus = dfsClient.getFileInfo(path);
    
    handle = new FileHandle(exFileStatus.getFileId());
  } catch (IOException e) {
    LOG.error("Can't get handle for export:" + path + ", exception:" + e);
    MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_NOENT, out, xid, null);
    return out;
  }

  assert (handle != null);
  LOG.info("Giving handle (fileId:" + handle.getFileId()
      + ") to client for export " + path);
  mounts.add(new MountEntry(host, path));

  MountResponse.writeMNTResponse(Nfs3Status.NFS3_OK, out, xid,
      handle.getContent());
  return out;
}
 
Developer: chendave | Project: hadoop-TCP | Lines: 44 | Source: RpcProgramMountd.java


Example 15: mnt

import org.apache.hadoop.mount.MountEntry; // import the required package/class
@Override
public XDR mnt(XDR xdr, XDR out, int xid, InetAddress client) {
  AccessPrivilege accessPrivilege = hostsMatcher.getAccessPrivilege(client);
  if (accessPrivilege == AccessPrivilege.NONE) {
    return MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_ACCES, out, xid,
        null);
  }

  String path = xdr.readString();
  if (LOG.isDebugEnabled()) {
    LOG.debug("MOUNT MNT path: " + path + " client: " + client);
  }

  String host = client.getHostName();
  if (LOG.isDebugEnabled()) {
    LOG.debug("Got host: " + host + " path: " + path);
  }
  if (!exports.contains(path)) {
    LOG.info("Path " + path + " is not shared.");
    MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_NOENT, out, xid, null);
    return out;
  }

  FileHandle handle = null;
  try {
    HdfsFileStatus exFileStatus = dfsClient.getFileInfo(path);
    
    handle = new FileHandle(exFileStatus.getFileId());
  } catch (IOException e) {
    LOG.error("Can't get handle for export:" + path, e);
    MountResponse.writeMNTResponse(Nfs3Status.NFS3ERR_NOENT, out, xid, null);
    return out;
  }

  assert (handle != null);
  LOG.info("Giving handle (fileId:" + handle.getFileId()
      + ") to client for export " + path);
  mounts.add(new MountEntry(host, path));

  MountResponse.writeMNTResponse(Nfs3Status.NFS3_OK, out, xid,
      handle.getContent());
  return out;
}
 
Developer: Seagate | Project: hadoop-on-lustre2 | Lines: 44 | Source: RpcProgramMountd.java



Note: The org.apache.hadoop.mount.MountEntry examples in this article were compiled from source code and documentation platforms such as GitHub and MSDocs. The code snippets come from open-source projects contributed by their respective authors, and copyright remains with those authors; refer to each project's license before redistributing or using the code. Do not reproduce this article without permission.

