
Java OperationCategory Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory. If you are wondering what the OperationCategory class does and how to use it, the curated code examples below should help.



The OperationCategory class belongs to the org.apache.hadoop.hdfs.server.namenode.NameNode package. The following 20 code examples of the OperationCategory class are shown below, sorted by popularity by default.
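Nearly every example below follows the same pattern: check the operation category before taking the namesystem lock, then re-check it after acquiring the lock, because the NameNode's HA state (active vs. standby) can change in between. The following is a minimal, self-contained sketch of that pattern; the `OperationCheckSketch` class, its `active` flag, and `setAttribute` method are hypothetical stand-ins for illustration, not part of Hadoop.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the check / lock / re-check pattern shared by
// the FSNamesystem examples in this article.
public class OperationCheckSketch {
    enum OperationCategory { READ, WRITE, UNCHECKED, CHECKPOINT }

    static boolean active = true;  // stand-in for the NameNode's HA state
    static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Stand-in for FSNamesystem.checkOperation: a standby NameNode rejects
    // READ and WRITE operations but allows UNCHECKED ones.
    static void checkOperation(OperationCategory op) throws Exception {
        if (!active && op != OperationCategory.UNCHECKED) {
            throw new Exception("StandbyException: operation " + op + " not supported");
        }
    }

    static String setAttribute(String src) throws Exception {
        checkOperation(OperationCategory.WRITE);     // cheap pre-check, no lock held
        lock.writeLock().lock();
        try {
            checkOperation(OperationCategory.WRITE); // re-check under the lock
            return "set attribute on " + src;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(setAttribute("/tmp/a"));  // succeeds while active
        active = false;                              // simulate failover to standby
        try {
            setAttribute("/tmp/b");
        } catch (Exception e) {
            System.out.println(e.getMessage());      // operation rejected on standby
        }
    }
}
```

The pre-check avoids acquiring the lock for requests that are doomed to fail; the re-check under the lock makes the decision authoritative, since a failover could have happened between the two calls.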

Example 1: retrievePassword

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
@Override
public byte[] retrievePassword(
    DelegationTokenIdentifier identifier) throws InvalidToken {
  try {
    // this check introduces inconsistency in the authentication to a
    // HA standby NN.  non-token auths are allowed into the namespace which
    // decides whether to throw a StandbyException.  tokens are a bit
    // different in that a standby may be behind and thus not yet know
    // of all tokens issued by the active NN.  the following check does
    // not allow ANY token auth, however it should allow known tokens in
    namesystem.checkOperation(OperationCategory.READ);
  } catch (StandbyException se) {
    // FIXME: this is a hack to get around changing method signatures by
    // tunneling a non-InvalidToken exception as the cause which the
    // RPC server will unwrap before returning to the client
    InvalidToken wrappedStandby = new InvalidToken("StandbyException");
    wrappedStandby.initCause(se);
    throw wrappedStandby;
  }
  return super.retrievePassword(identifier);
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: DelegationTokenSecretManager.java


Example 2: retriableRetrievePassword

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
@Override
public byte[] retriableRetrievePassword(DelegationTokenIdentifier identifier)
    throws InvalidToken, StandbyException, RetriableException, IOException {
  namesystem.checkOperation(OperationCategory.READ);
  try {
    return super.retrievePassword(identifier);
  } catch (InvalidToken it) {
    if (namesystem.inTransitionToActive()) {
      // if the namesystem is currently in the middle of transition to 
      // active state, let client retry since the corresponding editlog may 
      // have not been applied yet
      throw new RetriableException(it);
    } else {
      throw it;
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 18, Source: DelegationTokenSecretManager.java


Example 3: metaSave

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Dump all metadata into specified file
 */
void metaSave(String filename) throws IOException {
  checkSuperuserPrivilege();
  checkOperation(OperationCategory.UNCHECKED);
  writeLock();
  try {
    checkOperation(OperationCategory.UNCHECKED);
    File file = new File(System.getProperty("hadoop.log.dir"), filename);
    PrintWriter out = new PrintWriter(new BufferedWriter(
        new OutputStreamWriter(new FileOutputStream(file), Charsets.UTF_8)));
    metaSave(out);
    out.flush();
    out.close();
  } finally {
    writeUnlock();
  }
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: FSNamesystem.java


Example 4: setPermission

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Set permissions for an existing file.
 * @throws IOException
 */
void setPermission(String src, FsPermission permission) throws IOException {
  HdfsFileStatus auditStat;
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set permission for " + src);
    auditStat = FSDirAttrOp.setPermission(dir, src, permission);
  } catch (AccessControlException e) {
    logAuditEvent(false, "setPermission", src);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "setPermission", src, null, auditStat);
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: FSNamesystem.java


Example 5: setOwner

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Set owner for an existing file.
 * @throws IOException
 */
void setOwner(String src, String username, String group)
    throws IOException {
  HdfsFileStatus auditStat;
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set owner for " + src);
    auditStat = FSDirAttrOp.setOwner(dir, src, username, group);
  } catch (AccessControlException e) {
    logAuditEvent(false, "setOwner", src);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "setOwner", src, null, auditStat);
}
 
Developer: naver, Project: hadoop, Lines: 23, Source: FSNamesystem.java


Example 6: concat

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Moves all the blocks from {@code srcs} and appends them to {@code target}
 * To avoid rollbacks we will verify validity of ALL of the args
 * before we start actual move.
 * 
 * This does not support ".inodes" relative path
 * @param target target to concat into
 * @param srcs file that will be concatenated
 * @throws IOException on error
 */
void concat(String target, String [] srcs, boolean logRetryCache)
    throws IOException {
  checkOperation(OperationCategory.WRITE);
  waitForLoadingFSImage();
  HdfsFileStatus stat = null;
  boolean success = false;
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot concat " + target);
    stat = FSDirConcatOp.concat(dir, target, srcs, logRetryCache);
    success = true;
  } finally {
    writeUnlock();
    if (success) {
      getEditLog().logSync();
    }
    logAuditEvent(success, "concat", Arrays.toString(srcs), target, stat);
  }
}
 
Developer: naver, Project: hadoop, Lines: 31, Source: FSNamesystem.java


Example 7: setTimes

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * stores the modification and access time for this inode. 
 * The access time is precise up to an hour. The transaction, if needed, is
 * written to the edits log but is not flushed.
 */
void setTimes(String src, long mtime, long atime) throws IOException {
  HdfsFileStatus auditStat;
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set times " + src);
    auditStat = FSDirAttrOp.setTimes(dir, src, mtime, atime);
  } catch (AccessControlException e) {
    logAuditEvent(false, "setTimes", src);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "setTimes", src, null, auditStat);
}
 
Developer: naver, Project: hadoop, Lines: 23, Source: FSNamesystem.java


Example 8: createSymlink

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Create a symbolic link.
 */
@SuppressWarnings("deprecation")
void createSymlink(String target, String link,
    PermissionStatus dirPerms, boolean createParent, boolean logRetryCache)
    throws IOException {
  if (!FileSystem.areSymlinksEnabled()) {
    throw new UnsupportedOperationException("Symlinks not supported");
  }
  HdfsFileStatus auditStat = null;
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot create symlink " + link);
    auditStat = FSDirSymlinkOp.createSymlinkInt(this, target, link, dirPerms,
                                                createParent, logRetryCache);
  } catch (AccessControlException e) {
    logAuditEvent(false, "createSymlink", link, target, null);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "createSymlink", link, target, auditStat);
}
 
Developer: naver, Project: hadoop, Lines: 28, Source: FSNamesystem.java


Example 9: setReplication

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Set replication for an existing file.
 * 
 * The NameNode sets new replication and schedules either replication of 
 * under-replicated data blocks or removal of the excessive block copies 
 * if the blocks are over-replicated.
 * 
 * @see ClientProtocol#setReplication(String, short)
 * @param src file name
 * @param replication new replication
 * @return true if successful; 
 *         false if file does not exist or is a directory
 */
boolean setReplication(final String src, final short replication)
    throws IOException {
  boolean success = false;
  waitForLoadingFSImage();
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set replication for " + src);
    success = FSDirAttrOp.setReplication(dir, blockManager, src, replication);
  } catch (AccessControlException e) {
    logAuditEvent(false, "setReplication", src);
    throw e;
  } finally {
    writeUnlock();
  }
  if (success) {
    getEditLog().logSync();
    logAuditEvent(true, "setReplication", src);
  }
  return success;
}
 
Developer: naver, Project: hadoop, Lines: 36, Source: FSNamesystem.java


Example 10: setStoragePolicy

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Set the storage policy for a file or a directory.
 *
 * @param src file/directory path
 * @param policyName storage policy name
 */
void setStoragePolicy(String src, String policyName) throws IOException {
  HdfsFileStatus auditStat;
  waitForLoadingFSImage();
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set storage policy for " + src);
    auditStat = FSDirAttrOp.setStoragePolicy(
        dir, blockManager, src, policyName);
  } catch (AccessControlException e) {
    logAuditEvent(false, "setStoragePolicy", src);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "setStoragePolicy", src, null, auditStat);
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: FSNamesystem.java


Example 11: renameTo

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/** 
 * Change the indicated filename. 
 * @deprecated Use {@link #renameTo(String, String, boolean,
 * Options.Rename...)} instead.
 */
@Deprecated
boolean renameTo(String src, String dst, boolean logRetryCache)
    throws IOException {
  waitForLoadingFSImage();
  checkOperation(OperationCategory.WRITE);
  FSDirRenameOp.RenameOldResult ret = null;
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot rename " + src);
    ret = FSDirRenameOp.renameToInt(dir, src, dst, logRetryCache);
  } catch (AccessControlException e)  {
    logAuditEvent(false, "rename", src, dst, null);
    throw e;
  } finally {
    writeUnlock();
  }
  boolean success = ret != null && ret.success;
  if (success) {
    getEditLog().logSync();
  }
  logAuditEvent(success, "rename", src, dst,
      ret == null ? null : ret.auditStat);
  return success;
}
 
Developer: naver, Project: hadoop, Lines: 31, Source: FSNamesystem.java


Example 12: getFileInfo

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Get the file info for a specific file.
 *
 * @param src The string representation of the path to the file
 * @param resolveLink whether to throw UnresolvedLinkException
 *        if src refers to a symlink
 *
 * @throws AccessControlException if access is denied
 * @throws UnresolvedLinkException if a symlink is encountered.
 *
 * @return object containing information regarding the file
 *         or null if file not found
 * @throws StandbyException
 */
HdfsFileStatus getFileInfo(final String src, boolean resolveLink)
  throws IOException {
  checkOperation(OperationCategory.READ);
  HdfsFileStatus stat = null;
  readLock();
  try {
    checkOperation(OperationCategory.READ);
    stat = FSDirStatAndListingOp.getFileInfo(dir, src, resolveLink);
  } catch (AccessControlException e) {
    logAuditEvent(false, "getfileinfo", src);
    throw e;
  } finally {
    readUnlock();
  }
  logAuditEvent(true, "getfileinfo", src);
  return stat;
}
 
Developer: naver, Project: hadoop, Lines: 32, Source: FSNamesystem.java


Example 13: mkdirs

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Create all the necessary directories
 */
boolean mkdirs(String src, PermissionStatus permissions,
    boolean createParent) throws IOException {
  HdfsFileStatus auditStat = null;
  checkOperation(OperationCategory.WRITE);
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot create directory " + src);
    auditStat = FSDirMkdirOp.mkdirs(this, src, permissions, createParent);
  } catch (AccessControlException e) {
    logAuditEvent(false, "mkdirs", src);
    throw e;
  } finally {
    writeUnlock();
  }
  getEditLog().logSync();
  logAuditEvent(true, "mkdirs", src, null, auditStat);
  return true;
}
 
Developer: naver, Project: hadoop, Lines: 23, Source: FSNamesystem.java


Example 14: setQuota

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Set the namespace quota and storage space quota for a directory.
 * See {@link ClientProtocol#setQuota(String, long, long, StorageType)} for the
 * contract.
 * 
 * Note: This does not support ".inodes" relative path.
 */
void setQuota(String src, long nsQuota, long ssQuota, StorageType type)
    throws IOException {
  checkOperation(OperationCategory.WRITE);
  writeLock();
  boolean success = false;
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot set quota on " + src);
    FSDirAttrOp.setQuota(dir, src, nsQuota, ssQuota, type);
    success = true;
  } finally {
    writeUnlock();
    if (success) {
      getEditLog().logSync();
    }
    logAuditEvent(success, "setQuota", src);
  }
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: FSNamesystem.java


Example 15: getListing

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Get a partial listing of the indicated directory
 *
 * @param src the directory name
 * @param startAfter the name to start after
 * @param needLocation if blockLocations need to be returned
 * @return a partial listing starting after startAfter
 * 
 * @throws AccessControlException if access is denied
 * @throws UnresolvedLinkException if symbolic link is encountered
 * @throws IOException if other I/O error occurred
 */
DirectoryListing getListing(String src, byte[] startAfter,
    boolean needLocation) 
    throws IOException {
  checkOperation(OperationCategory.READ);
  DirectoryListing dl = null;
  readLock();
  try {
    checkOperation(OperationCategory.READ);
    dl = FSDirStatAndListingOp.getListingInt(dir, src, startAfter,
        needLocation);
  } catch (AccessControlException e) {
    logAuditEvent(false, "listStatus", src);
    throw e;
  } finally {
    readUnlock();
  }
  logAuditEvent(true, "listStatus", src);
  return dl;
}
 
Developer: naver, Project: hadoop, Lines: 32, Source: FSNamesystem.java


Example 16: datanodeReport

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
DatanodeInfo[] datanodeReport(final DatanodeReportType type
    ) throws AccessControlException, StandbyException {
  checkSuperuserPrivilege();
  checkOperation(OperationCategory.UNCHECKED);
  readLock();
  try {
    checkOperation(OperationCategory.UNCHECKED);
    final DatanodeManager dm = getBlockManager().getDatanodeManager();      
    final List<DatanodeDescriptor> results = dm.getDatanodeListForReport(type);

    DatanodeInfo[] arr = new DatanodeInfo[results.size()];
    for (int i=0; i<arr.length; i++) {
      arr[i] = new DatanodeInfo(results.get(i));
    }
    return arr;
  } finally {
    readUnlock();
  }
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: FSNamesystem.java


Example 17: getDatanodeStorageReport

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
DatanodeStorageReport[] getDatanodeStorageReport(final DatanodeReportType type
    ) throws AccessControlException, StandbyException {
  checkSuperuserPrivilege();
  checkOperation(OperationCategory.UNCHECKED);
  readLock();
  try {
    checkOperation(OperationCategory.UNCHECKED);
    final DatanodeManager dm = getBlockManager().getDatanodeManager();      
    final List<DatanodeDescriptor> datanodes = dm.getDatanodeListForReport(type);

    DatanodeStorageReport[] reports = new DatanodeStorageReport[datanodes.size()];
    for (int i = 0; i < reports.length; i++) {
      final DatanodeDescriptor d = datanodes.get(i);
      reports[i] = new DatanodeStorageReport(new DatanodeInfo(d),
          d.getStorageReports());
    }
    return reports;
  } finally {
    readUnlock();
  }
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: FSNamesystem.java


Example 18: saveNamespace

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Save namespace image.
 * This will save current namespace into fsimage file and empty edits file.
 * Requires superuser privilege and safe mode.
 * 
 * @throws AccessControlException if superuser privilege is violated.
 * @throws IOException if an I/O error occurs.
 */
void saveNamespace() throws AccessControlException, IOException {
  checkOperation(OperationCategory.UNCHECKED);
  checkSuperuserPrivilege();

  cpLock();  // Block if a checkpointing is in progress on standby.
  readLock();
  try {
    checkOperation(OperationCategory.UNCHECKED);

    if (!isInSafeMode()) {
      throw new IOException("Safe mode should be turned ON "
          + "in order to create namespace image.");
    }
    getFSImage().saveNamespace(this);
  } finally {
    readUnlock();
    cpUnlock();
  }
  LOG.info("New namespace image has been created");
}
 
Developer: naver, Project: hadoop, Lines: 29, Source: FSNamesystem.java


Example 19: restoreFailedStorage

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
/**
 * Enables/Disables/Checks restoring failed storage replicas if the storage becomes available again.
 * Requires superuser privilege.
 * 
 * @throws AccessControlException if superuser privilege is violated.
 */
boolean restoreFailedStorage(String arg) throws AccessControlException,
    StandbyException {
  checkSuperuserPrivilege();
  checkOperation(OperationCategory.UNCHECKED);
  cpLock();  // Block if a checkpointing is in progress on standby.
  writeLock();
  try {
    checkOperation(OperationCategory.UNCHECKED);
    
    // if it is disabled - enable it and vice versa.
    if(arg.equals("check"))
      return getFSImage().getStorage().getRestoreFailedStorage();
    
    boolean val = arg.equals("true");  // false if not
    getFSImage().getStorage().setRestoreFailedStorage(val);
    
    return val;
  } finally {
    writeUnlock();
    cpUnlock();
  }
}
 
Developer: naver, Project: hadoop, Lines: 29, Source: FSNamesystem.java


Example 20: startCheckpoint

import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory; // import the required package/class
NamenodeCommand startCheckpoint(NamenodeRegistration backupNode,
    NamenodeRegistration activeNamenode) throws IOException {
  checkOperation(OperationCategory.CHECKPOINT);
  writeLock();
  try {
    checkOperation(OperationCategory.CHECKPOINT);
    checkNameNodeSafeMode("Checkpoint not started");
    
    LOG.info("Start checkpoint for " + backupNode.getAddress());
    NamenodeCommand cmd = getFSImage().startCheckpoint(backupNode,
        activeNamenode);
    getEditLog().logSync();
    return cmd;
  } finally {
    writeUnlock();
  }
}
 
Developer: naver, Project: hadoop, Lines: 18, Source: FSNamesystem.java



Note: The org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory examples in this article were compiled from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright of the source code remains with the original authors; consult each project's license before using or redistributing the code. Do not reproduce without permission.

