Java ReadOnlyList Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.util.ReadOnlyList. If you are wondering what the ReadOnlyList class does, how to use it, or what real code that uses it looks like, the selected examples below should help.



The ReadOnlyList class belongs to the org.apache.hadoop.hdfs.util package. Twenty code examples of the class are shown below, ordered by popularity by default.
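Before the examples, here is a minimal standalone sketch (not part of the original collection) of how ReadOnlyList is typically consumed: an existing java.util.List is wrapped in an anonymous read-only view, mirroring the pattern of Example 15 below, and then read via size()/get()/iteration and searched with ReadOnlyList.Util.binarySearch, as several of the listing examples do. The wrap helper, class name, and sample data are illustrative assumptions, not Hadoop code.

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.hdfs.util.ReadOnlyList;

public class ReadOnlyListSketch {

  // Wrap an existing (already sorted) java.util.List in a read-only view.
  // This mirrors the anonymous ReadOnlyList built in Example 15; the helper
  // itself is illustrative and not part of the Hadoop API.
  static <T> ReadOnlyList<T> wrap(final List<T> backing) {
    return new ReadOnlyList<T>() {
      @Override
      public Iterator<T> iterator() { return backing.iterator(); }
      @Override
      public boolean isEmpty() { return backing.isEmpty(); }
      @Override
      public int size() { return backing.size(); }
      @Override
      public T get(int i) { return backing.get(i); }
    };
  }

  public static void main(String[] args) {
    ReadOnlyList<String> names = wrap(Arrays.asList("alpha", "beta", "delta"));

    // Read-only access: size(), get(int), and iteration (ReadOnlyList is Iterable).
    System.out.println("size = " + names.size() + ", first = " + names.get(0));
    for (String n : names) {
      System.out.println(n);
    }

    // Util.binarySearch follows the java.util.Collections.binarySearch convention:
    // index of the key if present, otherwise (-insertionPoint - 1), which is why
    // the snapshot-listing examples convert a negative result with "-skipSize - 1".
    int hit = ReadOnlyList.Util.binarySearch(names, "beta");   // 1
    int miss = ReadOnlyList.Util.binarySearch(names, "gamma"); // negative
    System.out.println("hit = " + hit + ", miss = " + miss);

    // Util.asList exposes the same elements as a java.util.List view.
    List<String> view = ReadOnlyList.Util.asList(names);
    System.out.println(view);
  }
}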

Example 1: checkSubAccess

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/** Guarded by {@link FSNamesystem#readLock()} */
private void checkSubAccess(byte[][] pathByNameArr, int pathIdx, INode inode,
    int snapshotId, FsAction access, boolean ignoreEmptyDir)
    throws AccessControlException {
  if (inode == null || !inode.isDirectory()) {
    return;
  }

  Stack<INodeDirectory> directories = new Stack<INodeDirectory>();
  for(directories.push(inode.asDirectory()); !directories.isEmpty(); ) {
    INodeDirectory d = directories.pop();
    ReadOnlyList<INode> cList = d.getChildrenList(snapshotId);
    if (!(cList.isEmpty() && ignoreEmptyDir)) {
      //TODO have to figure this out with inodeattribute provider
      check(getINodeAttrs(pathByNameArr, pathIdx, d, snapshotId),
          inode.getFullPathName(), access);
    }

    for(INode child : cList) {
      if (child.isDirectory()) {
        directories.push(child.asDirectory());
      }
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: FSPermissionChecker.java


Example 2: saveChildren

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Save children INodes.
 * @param children The list of children INodes
 * @param out The DataOutputStream to write
 * @param inSnapshot Whether the parent directory or its ancestor is in
 *                   the deleted list of some snapshot (caused by rename or
 *                   deletion)
 * @param counter Counter to increment for namenode startup progress
 * @return Number of children that are directory
 */
private int saveChildren(ReadOnlyList<INode> children,
    DataOutputStream out, boolean inSnapshot, Counter counter)
    throws IOException {
  // Write normal children INode.
  out.writeInt(children.size());
  int dirNum = 0;
  for(INode child : children) {
    // print all children first
    // TODO: for HDFS-5428, we cannot change the format/content of fsimage
    // here, thus even if the parent directory is in snapshot, we still
    // do not handle INodeUC as those stored in deleted list
    saveINode2Image(child, out, false, referenceMap, counter);
    if (child.isDirectory()) {
      dirNum++;
    } else if (inSnapshot && child.isFile()
        && child.asFile().isUnderConstruction()) {
      this.snapshotUCMap.put(child.getId(), child.asFile());
    }
    if (checkCancelCounter++ % CHECK_CANCEL_INTERVAL == 0) {
      context.checkCancelled();
    }
  }
  return dirNum;
}
 
Developer: naver, Project: hadoop, Lines: 35, Source: FSImageFormat.java


Example 3: checkSnapshotList

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Check the correctness of snapshot list within snapshottable dir
 */
private void checkSnapshotList(INodeDirectory srcRoot,
    String[] sortedNames, String[] names) {
  assertTrue(srcRoot.isSnapshottable());
  ReadOnlyList<Snapshot> listByName = srcRoot
      .getDirectorySnapshottableFeature().getSnapshotList();
  assertEquals(sortedNames.length, listByName.size());
  for (int i = 0; i < listByName.size(); i++) {
    assertEquals(sortedNames[i], listByName.get(i).getRoot().getLocalName());
  }
  List<DirectoryDiff> listByTime = srcRoot.getDiffs().asList();
  assertEquals(names.length, listByTime.size());
  for (int i = 0; i < listByTime.size(); i++) {
    Snapshot s = srcRoot.getDirectorySnapshottableFeature().getSnapshotById(
        listByTime.get(i).getSnapshotId());
    assertEquals(names[i], s.getRoot().getLocalName());
  }
}
 
Developer: naver, Project: hadoop, Lines: 21, Source: TestSnapshotRename.java


Example 4: checkSubAccess

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Guarded by {@link FSNamesystem#readLock()}
 */
private void checkSubAccess(String user, Set<String> groups, INode inode,
    int snapshotId, FsAction access, boolean ignoreEmptyDir)
    throws AccessControlException {
  if (inode == null || !inode.isDirectory()) {
    return;
  }

  Stack<INodeDirectory> directories = new Stack<INodeDirectory>();
  for (directories.push(inode.asDirectory()); !directories.isEmpty(); ) {
    INodeDirectory d = directories.pop();
    ReadOnlyList<INode> cList = d.getChildrenList(snapshotId);
    if (!(cList.isEmpty() && ignoreEmptyDir)) {
      check(user, groups, d, snapshotId, access);
    }

    for (INode child : cList) {
      if (child.isDirectory()) {
        directories.push(child.asDirectory());
      }
    }
  }
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 26, Source: DefaultAuthorizationProvider.java


Example 5: getSnapshotsListing

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Get a listing of all the snapshots of a snapshottable directory
 */
private DirectoryListing getSnapshotsListing(String src, byte[] startAfter)
    throws UnresolvedLinkException, IOException {
  Preconditions.checkState(hasReadLock());
  Preconditions.checkArgument(
      src.endsWith(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR), 
      "%s does not end with %s", src, HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR);
  
  final String dirPath = normalizePath(src.substring(0,
      src.length() - HdfsConstants.DOT_SNAPSHOT_DIR.length()));
  
  final INode node = this.getINode(dirPath);
  final INodeDirectorySnapshottable dirNode = INodeDirectorySnapshottable
      .valueOf(node, dirPath);
  final ReadOnlyList<Snapshot> snapshots = dirNode.getSnapshotList();
  int skipSize = ReadOnlyList.Util.binarySearch(snapshots, startAfter);
  skipSize = skipSize < 0 ? -skipSize - 1 : skipSize + 1;
  int numOfListing = Math.min(snapshots.size() - skipSize, this.lsLimit);
  final HdfsFileStatus listing[] = new HdfsFileStatus[numOfListing];
  for (int i = 0; i < numOfListing; i++) {
    Root sRoot = snapshots.get(i + skipSize).getRoot();
    listing[i] = createFileStatus(sRoot.getLocalNameBytes(), sRoot, null);
  }
  return new DirectoryListing(
      listing, snapshots.size() - skipSize - numOfListing);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 29, Source: FSDirectory.java


Example 6: computeQuotaUsage

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
@Override
public final Quota.Counts computeQuotaUsage(Quota.Counts counts,
    boolean useCache, int lastSnapshotId) {
  if ((useCache && isQuotaSet()) || lastSnapshotId == Snapshot.INVALID_ID) {
    return super.computeQuotaUsage(counts, useCache, lastSnapshotId);
  }
  
  Snapshot lastSnapshot = diffs.getSnapshotById(lastSnapshotId);
  
  ReadOnlyList<INode> childrenList = getChildrenList(lastSnapshot);
  for (INode child : childrenList) {
    child.computeQuotaUsage(counts, useCache, lastSnapshotId);
  }
  
  counts.add(Quota.NAMESPACE, 1);
  return counts;
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 18, Source: INodeDirectoryWithSnapshot.java


Example 7: saveChildren

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Save children INodes.
 * @param children The list of children INodes
 * @param out The DataOutputStream to write
 * @param counter Counter to increment for namenode startup progress
 * @return Number of children that are directory
 */
private int saveChildren(ReadOnlyList<INode> children, DataOutputStream out,
    Counter counter) throws IOException {
  // Write normal children INode. 
  out.writeInt(children.size());
  int dirNum = 0;
  int i = 0;
  for(INode child : children) {
    // print all children first
    saveINode2Image(child, out, false, referenceMap, counter);
    if (child.isDirectory()) {
      dirNum++;
    }
    if (i++ % 50 == 0) {
      context.checkCancelled();
    }
  }
  return dirNum;
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 26, Source: FSImageFormat.java


Example 8: checkSubAccess

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/** Guarded by {@link FSNamesystem#readLock()} */
private void checkSubAccess(INode inode, int snapshotId, FsAction access,
    boolean ignoreEmptyDir) throws AccessControlException {
  if (inode == null || !inode.isDirectory()) {
    return;
  }

  Stack<INodeDirectory> directories = new Stack<INodeDirectory>();
  for(directories.push(inode.asDirectory()); !directories.isEmpty(); ) {
    INodeDirectory d = directories.pop();
    ReadOnlyList<INode> cList = d.getChildrenList(snapshotId);
    if (!(cList.isEmpty() && ignoreEmptyDir)) {
      check(d, snapshotId, access);
    }

    for(INode child : cList) {
      if (child.isDirectory()) {
        directories.push(child.asDirectory());
      }
    }
  }
}
 
Developer: yncxcw, Project: FlexMap, Lines: 23, Source: FSPermissionChecker.java


Example 9: saveChildren

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Save children INodes.
 * @param children The list of children INodes
 * @param out The DataOutputStream to write
 * @param inSnapshot Whether the parent directory or its ancestor is in
 *                   the deleted list of some snapshot (caused by rename or
 *                   deletion)
 * @param counter Counter to increment for namenode startup progress
 * @return Number of children that are directory
 */
private int saveChildren(ReadOnlyList<INode> children,
    DataOutputStream out, boolean inSnapshot, Counter counter)
    throws IOException {
  // Write normal children INode.
  out.writeInt(children.size());
  int dirNum = 0;
  int i = 0;
  for(INode child : children) {
    // print all children first
    // TODO: for HDFS-5428, we cannot change the format/content of fsimage
    // here, thus even if the parent directory is in snapshot, we still
    // do not handle INodeUC as those stored in deleted list
    saveINode2Image(child, out, false, referenceMap, counter);
    if (child.isDirectory()) {
      dirNum++;
    } else if (inSnapshot && child.isFile()
        && child.asFile().isUnderConstruction()) {
      this.snapshotUCMap.put(child.getId(), child.asFile());
    }
    if (i++ % 50 == 0) {
      context.checkCancelled();
    }
  }
  return dirNum;
}
 
Developer: yncxcw, Project: FlexMap, Lines: 36, Source: FSImageFormat.java


Example 10: getSnapshotsListing

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Get a listing of all the snapshots of a snapshottable directory
 */
private static DirectoryListing getSnapshotsListing(
    FSDirectory fsd, String src, byte[] startAfter)
    throws IOException {
  Preconditions.checkState(fsd.hasReadLock());
  Preconditions.checkArgument(
      src.endsWith(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR),
      "%s does not end with %s", src, HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR);

  final String dirPath = FSDirectory.normalizePath(src.substring(0,
      src.length() - HdfsConstants.DOT_SNAPSHOT_DIR.length()));

  final INode node = fsd.getINode(dirPath);
  final INodeDirectory dirNode = INodeDirectory.valueOf(node, dirPath);
  final DirectorySnapshottableFeature sf = dirNode.getDirectorySnapshottableFeature();
  if (sf == null) {
    throw new SnapshotException(
        "Directory is not a snapshottable directory: " + dirPath);
  }
  final ReadOnlyList<Snapshot> snapshots = sf.getSnapshotList();
  int skipSize = ReadOnlyList.Util.binarySearch(snapshots, startAfter);
  skipSize = skipSize < 0 ? -skipSize - 1 : skipSize + 1;
  int numOfListing = Math.min(snapshots.size() - skipSize, fsd.getLsLimit());
  final HdfsFileStatus listing[] = new HdfsFileStatus[numOfListing];
  for (int i = 0; i < numOfListing; i++) {
    Snapshot.Root sRoot = snapshots.get(i + skipSize).getRoot();
    listing[i] = createFileStatus(fsd, src, sRoot.getLocalNameBytes(), sRoot,
        BlockStoragePolicySuite.ID_UNSPECIFIED, Snapshot.CURRENT_STATE_ID,
        false, INodesInPath.fromINode(sRoot));
  }
  return new DirectoryListing(
      listing, snapshots.size() - skipSize - numOfListing);
}
 
Developer: naver, Project: hadoop, Lines: 36, Source: FSDirStatAndListingOp.java


Example 11: validateOverwrite

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
private static void validateOverwrite(
    String src, String dst, boolean overwrite, INode srcInode, INode dstInode)
    throws IOException {
  String error;// It's OK to rename a file to a symlink and vice versa
  if (dstInode.isDirectory() != srcInode.isDirectory()) {
    error = "Source " + src + " and destination " + dst
        + " must both be directories";
    NameNode.stateChangeLog.warn("DIR* FSDirectory.unprotectedRenameTo: "
        + error);
    throw new IOException(error);
  }
  if (!overwrite) { // If destination exists, overwrite flag must be true
    error = "rename destination " + dst + " already exists";
    NameNode.stateChangeLog.warn("DIR* FSDirectory.unprotectedRenameTo: "
        + error);
    throw new FileAlreadyExistsException(error);
  }
  if (dstInode.isDirectory()) {
    final ReadOnlyList<INode> children = dstInode.asDirectory()
        .getChildrenList(Snapshot.CURRENT_STATE_ID);
    if (!children.isEmpty()) {
      error = "rename destination directory is not empty: " + dst;
      NameNode.stateChangeLog.warn("DIR* FSDirectory.unprotectedRenameTo: "
          + error);
      throw new IOException(error);
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 29, Source: FSDirRenameOp.java


Example 12: serializeINodeDirectorySection

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
void serializeINodeDirectorySection(OutputStream out) throws IOException {
  Iterator<INodeWithAdditionalFields> iter = fsn.getFSDirectory()
      .getINodeMap().getMapIterator();
  final ArrayList<INodeReference> refList = parent.getSaverContext()
      .getRefList();
  int i = 0;
  while (iter.hasNext()) {
    INodeWithAdditionalFields n = iter.next();
    if (!n.isDirectory()) {
      continue;
    }

    ReadOnlyList<INode> children = n.asDirectory().getChildrenList(
        Snapshot.CURRENT_STATE_ID);
    if (children.size() > 0) {
      INodeDirectorySection.DirEntry.Builder b = INodeDirectorySection.
          DirEntry.newBuilder().setParent(n.getId());
      for (INode inode : children) {
        if (!inode.isReference()) {
          b.addChildren(inode.getId());
        } else {
          refList.add(inode.asReference());
          b.addRefChildren(refList.size() - 1);
        }
      }
      INodeDirectorySection.DirEntry e = b.build();
      e.writeDelimitedTo(out);
    }

    ++i;
    if (i % FSImageFormatProtobuf.Saver.CHECK_CANCEL_INTERVAL == 0) {
      context.checkCancelled();
    }
  }
  parent.commitSection(summary,
      FSImageFormatProtobuf.SectionName.INODE_DIR);
}
 
Developer: naver, Project: hadoop, Lines: 38, Source: FSImageFormatPBINode.java


Example 13: computeNeeded

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Computes the needed number of bytes and files for a path.
 * @return CacheDirectiveStats describing the needed stats for this path
 */
private CacheDirectiveStats computeNeeded(String path, short replication) {
  FSDirectory fsDir = namesystem.getFSDirectory();
  INode node;
  long requestedBytes = 0;
  long requestedFiles = 0;
  CacheDirectiveStats.Builder builder = new CacheDirectiveStats.Builder();
  try {
    node = fsDir.getINode(path);
  } catch (UnresolvedLinkException e) {
    // We don't cache through symlinks
    return builder.build();
  }
  if (node == null) {
    return builder.build();
  }
  if (node.isFile()) {
    requestedFiles = 1;
    INodeFile file = node.asFile();
    requestedBytes = file.computeFileSize();
  } else if (node.isDirectory()) {
    INodeDirectory dir = node.asDirectory();
    ReadOnlyList<INode> children = dir
        .getChildrenList(Snapshot.CURRENT_STATE_ID);
    requestedFiles = children.size();
    for (INode child : children) {
      if (child.isFile()) {
        requestedBytes += child.asFile().computeFileSize();
      }
    }
  }
  return new CacheDirectiveStats.Builder()
      .setBytesNeeded(requestedBytes)
      .setFilesCached(requestedFiles)
      .build();
}
 
Developer: naver, Project: hadoop, Lines: 40, Source: CacheManager.java


Example 14: saveSnapshots

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Save snapshots and snapshot quota for a snapshottable directory.
 * @param current The directory that the snapshots belongs to.
 * @param out The {@link DataOutput} to write.
 * @throws IOException
 */
public static void saveSnapshots(INodeDirectory current, DataOutput out)
    throws IOException {
  DirectorySnapshottableFeature sf = current.getDirectorySnapshottableFeature();
  Preconditions.checkArgument(sf != null);
  // list of snapshots in snapshotsByNames
  ReadOnlyList<Snapshot> snapshots = sf.getSnapshotList();
  out.writeInt(snapshots.size());
  for (Snapshot s : snapshots) {
    // write the snapshot id
    out.writeInt(s.getId());
  }
  // snapshot quota
  out.writeInt(sf.getSnapshotQuota());
}
 
Developer: naver, Project: hadoop, Lines: 21, Source: SnapshotFSImageFormat.java


Example 15: getChildrenList

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * @return The children list of a directory in a snapshot.
 *         Since the snapshot is read-only, the logical view of the list is
 *         never changed although the internal data structure may mutate.
 */
private ReadOnlyList<INode> getChildrenList(final INodeDirectory currentDir) {
  return new ReadOnlyList<INode>() {
    private List<INode> children = null;

    private List<INode> initChildren() {
      if (children == null) {
        final ChildrenDiff combined = new ChildrenDiff();
        for (DirectoryDiff d = DirectoryDiff.this; d != null; 
            d = d.getPosterior()) {
          combined.combinePosterior(d.diff, null);
        }
        children = combined.apply2Current(ReadOnlyList.Util.asList(
            currentDir.getChildrenList(Snapshot.CURRENT_STATE_ID)));
      }
      return children;
    }

    @Override
    public Iterator<INode> iterator() {
      return initChildren().iterator();
    }

    @Override
    public boolean isEmpty() {
      return childrenSize == 0;
    }

    @Override
    public int size() {
      return childrenSize;
    }

    @Override
    public INode get(int i) {
      return initChildren().get(i);
    }
  };
}
 
Developer: naver, Project: hadoop, Lines: 44, Source: DirectoryWithSnapshotFeature.java


Example 16: getChild

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * @param name the name of the child
 * @param snapshotId
 *          if it is not {@link Snapshot#CURRENT_STATE_ID}, get the result
 *          from the corresponding snapshot; otherwise, get the result from
 *          the current directory.
 * @return the child inode.
 */
public INode getChild(byte[] name, int snapshotId) {
  DirectoryWithSnapshotFeature sf;
  if (snapshotId == Snapshot.CURRENT_STATE_ID || 
      (sf = getDirectoryWithSnapshotFeature()) == null) {
    ReadOnlyList<INode> c = getCurrentChildrenList();
    final int i = ReadOnlyList.Util.binarySearch(c, name);
    return i < 0 ? null : c.get(i);
  }
  
  return sf.getChild(this, name, snapshotId);
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: INodeDirectory.java


Example 17: nextChild

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Given a child's name, return the index of the next child
 *
 * @param name a child's name
 * @return the index of the next child
 */
static int nextChild(ReadOnlyList<INode> children, byte[] name) {
  if (name.length == 0) { // empty name
    return 0;
  }
  int nextPos = ReadOnlyList.Util.binarySearch(children, name) + 1;
  if (nextPos >= 0) {
    return nextPos;
  }
  return -nextPos;
}
 
Developer: naver, Project: hadoop, Lines: 17, Source: INodeDirectory.java


Example 18: computeQuotaUsage

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
@Override
public QuotaCounts computeQuotaUsage(BlockStoragePolicySuite bsps,
    byte blockStoragePolicyId, QuotaCounts counts, boolean useCache,
    int lastSnapshotId) {
  final DirectoryWithSnapshotFeature sf = getDirectoryWithSnapshotFeature();

  // we are computing the quota usage for a specific snapshot here, i.e., the
  // computation only includes files/directories that exist at the time of the
  // given snapshot
  if (sf != null && lastSnapshotId != Snapshot.CURRENT_STATE_ID
      && !(useCache && isQuotaSet())) {
    ReadOnlyList<INode> childrenList = getChildrenList(lastSnapshotId);
    for (INode child : childrenList) {
      final byte childPolicyId = child.getStoragePolicyIDForQuota(blockStoragePolicyId);
      child.computeQuotaUsage(bsps, childPolicyId, counts, useCache,
          lastSnapshotId);
    }
    counts.addNameSpace(1);
    return counts;
  }
  
  // compute the quota usage in the scope of the current directory tree
  final DirectoryWithQuotaFeature q = getDirectoryWithQuotaFeature();
  if (useCache && q != null && q.isQuotaSet()) { // use the cached quota
    return q.AddCurrentSpaceUsage(counts);
  } else {
    useCache = q != null && !q.isQuotaSet() ? false : useCache;
    return computeDirectoryQuotaUsage(bsps, blockStoragePolicyId, counts,
        useCache, lastSnapshotId);
  }
}
 
Developer: naver, Project: hadoop, Lines: 32, Source: INodeDirectory.java


Example 19: computeDirectoryContentSummary

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
protected ContentSummaryComputationContext computeDirectoryContentSummary(
    ContentSummaryComputationContext summary, int snapshotId) {
  ReadOnlyList<INode> childrenList = getChildrenList(snapshotId);
  // Explicit traversing is done to enable repositioning after relinquishing
  // and reacquiring locks.
  for (int i = 0;  i < childrenList.size(); i++) {
    INode child = childrenList.get(i);
    byte[] childName = child.getLocalNameBytes();

    long lastYieldCount = summary.getYieldCount();
    child.computeContentSummary(summary);

    // Check whether the computation was paused in the subtree.
    // The counts may be off, but traversing the rest of children
    // should be made safe.
    if (lastYieldCount == summary.getYieldCount()) {
      continue;
    }
    // The locks were released and reacquired. Check parent first.
    if (getParent() == null) {
      // Stop further counting and return whatever we have so far.
      break;
    }
    // Obtain the children list again since it may have been modified.
    childrenList = getChildrenList(snapshotId);
    // Reposition in case the children list is changed. Decrement by 1
    // since it will be incremented when loops.
    i = nextChild(childrenList, childName) - 1;
  }

  // Increment the directory count for this directory.
  summary.getCounts().addContent(Content.DIRECTORY, 1);
  // Relinquish and reacquire locks if necessary.
  summary.yield();
  return summary;
}
 
Developer: naver, Project: hadoop, Lines: 37, Source: INodeDirectory.java


Example 20: getSnapshotsListing

import org.apache.hadoop.hdfs.util.ReadOnlyList; // import the required package/class
/**
 * Get a listing of all the snapshots of a snapshottable directory
 */
private static DirectoryListing getSnapshotsListing(
    FSDirectory fsd, String src, byte[] startAfter)
    throws IOException {
  Preconditions.checkState(fsd.hasReadLock());
  Preconditions.checkArgument(
      src.endsWith(HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR),
      "%s does not end with %s", src, HdfsConstants.SEPARATOR_DOT_SNAPSHOT_DIR);

  final String dirPath = FSDirectory.normalizePath(src.substring(0,
      src.length() - HdfsConstants.DOT_SNAPSHOT_DIR.length()));

  final INode node = fsd.getINode(dirPath);
  final INodeDirectory dirNode = INodeDirectory.valueOf(node, dirPath);
  final DirectorySnapshottableFeature sf = dirNode.getDirectorySnapshottableFeature();
  if (sf == null) {
    throw new SnapshotException(
        "Directory is not a snapshottable directory: " + dirPath);
  }
  final ReadOnlyList<Snapshot> snapshots = sf.getSnapshotList();
  int skipSize = ReadOnlyList.Util.binarySearch(snapshots, startAfter);
  skipSize = skipSize < 0 ? -skipSize - 1 : skipSize + 1;
  int numOfListing = Math.min(snapshots.size() - skipSize, fsd.getLsLimit());
  final HdfsFileStatus listing[] = new HdfsFileStatus[numOfListing];
  for (int i = 0; i < numOfListing; i++) {
    Snapshot.Root sRoot = snapshots.get(i + skipSize).getRoot();
    INodeAttributes nodeAttrs = getINodeAttributes(
        fsd, src, sRoot.getLocalNameBytes(),
        node, Snapshot.CURRENT_STATE_ID);
    listing[i] = createFileStatus(
        fsd, sRoot.getLocalNameBytes(),
        sRoot, nodeAttrs,
        HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED,
        Snapshot.CURRENT_STATE_ID, false,
        INodesInPath.fromINode(sRoot));
  }
  return new DirectoryListing(
      listing, snapshots.size() - skipSize - numOfListing);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 42, Source: FSDirStatAndListingOp.java



Note: The org.apache.hadoop.hdfs.util.ReadOnlyList examples in this article were compiled from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective developers; copyright remains with the original authors, and any distribution or use should follow each project's license. Do not reproduce without permission.

