
Java IdUserGroup Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.nfs.nfs3.IdUserGroup. If you are wondering what the IdUserGroup class is for, how to use it, or where to find examples of it, the curated code samples below should help.



The IdUserGroup class belongs to the org.apache.hadoop.nfs.nfs3 package. Twenty code examples using the class are shown below, sorted by popularity by default.
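Before the examples, here is a minimal standalone sketch of the role IdUserGroup plays for the NFS gateway: mapping user and group names to numeric NFS uid/gid values, with unknown names falling back to a placeholder id rather than failing. The class name and the `getUidAllowingUnknown`/`getGidAllowingUnknown` method names mirror calls seen in the examples below; the hard-coded maps and the fallback value here are invented for illustration only (the real class reads the system's user/group databases).

```java
import java.util.HashMap;
import java.util.Map;

// Simplified, hypothetical stand-in for org.apache.hadoop.nfs.nfs3.IdUserGroup.
// Illustrates the name-to-id mapping the NFS gateway needs; not the real implementation.
public class IdUserGroupSketch {
    private static final int NOBODY_ID = 65534; // common "nobody" uid/gid (assumed fallback)

    private final Map<String, Integer> uidMap = new HashMap<>();
    private final Map<String, Integer> gidMap = new HashMap<>();

    public IdUserGroupSketch() {
        // Invented sample entries; the real class loads these from the OS.
        uidMap.put("hdfs", 1001);
        gidMap.put("hadoop", 1001);
    }

    // Mirrors the getUidAllowingUnknown(...) call used in the examples:
    // an unknown user maps to a placeholder id instead of throwing.
    public int getUidAllowingUnknown(String user) {
        return uidMap.getOrDefault(user, NOBODY_ID);
    }

    public int getGidAllowingUnknown(String group) {
        return gidMap.getOrDefault(group, NOBODY_ID);
    }

    public static void main(String[] args) {
        IdUserGroupSketch iug = new IdUserGroupSketch();
        System.out.println(iug.getUidAllowingUnknown("hdfs"));     // known user -> 1001
        System.out.println(iug.getUidAllowingUnknown("stranger")); // unknown user -> 65534
    }
}
```

This is the pattern the constructors below rely on when they pass a shared `iug` instance into WriteManager and the attribute-building helpers.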

Example 1: RpcProgramNfs3

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
public RpcProgramNfs3(List<String> exports, Configuration config)
    throws IOException {
  super("NFS3", "localhost", Nfs3Constant.PORT, Nfs3Constant.PROGRAM,
      Nfs3Constant.VERSION, Nfs3Constant.VERSION, 100);
 
  config.set(FsPermission.UMASK_LABEL, "000");
  iug = new IdUserGroup();
  writeManager = new WriteManager(iug, config);
  clientCache = new DFSClientCache(config);
  superUserClient = new DFSClient(NameNode.getAddress(config), config);
  replication = (short) config.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
      DFSConfigKeys.DFS_REPLICATION_DEFAULT);
  blockSize = config.getLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY,
      DFSConfigKeys.DFS_BLOCK_SIZE_DEFAULT);
  bufferSize = config.getInt("io.file.buffer.size", 4096);
  
  writeDumpDir = config.get("dfs.nfs3.dump.dir", "/tmp/.hdfs-nfs");    
  boolean enableDump = config.getBoolean("dfs.nfs3.enableDump", true);
  if (!enableDump) {
    writeDumpDir = null;
  } else {
    clearDirectory(writeDumpDir);
  }
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 25, Source: RpcProgramNfs3.java


Example 2: WriteManager

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
WriteManager(IdUserGroup iug, final Configuration config) {
  this.iug = iug;
  this.config = config;
  streamTimeout = config.getLong(Nfs3Constant.OUTPUT_STREAM_TIMEOUT,
      Nfs3Constant.OUTPUT_STREAM_TIMEOUT_DEFAULT);
  LOG.info("Stream timeout is " + streamTimeout + "ms.");
  if (streamTimeout < Nfs3Constant.OUTPUT_STREAM_TIMEOUT_MIN_DEFAULT) {
    LOG.info("Reset stream timeout to minimum value " +
        Nfs3Constant.OUTPUT_STREAM_TIMEOUT_MIN_DEFAULT + "ms.");
    streamTimeout = Nfs3Constant.OUTPUT_STREAM_TIMEOUT_MIN_DEFAULT;
  }
  maxStreams = config.getInt(Nfs3Constant.MAX_OPEN_FILES,
      Nfs3Constant.MAX_OPEN_FILES_DEFAULT);
  LOG.info("Maximum open streams is " + maxStreams);
  this.fileContextCache = new OpenFileCtxCache(config, streamTimeout);
}
 
Developer: hopshadoop, Project: hops, Lines: 17, Source: WriteManager.java


Example 3: WriteManager

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
WriteManager(IdUserGroup iug, final Configuration config) {
  this.iug = iug;
  this.config = config;
  streamTimeout = config.getLong(Nfs3Constant.OUTPUT_STREAM_TIMEOUT,
      Nfs3Constant.OUTPUT_STREAM_TIMEOUT_DEFAULT);
  LOG.info("Stream timeout is " + streamTimeout + "ms.");
  if (streamTimeout < Nfs3Constant.OUTPUT_STREAM_TIMEOUT_MIN_DEFAULT) {
    LOG.info("Reset stream timeout to minimum value "
        + Nfs3Constant.OUTPUT_STREAM_TIMEOUT_MIN_DEFAULT + "ms.");
    streamTimeout = Nfs3Constant.OUTPUT_STREAM_TIMEOUT_MIN_DEFAULT;
  }
  maxStreams = config.getInt(Nfs3Constant.MAX_OPEN_FILES,
      Nfs3Constant.MAX_OPEN_FILES_DEFAULT);
  LOG.info("Maximum open streams is "+ maxStreams);
  this.fileContextCache = new OpenFileCtxCache(config, streamTimeout);
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 17, Source: WriteManager.java


Example 4: WriteManager

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
WriteManager(IdUserGroup iug, final Configuration config) {
  this.iug = iug;
  
  streamTimeout = config.getLong("dfs.nfs3.stream.timeout",
      DEFAULT_STREAM_TIMEOUT);
  LOG.info("Stream timeout is " + streamTimeout + "ms.");
  if (streamTimeout < MINIMIUM_STREAM_TIMEOUT) {
    LOG.info("Reset stream timeout to minimum value "
        + MINIMIUM_STREAM_TIMEOUT + "ms.");
    streamTimeout = MINIMIUM_STREAM_TIMEOUT;
  }
  
  this.streamMonitor = new StreamMonitor();
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 15, Source: WriteManager.java


Example 5: getFileAttr

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
/**
 * If the file is in cache, update the size based on the cached data size
 */
Nfs3FileAttributes getFileAttr(DFSClient client, FileHandle fileHandle,
    IdUserGroup iug) throws IOException {
  String fileIdPath = Nfs3Utils.getFileIdPath(fileHandle);
  Nfs3FileAttributes attr = Nfs3Utils.getFileAttr(client, fileIdPath, iug);
  if (attr != null) {
    OpenFileCtx openFileCtx = openFileMap.get(fileHandle);
    if (openFileCtx != null) {
      attr.setSize(openFileCtx.getNextOffset());
      attr.setUsed(openFileCtx.getNextOffset());
    }
  }
  return attr;
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 17, Source: WriteManager.java


Example 6: getNfs3FileAttrFromFileStatus

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
public static Nfs3FileAttributes getNfs3FileAttrFromFileStatus(
    HdfsFileStatus fs, IdUserGroup iug) {
  /**
   * Some 32bit Linux client has problem with 64bit fileId: it seems the 32bit
   * client takes only the lower 32bit of the fileId and treats it as signed
   * int. When the 32th bit is 1, the client considers it invalid.
   */
  return new Nfs3FileAttributes(fs.isDir(), fs.getChildrenNum(), fs
      .getPermission().toShort(), iug.getUidAllowingUnknown(fs.getOwner()),
      iug.getGidAllowingUnknown(fs.getGroup()), fs.getLen(), 0 /* fsid */,
      fs.getFileId(), fs.getModificationTime(), fs.getAccessTime());
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 13, Source: Nfs3Utils.java
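The comment in the example above notes that some 32-bit Linux clients keep only the lower 32 bits of a 64-bit fileId and treat the result as a signed int, so an id with bit 31 set looks negative and hence invalid. The short demo below (not part of the Hadoop source; the sample id value is made up) shows exactly that narrowing behavior:

```java
// Demonstrates the 32-bit client problem described in the example's comment:
// a 64-bit fileId whose bit 31 is set becomes negative once a client
// narrows it to a signed 32-bit int.
public class FileIdTruncation {
    public static void main(String[] args) {
        long fileId = 0x180000000L;   // 64-bit id with bit 31 (and bit 32) set
        int truncated = (int) fileId; // what a 32-bit client keeps: lower 32 bits
        System.out.println(truncated); // prints -2147483648
    }
}
```

Java's narrowing conversion from long to int simply discards the upper 32 bits, which is the same loss a 32-bit client suffers on the wire.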


Example 7: OpenFileCtx

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
OpenFileCtx(HdfsDataOutputStream fos, Nfs3FileAttributes latestAttr,
    String dumpFilePath, DFSClient client, IdUserGroup iug) {
  this.fos = fos;
  this.latestAttr = latestAttr;
  // We use the ReverseComparatorOnMin as the comparator of the map. In this
  // way, we first dump the data with larger offset. In the meanwhile, we
  // retrieve the last element to write back to HDFS.
  pendingWrites = new ConcurrentSkipListMap<OffsetRange, WriteCtx>(
      OffsetRange.ReverseComparatorOnMin);
  
  pendingCommits = new ConcurrentSkipListMap<Long, CommitCtx>();
  
  updateLastAccessTime();
  activeState = true;
  asyncStatus = false;
  asyncWriteBackStartOffset = 0;
  dumpOut = null;
  raf = null;
  nonSequentialWriteInMemory = new AtomicLong(0);

  this.dumpFilePath = dumpFilePath;
  enabledDump = dumpFilePath == null ? false : true;
  nextOffset = new AtomicLong();
  nextOffset.set(latestAttr.getSize());
  try {
    assert (nextOffset.get() == this.fos.getPos());
  } catch (IOException e) {
  }
  dumpThread = null;
  this.client = client;
  this.iug = iug;
}
 
Developer: hopshadoop, Project: hops, Lines: 33, Source: OpenFileCtx.java


Example 8: getFileAttr

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
/**
 * If the file is in cache, update the size based on the cached data size
 */
Nfs3FileAttributes getFileAttr(DFSClient client, FileHandle fileHandle,
    IdUserGroup iug) throws IOException {
  String fileIdPath = Nfs3Utils.getFileIdPath(fileHandle);
  Nfs3FileAttributes attr = Nfs3Utils.getFileAttr(client, fileIdPath, iug);
  if (attr != null) {
    OpenFileCtx openFileCtx = fileContextCache.get(fileHandle);
    if (openFileCtx != null) {
      attr.setSize(openFileCtx.getNextOffset());
      attr.setUsed(openFileCtx.getNextOffset());
    }
  }
  return attr;
}
 
Developer: hopshadoop, Project: hops, Lines: 17, Source: WriteManager.java


Example 9: RpcProgramNfs3

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
public RpcProgramNfs3(Configuration config) throws IOException {
  super("NFS3", "localhost", config.getInt(Nfs3Constant.NFS3_SERVER_PORT,
          Nfs3Constant.NFS3_SERVER_PORT_DEFAULT), Nfs3Constant.PROGRAM,
      Nfs3Constant.VERSION, Nfs3Constant.VERSION);

  config.set(FsPermission.UMASK_LABEL, "000");
  iug = new IdUserGroup();
  
  exports = NfsExports.getInstance(config);
  writeManager = new WriteManager(iug, config);
  clientCache = new DFSClientCache(config);
  superUserClient = new DFSClient(NameNode.getAddress(config), config);
  replication = (short) config.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
      DFSConfigKeys.DFS_REPLICATION_DEFAULT);
  blockSize = config.getLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY,
      DFSConfigKeys.DFS_BLOCK_SIZE_DEFAULT);
  bufferSize = config
      .getInt(CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY,
          CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT);
  
  writeDumpDir = config.get(Nfs3Constant.FILE_DUMP_DIR_KEY,
      Nfs3Constant.FILE_DUMP_DIR_DEFAULT);
  boolean enableDump = config.getBoolean(Nfs3Constant.ENABLE_FILE_DUMP_KEY,
      Nfs3Constant.ENABLE_FILE_DUMP_DEFAULT);
  UserGroupInformation.setConfiguration(config);
  SecurityUtil.login(config, DFS_NFS_KEYTAB_FILE_KEY, DFS_NFS_USER_NAME_KEY);

  if (!enableDump) {
    writeDumpDir = null;
  } else {
    clearDirectory(writeDumpDir);
  }

  rpcCallCache = new RpcCallCache("NFS3", 256);
}
 
Developer: hopshadoop, Project: hops, Lines: 36, Source: RpcProgramNfs3.java


Example 10: getNfs3FileAttrFromFileStatus

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
public static Nfs3FileAttributes getNfs3FileAttrFromFileStatus(
    HdfsFileStatus fs, IdUserGroup iug) {
  /**
   * Some 32bit Linux client has problem with 64bit fileId: it seems the 32bit
   * client takes only the lower 32bit of the fileId and treats it as signed
   * int. When the 32th bit is 1, the client considers it invalid.
   */
  NfsFileType fileType = fs.isDir() ? NfsFileType.NFSDIR : NfsFileType.NFSREG;
  fileType = fs.isSymlink() ? NfsFileType.NFSLNK : fileType;
  
  return new Nfs3FileAttributes(fileType, fs.getChildrenNum(),
      fs.getPermission().toShort(), iug.getUidAllowingUnknown(fs.getOwner()),
      iug.getGidAllowingUnknown(fs.getGroup()), fs.getLen(), 0 /* fsid */,
      fs.getFileId(), fs.getModificationTime(), fs.getAccessTime());
}
 
Developer: hopshadoop, Project: hops, Lines: 16, Source: Nfs3Utils.java


Example 11: receivedNewWriteInternal

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
private void receivedNewWriteInternal(DFSClient dfsClient,
    WRITE3Request request, Channel channel, int xid,
    AsyncDataService asyncDataService, IdUserGroup iug) {
  WriteStableHow stableHow = request.getStableHow();
  WccAttr preOpAttr = latestAttr.getWccAttr();
  int count = request.getCount();

  WriteCtx writeCtx = addWritesToCache(request, channel, xid);
  if (writeCtx == null) {
    // offset < nextOffset
    processOverWrite(dfsClient, request, channel, xid, iug);
  } else {
    // The writes is added to pendingWrites.
    // Check and start writing back if necessary
    boolean startWriting = checkAndStartWrite(asyncDataService, writeCtx);
    if (!startWriting) {
      // offset > nextOffset. check if we need to dump data
      checkDump();
      
      // In test, noticed some Linux client sends a batch (e.g., 1MB)
      // of reordered writes and won't send more writes until it gets
      // responses of the previous batch. So here send response immediately
      // for unstable non-sequential write
      if (request.getStableHow() == WriteStableHow.UNSTABLE) {
        if (LOG.isDebugEnabled()) {
          LOG.debug("UNSTABLE write request, send response for offset: "
              + writeCtx.getOffset());
        }
        WccData fileWcc = new WccData(preOpAttr, latestAttr);
        WRITE3Response response = new WRITE3Response(Nfs3Status.NFS3_OK,
            fileWcc, count, stableHow, Nfs3Constant.WRITE_COMMIT_VERF);
        Nfs3Utils
            .writeChannel(channel, response.writeHeaderAndResponse(new XDR(),
                xid, new VerifierNone()), xid);
        writeCtx.setReplied(true);
      }
    }
  }
}
 
Developer: chendave, Project: hadoop-TCP, Lines: 40, Source: OpenFileCtx.java


Example 12: WriteManager

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
WriteManager(IdUserGroup iug, final Configuration config) {
  this.iug = iug;
  this.config = config;
  
  streamTimeout = config.getLong("dfs.nfs3.stream.timeout",
      DEFAULT_STREAM_TIMEOUT);
  LOG.info("Stream timeout is " + streamTimeout + "ms.");
  if (streamTimeout < MINIMIUM_STREAM_TIMEOUT) {
    LOG.info("Reset stream timeout to minimum value "
        + MINIMIUM_STREAM_TIMEOUT + "ms.");
    streamTimeout = MINIMIUM_STREAM_TIMEOUT;
  }
  
  this.streamMonitor = new StreamMonitor();
}
 
Developer: chendave, Project: hadoop-TCP, Lines: 16, Source: WriteManager.java


Example 13: RpcProgramNfs3

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
public RpcProgramNfs3(Configuration config) throws IOException {
  super("NFS3", "localhost", Nfs3Constant.PORT, Nfs3Constant.PROGRAM,
      Nfs3Constant.VERSION, Nfs3Constant.VERSION);
 
  config.set(FsPermission.UMASK_LABEL, "000");
  iug = new IdUserGroup();
  
  exports = NfsExports.getInstance(config);
  writeManager = new WriteManager(iug, config);
  clientCache = new DFSClientCache(config);
  superUserClient = new DFSClient(NameNode.getAddress(config), config);
  replication = (short) config.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
      DFSConfigKeys.DFS_REPLICATION_DEFAULT);
  blockSize = config.getLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY,
      DFSConfigKeys.DFS_BLOCK_SIZE_DEFAULT);
  bufferSize = config.getInt(
      CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY,
      CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT);
  
  writeDumpDir = config.get(Nfs3Constant.FILE_DUMP_DIR_KEY,
      Nfs3Constant.FILE_DUMP_DIR_DEFAULT);
  boolean enableDump = config.getBoolean(Nfs3Constant.ENABLE_FILE_DUMP_KEY,
      Nfs3Constant.ENABLE_FILE_DUMP_DEFAULT);
  if (!enableDump) {
    writeDumpDir = null;
  } else {
    clearDirectory(writeDumpDir);
  }

  rpcCallCache = new RpcCallCache("NFS3", 256);
}
 
Developer: chendave, Project: hadoop-TCP, Lines: 32, Source: RpcProgramNfs3.java


Example 14: getNfs3FileAttrFromFileStatus

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
public static Nfs3FileAttributes getNfs3FileAttrFromFileStatus(
    HdfsFileStatus fs, IdUserGroup iug) {
  /**
   * Some 32bit Linux client has problem with 64bit fileId: it seems the 32bit
   * client takes only the lower 32bit of the fileId and treats it as signed
   * int. When the 32th bit is 1, the client considers it invalid.
   */
  NfsFileType fileType = fs.isDir() ? NfsFileType.NFSDIR : NfsFileType.NFSREG;
  fileType = fs.isSymlink() ? NfsFileType.NFSLNK : fileType;
  
  return new Nfs3FileAttributes(fileType, fs.getChildrenNum(), fs
      .getPermission().toShort(), iug.getUidAllowingUnknown(fs.getOwner()),
      iug.getGidAllowingUnknown(fs.getGroup()), fs.getLen(), 0 /* fsid */,
      fs.getFileId(), fs.getModificationTime(), fs.getAccessTime());
}
 
Developer: chendave, Project: hadoop-TCP, Lines: 16, Source: Nfs3Utils.java


Example 15: OpenFileCtx

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
OpenFileCtx(HdfsDataOutputStream fos, Nfs3FileAttributes latestAttr,
    String dumpFilePath, DFSClient client, IdUserGroup iug) {
  this.fos = fos;
  this.latestAttr = latestAttr;
  // We use the ReverseComparatorOnMin as the comparator of the map. In this
  // way, we first dump the data with larger offset. In the meanwhile, we
  // retrieve the last element to write back to HDFS.
  pendingWrites = new ConcurrentSkipListMap<OffsetRange, WriteCtx>(
      OffsetRange.ReverseComparatorOnMin);
  
  pendingCommits = new ConcurrentSkipListMap<Long, CommitCtx>();
  
  updateLastAccessTime();
  activeState = true;
  asyncStatus = false;
  asyncWriteBackStartOffset = 0;
  dumpOut = null;
  raf = null;
  nonSequentialWriteInMemory = new AtomicLong(0);

  this.dumpFilePath = dumpFilePath;
  enabledDump = dumpFilePath == null ? false : true;
  nextOffset = new AtomicLong();
  nextOffset.set(latestAttr.getSize());
  try {
    assert (nextOffset.get() == this.fos.getPos());
  } catch (IOException e) {
  }
  dumpThread = null;
  this.client = client;
  this.iug = iug;
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 32, Source: OpenFileCtx.java


Example 16: RpcProgramNfs3

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
public RpcProgramNfs3(Configuration config) throws IOException {
  super("NFS3", "localhost", config.getInt(Nfs3Constant.NFS3_SERVER_PORT,
      Nfs3Constant.NFS3_SERVER_PORT_DEFAULT), Nfs3Constant.PROGRAM,
      Nfs3Constant.VERSION, Nfs3Constant.VERSION);
 
  config.set(FsPermission.UMASK_LABEL, "000");
  iug = new IdUserGroup();
  
  exports = NfsExports.getInstance(config);
  writeManager = new WriteManager(iug, config);
  clientCache = new DFSClientCache(config);
  superUserClient = new DFSClient(NameNode.getAddress(config), config);
  replication = (short) config.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
      DFSConfigKeys.DFS_REPLICATION_DEFAULT);
  blockSize = config.getLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY,
      DFSConfigKeys.DFS_BLOCK_SIZE_DEFAULT);
  bufferSize = config.getInt(
      CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY,
      CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT);
  
  writeDumpDir = config.get(Nfs3Constant.FILE_DUMP_DIR_KEY,
      Nfs3Constant.FILE_DUMP_DIR_DEFAULT);
  boolean enableDump = config.getBoolean(Nfs3Constant.ENABLE_FILE_DUMP_KEY,
      Nfs3Constant.ENABLE_FILE_DUMP_DEFAULT);
  UserGroupInformation.setConfiguration(config);
  SecurityUtil.login(config, DFS_NFS_KEYTAB_FILE_KEY,
          DFS_NFS_USER_NAME_KEY);

  if (!enableDump) {
    writeDumpDir = null;
  } else {
    clearDirectory(writeDumpDir);
  }

  rpcCallCache = new RpcCallCache("NFS3", 256);
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 37, Source: RpcProgramNfs3.java


Example 17: getFileAttr

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
public static Nfs3FileAttributes getFileAttr(DFSClient client,
    String fileIdPath, IdUserGroup iug) throws IOException {
  HdfsFileStatus fs = getFileStatus(client, fileIdPath);
  return fs == null ? null : getNfs3FileAttrFromFileStatus(fs, iug);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 6, Source: Nfs3Utils.java


Example 18: createWccData

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
public static WccData createWccData(final WccAttr preOpAttr,
    DFSClient dfsClient, final String fileIdPath, final IdUserGroup iug)
    throws IOException {
  Nfs3FileAttributes postOpDirAttr = getFileAttr(dfsClient, fileIdPath, iug);
  return new WccData(preOpAttr, postOpDirAttr);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 7, Source: Nfs3Utils.java


Example 19: receivedNewWriteInternal

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
private void receivedNewWriteInternal(DFSClient dfsClient,
    WRITE3Request request, Channel channel, int xid,
    AsyncDataService asyncDataService, IdUserGroup iug) {
  WriteStableHow stableHow = request.getStableHow();
  WccAttr preOpAttr = latestAttr.getWccAttr();
  int count = request.getCount();

  WriteCtx writeCtx = addWritesToCache(request, channel, xid);
  if (writeCtx == null) {
    // offset < nextOffset
    processOverWrite(dfsClient, request, channel, xid, iug);
  } else {
    // The writes is added to pendingWrites.
    // Check and start writing back if necessary
    boolean startWriting = checkAndStartWrite(asyncDataService, writeCtx);
    if (!startWriting) {
      // offset > nextOffset. check if we need to dump data
      checkDump();
      
      // In test, noticed some Linux client sends a batch (e.g., 1MB)
      // of reordered writes and won't send more writes until it gets
      // responses of the previous batch. So here send response immediately
      // for unstable non-sequential write
      if (stableHow != WriteStableHow.UNSTABLE) {
        LOG.info("Have to change stable write to unstable write:" +
            request.getStableHow());
        stableHow = WriteStableHow.UNSTABLE;
      }

      if (LOG.isDebugEnabled()) {
        LOG.debug("UNSTABLE write request, send response for offset: " +
            writeCtx.getOffset());
      }
      WccData fileWcc = new WccData(preOpAttr, latestAttr);
      WRITE3Response response =
          new WRITE3Response(Nfs3Status.NFS3_OK, fileWcc, count, stableHow,
              Nfs3Constant.WRITE_COMMIT_VERF);
      Nfs3Utils.writeChannel(channel,
          response.writeHeaderAndResponse(new XDR(), xid, new VerifierNone()),
          xid);
      writeCtx.setReplied(true);
    }
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 45, Source: OpenFileCtx.java


Example 20: testEviction

import org.apache.hadoop.nfs.nfs3.IdUserGroup; // import the required package/class
@Test
public void testEviction() throws IOException, InterruptedException {
  Configuration conf = new Configuration();

  // Only two entries will be in the cache
  conf.setInt(Nfs3Constant.MAX_OPEN_FILES, 2);

  DFSClient dfsClient = Mockito.mock(DFSClient.class);
  Nfs3FileAttributes attr = new Nfs3FileAttributes();
  HdfsDataOutputStream fos = Mockito.mock(HdfsDataOutputStream.class);
  Mockito.when(fos.getPos()).thenReturn((long) 0);

  OpenFileCtx context1 =
      new OpenFileCtx(fos, attr, "/dumpFilePath", dfsClient,
          new IdUserGroup());
  OpenFileCtx context2 =
      new OpenFileCtx(fos, attr, "/dumpFilePath", dfsClient,
          new IdUserGroup());
  OpenFileCtx context3 =
      new OpenFileCtx(fos, attr, "/dumpFilePath", dfsClient,
          new IdUserGroup());
  OpenFileCtx context4 =
      new OpenFileCtx(fos, attr, "/dumpFilePath", dfsClient,
          new IdUserGroup());
  OpenFileCtx context5 =
      new OpenFileCtx(fos, attr, "/dumpFilePath", dfsClient,
          new IdUserGroup());

  OpenFileCtxCache cache = new OpenFileCtxCache(conf, 10 * 60 * 100);

  boolean ret = cache.put(new FileHandle(1), context1);
  assertTrue(ret);
  Thread.sleep(1000);
  ret = cache.put(new FileHandle(2), context2);
  assertTrue(ret);
  ret = cache.put(new FileHandle(3), context3);
  assertFalse(ret);
  assertTrue(cache.size() == 2);

  // Wait for the oldest stream to be evict-able, insert again
  Thread.sleep(Nfs3Constant.OUTPUT_STREAM_TIMEOUT_MIN_DEFAULT);
  assertTrue(cache.size() == 2);

  ret = cache.put(new FileHandle(3), context3);
  assertTrue(ret);
  assertTrue(cache.size() == 2);
  assertTrue(cache.get(new FileHandle(1)) == null);

  // Test inactive entry is evicted immediately
  context3.setActiveStatusForTest(false);
  ret = cache.put(new FileHandle(4), context4);
  assertTrue(ret);

  // Now the cache has context2 and context4
  // Test eviction failure if all entries have pending work.
  context2.getPendingWritesForTest().put(new OffsetRange(0, 100),
      new WriteCtx(null, 0, 0, 0, null, null, null, 0, false, null));
  context4.getPendingCommitsForTest()
      .put(new Long(100), new CommitCtx(0, null, 0, attr));
  Thread.sleep(Nfs3Constant.OUTPUT_STREAM_TIMEOUT_MIN_DEFAULT);
  ret = cache.put(new FileHandle(5), context5);
  assertFalse(ret);
}
 
Developer: hopshadoop, Project: hops, Lines: 64, Source: TestOpenFileCtxCache.java



Note: the org.apache.hadoop.nfs.nfs3.IdUserGroup examples in this article were collected from source code and documentation hosted on platforms such as GitHub and MSDocs, with snippets selected from open-source projects contributed by various developers. Copyright of the source code remains with its original authors; consult each project's license before distributing or using the code, and do not republish without permission.

