
Java NodeType Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType. If you have been wondering what the NodeType class is for, how to use it, or what real usages look like, the selected code examples below should help.



The NodeType class belongs to the org.apache.hadoop.hdfs.server.common.HdfsServerConstants package. Seventeen code examples of the class are shown below, ordered roughly by popularity.
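Before diving into the examples: NodeType is a plain Java enum (with values such as NAME_NODE, DATA_NODE and JOURNAL_NODE, as seen below) that tags HDFS storage metadata with the kind of server it belongs to. The following minimal sketch is for illustration only; it assumes the public StorageInfo constructor shown in Example 9, and all numeric values are placeholders rather than real cluster metadata.

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
import org.apache.hadoop.hdfs.server.common.StorageInfo;

public class NodeTypeSketch {
  public static void main(String[] args) {
    // Tag a StorageInfo as belonging to a DataNode (constructor from Example 9);
    // layoutVersion, namespaceID, clusterID and cTime are illustrative placeholders.
    NodeType type = NodeType.DATA_NODE;
    StorageInfo info = new StorageInfo(-60, 12345, "example-cluster-id", 0L, type);

    // Because NodeType is an ordinary enum, it can drive a switch,
    // as the checkResult() methods in Examples 3 and 5 do.
    switch (type) {
    case NAME_NODE:
      System.out.println("NameNode storage: " + info);
      break;
    case DATA_NODE:
      System.out.println("DataNode storage: " + info);
      break;
    case JOURNAL_NODE:
      System.out.println("JournalNode storage: " + info);
      break;
    }
  }
}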

Example 1: log

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
/**
 * Writes an INFO log message containing the parameters.
 */
void log(String label, NodeType nodeType, Integer testCase,
    StorageData sd) {
  String testCaseLine = "";
  if (testCase != null) {
    testCaseLine = " testCase="+testCase;
  }
  LOG.info("============================================================");
  LOG.info("***TEST*** " + label + ":"
           + testCaseLine
           + " nodeType="+nodeType
           + " layoutVersion="+sd.storageInfo.getLayoutVersion()
           + " namespaceID="+sd.storageInfo.getNamespaceID()
           + " fsscTime="+sd.storageInfo.getCTime()
           + " clusterID="+sd.storageInfo.getClusterID()
           + " BlockPoolID="+sd.blockPoolId);
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: TestDFSStartupVersions.java


Example 2: testConvertNamenodeRegistration

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
@Test
public void testConvertNamenodeRegistration() {
  StorageInfo info = getStorageInfo(NodeType.NAME_NODE);
  NamenodeRegistration reg = new NamenodeRegistration("address:999",
      "http:1000", info, NamenodeRole.NAMENODE);
  NamenodeRegistrationProto regProto = PBHelper.convert(reg);
  NamenodeRegistration reg2 = PBHelper.convert(regProto);
  assertEquals(reg.getAddress(), reg2.getAddress());
  assertEquals(reg.getClusterID(), reg2.getClusterID());
  assertEquals(reg.getCTime(), reg2.getCTime());
  assertEquals(reg.getHttpAddress(), reg2.getHttpAddress());
  assertEquals(reg.getLayoutVersion(), reg2.getLayoutVersion());
  assertEquals(reg.getNamespaceID(), reg2.getNamespaceID());
  assertEquals(reg.getRegistrationID(), reg2.getRegistrationID());
  assertEquals(reg.getRole(), reg2.getRole());
  assertEquals(reg.getVersion(), reg2.getVersion());

}
 
Developer: naver, Project: hadoop, Lines: 19, Source: TestPBHelper.java


Example 3: checkResult

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
/**
 * Verify that the new current directory is the old previous.  
 * It is assumed that the server has recovered and rolled back.
 */
void checkResult(NodeType nodeType, String[] baseDirs) throws Exception {
  List<File> curDirs = Lists.newArrayList();
  for (String baseDir : baseDirs) {
    File curDir = new File(baseDir, "current");
    curDirs.add(curDir);
    switch (nodeType) {
    case NAME_NODE:
      FSImageTestUtil.assertReasonableNameCurrentDir(curDir);
      break;
    case DATA_NODE:
      assertEquals(
          UpgradeUtilities.checksumContents(nodeType, curDir, false),
          UpgradeUtilities.checksumMasterDataNodeContents());
      break;
    }
  }
  
  FSImageTestUtil.assertParallelFilesAreIdentical(
      curDirs, Collections.<String>emptySet());

  for (int i = 0; i < baseDirs.length; i++) {
    assertFalse(new File(baseDirs[i],"previous").isDirectory());
  }
}
 
Developer: naver, Project: hadoop, Lines: 29, Source: TestDFSRollback.java


Example 4: createBPRegistration

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
/**
 * Create a DatanodeRegistration for a specific block pool.
 * @param nsInfo the namespace info from the first part of the NN handshake
 */
DatanodeRegistration createBPRegistration(NamespaceInfo nsInfo) {
  StorageInfo storageInfo = storage.getBPStorage(nsInfo.getBlockPoolID());
  if (storageInfo == null) {
    // it's null in the case of SimulatedDataSet
    storageInfo = new StorageInfo(
        DataNodeLayoutVersion.CURRENT_LAYOUT_VERSION,
        nsInfo.getNamespaceID(), nsInfo.clusterID, nsInfo.getCTime(),
        NodeType.DATA_NODE);
  }

  DatanodeID dnId = new DatanodeID(
      streamingAddr.getAddress().getHostAddress(), hostName, 
      storage.getDatanodeUuid(), getXferPort(), getInfoPort(),
          infoSecurePort, getIpcPort());
  return new DatanodeRegistration(dnId, storageInfo, 
      new ExportedBlockKeys(), VersionInfo.getVersion());
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 22, Source: DataNode.java


Example 5: checkResult

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
/**
 * Verify that the new current directory is the old previous.  
 * It is assumed that the server has recovered and rolled back.
 */
void checkResult(NodeType nodeType, String[] baseDirs) throws Exception {
  List<File> curDirs = Lists.newArrayList();
  for (String baseDir : baseDirs) {
    File curDir = new File(baseDir, "current");
    curDirs.add(curDir);
    switch (nodeType) {
    case NAME_NODE:
      FSImageTestUtil.assertReasonableNameCurrentDir(curDir);
      break;
    case DATA_NODE:
      assertEquals(
          UpgradeUtilities.checksumContents(nodeType, curDir),
          UpgradeUtilities.checksumMasterDataNodeContents());
      break;
    }
  }
  
  FSImageTestUtil.assertParallelFilesAreIdentical(
      curDirs, Collections.<String>emptySet());

  for (int i = 0; i < baseDirs.length; i++) {
    assertFalse(new File(baseDirs[i],"previous").isDirectory());
  }
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 29, Source: TestDFSRollback.java


Example 6: NNStorage

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
/**
 * Construct the NNStorage.
 * @param conf Namenode configuration.
 * @param imageDirs Directories the image can be stored in.
 * @param editsDirs Directories the editlog can be stored in.
 * @throws IOException if any directories are inaccessible.
 */
public NNStorage(Configuration conf, 
                 Collection<URI> imageDirs, Collection<URI> editsDirs) 
    throws IOException {
  super(NodeType.NAME_NODE);

  storageDirs = new CopyOnWriteArrayList<StorageDirectory>();
  
  // this may modify the editsDirs, so copy before passing in
  setStorageDirectories(imageDirs, 
                        Lists.newArrayList(editsDirs),
                        FSNamesystem.getSharedEditsDirs(conf));
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: NNStorage.java


Example 7: NamespaceInfo

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
public NamespaceInfo(int nsID, String clusterID, String bpID,
    long cT, String buildVersion, String softwareVersion,
    long capabilities) {
  super(HdfsConstants.NAMENODE_LAYOUT_VERSION, nsID, clusterID, cT,
      NodeType.NAME_NODE);
  blockPoolID = bpID;
  this.buildVersion = buildVersion;
  this.softwareVersion = softwareVersion;
  this.capabilities = capabilities;
}
 
Developer: naver, Project: hadoop, Lines: 11, Source: NamespaceInfo.java


Example 8: BlockPoolSliceStorage

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
BlockPoolSliceStorage(int namespaceID, String bpID, long cTime,
    String clusterId) {
  super(NodeType.DATA_NODE);
  this.namespaceID = namespaceID;
  this.blockpoolID = bpID;
  this.cTime = cTime;
  this.clusterID = clusterId;
  storagesWithRollingUpgradeMarker = Collections.newSetFromMap(
      new ConcurrentHashMap<String, Boolean>());
  storagesWithoutRollingUpgradeMarker = Collections.newSetFromMap(
      new ConcurrentHashMap<String, Boolean>());
}
 
Developer: naver, Project: hadoop, Lines: 13, Source: BlockPoolSliceStorage.java


Example 9: StorageInfo

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
public StorageInfo(int layoutV, int nsID, String cid, long cT, NodeType type) {
  layoutVersion = layoutV;
  clusterID = cid;
  namespaceID = nsID;
  cTime = cT;
  storageType = type;
}
 
Developer: naver, Project: hadoop, Lines: 8, Source: StorageInfo.java


Example 10: checkStorageType

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
/** Validate and set storage type from {@link Properties}*/
protected void checkStorageType(Properties props, StorageDirectory sd)
    throws InconsistentFSStateException {
  if (storageType == null) { //don't care about storage type
    return;
  }
  NodeType type = NodeType.valueOf(getProperty(props, sd, "storageType"));
  if (!storageType.equals(type)) {
    throw new InconsistentFSStateException(sd.root,
        "Incompatible node types: storageType=" + storageType
        + " but StorageDirectory type=" + type);
  }
}
 
Developer: naver, Project: hadoop, Lines: 14, Source: StorageInfo.java


Example 11: JNStorage

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
/**
 * @param conf Configuration object
 * @param logDir the path to the directory in which data will be stored
 * @param errorReporter a callback to report errors
 * @throws IOException 
 */
protected JNStorage(Configuration conf, File logDir, StartupOption startOpt,
    StorageErrorReporter errorReporter) throws IOException {
  super(NodeType.JOURNAL_NODE);
  
  sd = new StorageDirectory(logDir);
  this.addStorageDir(sd);
  this.fjm = new FileJournalManager(conf, sd, errorReporter);

  analyzeAndRecoverStorage(startOpt);
}
 
Developer: naver, Project: hadoop, Lines: 17, Source: JNStorage.java


Example 12: doUpgrade

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
@Override
public DoUpgradeResponseProto doUpgrade(RpcController controller,
    DoUpgradeRequestProto request) throws ServiceException {
  StorageInfo si = PBHelper.convert(request.getSInfo(), NodeType.JOURNAL_NODE);
  try {
    impl.doUpgrade(convert(request.getJid()), si);
    return DoUpgradeResponseProto.getDefaultInstance();
  } catch (IOException e) {
    throw new ServiceException(e);
  }
}
 
Developer: naver, Project: hadoop, Lines: 12, Source: QJournalProtocolServerSideTranslatorPB.java


Example 13: canRollBack

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
@Override
public CanRollBackResponseProto canRollBack(RpcController controller,
    CanRollBackRequestProto request) throws ServiceException {
  try {
    StorageInfo si = PBHelper.convert(request.getStorage(), NodeType.JOURNAL_NODE);
    Boolean result = impl.canRollBack(convert(request.getJid()), si,
        PBHelper.convert(request.getPrevStorage(), NodeType.JOURNAL_NODE),
        request.getTargetLayoutVersion());
    return CanRollBackResponseProto.newBuilder()
        .setCanRollBack(result)
        .build();
  } catch (IOException e) {
    throw new ServiceException(e);
  }
}
 
Developer: naver, Project: hadoop, Lines: 16, Source: QJournalProtocolServerSideTranslatorPB.java


Example 14: testConvertStoragInfo

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
@Test
public void testConvertStoragInfo() {
  StorageInfo info = getStorageInfo(NodeType.NAME_NODE);
  StorageInfoProto infoProto = PBHelper.convert(info);
  StorageInfo info2 = PBHelper.convert(infoProto, NodeType.NAME_NODE);
  assertEquals(info.getClusterID(), info2.getClusterID());
  assertEquals(info.getCTime(), info2.getCTime());
  assertEquals(info.getLayoutVersion(), info2.getLayoutVersion());
  assertEquals(info.getNamespaceID(), info2.getNamespaceID());
}
 
Developer: naver, Project: hadoop, Lines: 11, Source: TestPBHelper.java


Example 15: testConvertCheckpointSignature

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
@Test
public void testConvertCheckpointSignature() {
  CheckpointSignature s = new CheckpointSignature(
      getStorageInfo(NodeType.NAME_NODE), "bpid", 100, 1);
  CheckpointSignatureProto sProto = PBHelper.convert(s);
  CheckpointSignature s1 = PBHelper.convert(sProto);
  assertEquals(s.getBlockpoolID(), s1.getBlockpoolID());
  assertEquals(s.getClusterID(), s1.getClusterID());
  assertEquals(s.getCTime(), s1.getCTime());
  assertEquals(s.getCurSegmentTxId(), s1.getCurSegmentTxId());
  assertEquals(s.getLayoutVersion(), s1.getLayoutVersion());
  assertEquals(s.getMostRecentCheckpointTxId(),
      s1.getMostRecentCheckpointTxId());
  assertEquals(s.getNamespaceID(), s1.getNamespaceID());
}
 
Developer: naver, Project: hadoop, Lines: 16, Source: TestPBHelper.java


Example 16: testConvertDatanodeRegistration

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
@Test
public void testConvertDatanodeRegistration() {
  DatanodeID dnId = DFSTestUtil.getLocalDatanodeID();
  BlockKey[] keys = new BlockKey[] { getBlockKey(2), getBlockKey(3) };
  ExportedBlockKeys expKeys = new ExportedBlockKeys(true, 9, 10,
      getBlockKey(1), keys);
  DatanodeRegistration reg = new DatanodeRegistration(dnId,
      new StorageInfo(NodeType.DATA_NODE), expKeys, "3.0.0");
  DatanodeRegistrationProto proto = PBHelper.convert(reg);
  DatanodeRegistration reg2 = PBHelper.convert(proto);
  compare(reg.getStorageInfo(), reg2.getStorageInfo());
  compare(reg.getExportedKeys(), reg2.getExportedKeys());
  compare(reg, reg2);
  assertEquals(reg.getSoftwareVersion(), reg2.getSoftwareVersion());
}
 
Developer: naver, Project: hadoop, Lines: 16, Source: TestPBHelper.java


Example 17: NNStorage

import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType; // import the required package/class
/**
 * Construct the NNStorage.
 * @param conf Namenode configuration.
 * @param imageDirs Directories the image can be stored in.
 * @param editsDirs Directories the editlog can be stored in.
 * @throws IOException if any directories are inaccessible.
 */
public NNStorage(Configuration conf, 
                 Collection<URI> imageDirs, Collection<URI> editsDirs) 
    throws IOException {
  super(NodeType.NAME_NODE);

  storageDirs = new CopyOnWriteArrayList<StorageDirectory>();
  
  // this may modify the editsDirs, so copy before passing in
  setStorageDirectories(imageDirs, 
                        Lists.newArrayList(editsDirs),
                        FSNamesystem.getSharedEditsDirs(conf));
  //Update NameDirSize metric value after NN start
  updateNameDirSize();
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 22, Source: NNStorage.java



Note: The org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType class examples in this article were compiled from source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by their respective authors, and copyright remains with those authors; consult each project's license before redistributing or reusing the code. Do not reproduce this article without permission.

