This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.protocol.JournalInfo. If you are wondering what exactly the JournalInfo class does, how to use it, or what real-world usages look like, the curated examples below should help.
The JournalInfo class belongs to the org.apache.hadoop.hdfs.server.protocol package. Nine code examples of the class are shown below, sorted by popularity by default.
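Before the examples, a minimal self-contained sketch (not taken from any of the projects below) may help show what the class looks like to a caller. The three getters are exactly the ones exercised in the examples that follow; the constructor signature (layout version, cluster ID, namespace ID) and the concrete values are assumptions for illustration only.
import org.apache.hadoop.hdfs.server.protocol.JournalInfo;

public class JournalInfoSketch {
  public static void main(String[] args) {
    // Assumed constructor: (layoutVersion, clusterId, namespaceId); the values
    // here are placeholders, not real cluster identifiers.
    JournalInfo info = new JournalInfo(-63, "CID-example-cluster", 1234567);

    // These getters are the ones used by the verifyJournalRequest example below.
    System.out.println("layoutVersion = " + info.getLayoutVersion());
    System.out.println("clusterId     = " + info.getClusterId());
    System.out.println("namespaceId   = " + info.getNamespaceId());
  }
}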
Example 1: verifyJournalRequest
import org.apache.hadoop.hdfs.server.protocol.JournalInfo; // import the required package/class
/**
* Verifies a journal request
*/
private void verifyJournalRequest(JournalInfo journalInfo)
throws IOException {
verifyLayoutVersion(journalInfo.getLayoutVersion());
String errorMsg = null;
int expectedNamespaceID = namesystem.getNamespaceInfo().getNamespaceID();
if (journalInfo.getNamespaceId() != expectedNamespaceID) {
errorMsg = "Invalid namespaceID in journal request - expected " + expectedNamespaceID
+ " actual " + journalInfo.getNamespaceId();
LOG.warn(errorMsg);
throw new UnregisteredNodeException(journalInfo);
}
if (!journalInfo.getClusterId().equals(namesystem.getClusterId())) {
errorMsg = "Invalid clusterId in journal request - expected "
+ journalInfo.getClusterId() + " actual " + namesystem.getClusterId();
LOG.warn(errorMsg);
throw new UnregisteredNodeException(journalInfo);
}
}
Developer ID: naver, Project: hadoop, Lines of code: 22, Source file: BackupNode.java
Example 2: EditLogBackupOutputStream
import org.apache.hadoop.hdfs.server.protocol.JournalInfo; // import the required package/class
EditLogBackupOutputStream(NamenodeRegistration bnReg, // backup node
JournalInfo journalInfo) // active name-node
throws IOException {
super();
this.bnRegistration = bnReg;
this.journalInfo = journalInfo;
InetSocketAddress bnAddress =
NetUtils.createSocketAddr(bnRegistration.getAddress());
try {
this.backupNode = NameNodeProxies.createNonHAProxy(new HdfsConfiguration(),
bnAddress, JournalProtocol.class, UserGroupInformation.getCurrentUser(),
true).getProxy();
} catch(IOException e) {
Storage.LOG.error("Error connecting to: " + bnAddress, e);
throw e;
}
this.doubleBuf = new EditsDoubleBuffer(DEFAULT_BUFFER_SIZE);
this.out = new DataOutputBuffer(DEFAULT_BUFFER_SIZE);
}
Developer ID: naver, Project: hadoop, Lines of code: 20, Source file: EditLogBackupOutputStream.java
Example 3: journal
import org.apache.hadoop.hdfs.server.protocol.JournalInfo; // import the required package/class
@Override
public void journal(JournalInfo journalInfo, long epoch, long firstTxnId,
int numTxns, byte[] records) throws IOException {
JournalRequestProto req = JournalRequestProto.newBuilder()
.setJournalInfo(PBHelper.convert(journalInfo))
.setEpoch(epoch)
.setFirstTxnId(firstTxnId)
.setNumTxns(numTxns)
.setRecords(PBHelper.getByteString(records))
.build();
try {
rpcProxy.journal(NULL_CONTROLLER, req);
} catch (ServiceException e) {
throw ProtobufHelper.getRemoteException(e);
}
}
Developer ID: naver, Project: hadoop, Lines of code: 17, Source file: JournalProtocolTranslatorPB.java
Example 4: journal
import org.apache.hadoop.hdfs.server.protocol.JournalInfo; // import the required package/class
@Override
public void journal(JournalInfo journalInfo, long epoch, long firstTxnId,
int numTxns, byte[] records) throws IOException {
JournalRequestProto req = JournalRequestProto.newBuilder()
.setJournalInfo(PBHelper.convert(journalInfo))
.setEpoch(epoch)
.setFirstTxnId(firstTxnId)
.setNumTxns(numTxns)
.setRecords(PBHelperClient.getByteString(records))
.build();
try {
rpcProxy.journal(NULL_CONTROLLER, req);
} catch (ServiceException e) {
throw ProtobufHelper.getRemoteException(e);
}
}
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines of code: 17, Source file: JournalProtocolTranslatorPB.java
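Examples 3 and 4 are the client-side (protobuf translator) implementations of journal() and appear to differ only in whether the byte[] payload is converted with PBHelper or PBHelperClient, reflecting the different Hadoop source trees they come from. As a rough, hypothetical illustration of how such a translator ends up being driven, the sketch below builds a JournalInfo and pushes one batch of serialized edits through a JournalProtocol proxy; the constructor arguments, the helper method, and the placeholder values are assumptions, not code from the article.
import java.io.IOException;

import org.apache.hadoop.hdfs.server.protocol.JournalInfo;
import org.apache.hadoop.hdfs.server.protocol.JournalProtocol;

public class JournalCallSketch {
  /**
   * Hypothetical helper: sends one batch of already-serialized edit records.
   * The proxy would typically be obtained the way Example 2 does, via
   * NameNodeProxies.createNonHAProxy(..., JournalProtocol.class, ...).
   */
  static void sendEdits(JournalProtocol proxy, byte[] serializedEdits)
      throws IOException {
    // Assumed constructor: (layoutVersion, clusterId, namespaceId); real callers
    // would take these from the NameNode's storage/namespace info.
    JournalInfo journalInfo = new JournalInfo(-63, "CID-example-cluster", 1234567);
    long epoch = 1L;       // fencing epoch of the writer
    long firstTxnId = 1L;  // transaction id of the first edit in this batch
    int numTxns = 1;       // number of transactions encoded in serializedEdits
    proxy.journal(journalInfo, epoch, firstTxnId, numTxns, serializedEdits);
  }
}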
Example 5: startLogSegment
import org.apache.hadoop.hdfs.server.protocol.JournalInfo; // import the required package/class
@Override
public void startLogSegment(JournalInfo journalInfo, long epoch,
long txid) throws IOException {
namesystem.checkOperation(OperationCategory.JOURNAL);
verifyJournalRequest(journalInfo);
getBNImage().namenodeStartedLogSegment(txid);
}
Developer ID: naver, Project: hadoop, Lines of code: 8, Source file: BackupNode.java
Example 6: journal
import org.apache.hadoop.hdfs.server.protocol.JournalInfo; // import the required package/class
@Override
public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
int numTxns, byte[] records) throws IOException {
namesystem.checkOperation(OperationCategory.JOURNAL);
verifyJournalRequest(journalInfo);
getBNImage().journal(firstTxId, numTxns, records);
}
Developer ID: naver, Project: hadoop, Lines of code: 8, Source file: BackupNode.java
Example 7: fence
import org.apache.hadoop.hdfs.server.protocol.JournalInfo; // import the required package/class
@Override
public FenceResponse fence(JournalInfo journalInfo, long epoch,
String fencerInfo) throws IOException {
LOG.info("Fenced by " + fencerInfo + " with epoch " + epoch);
throw new UnsupportedOperationException(
"BackupNode does not support fence");
}
Developer ID: naver, Project: hadoop, Lines of code: 8, Source file: BackupNode.java
Example 8: startLogSegment
import org.apache.hadoop.hdfs.server.protocol.JournalInfo; // import the required package/class
@Override
public void startLogSegment(JournalInfo journalInfo, long epoch, long txid)
throws IOException {
StartLogSegmentRequestProto req = StartLogSegmentRequestProto.newBuilder()
.setJournalInfo(PBHelper.convert(journalInfo))
.setEpoch(epoch)
.setTxid(txid)
.build();
try {
rpcProxy.startLogSegment(NULL_CONTROLLER, req);
} catch (ServiceException e) {
throw ProtobufHelper.getRemoteException(e);
}
}
Developer ID: naver, Project: hadoop, Lines of code: 15, Source file: JournalProtocolTranslatorPB.java
Example 9: fence
import org.apache.hadoop.hdfs.server.protocol.JournalInfo; // import the required package/class
@Override
public FenceResponse fence(JournalInfo journalInfo, long epoch,
String fencerInfo) throws IOException {
FenceRequestProto req = FenceRequestProto.newBuilder().setEpoch(epoch)
.setJournalInfo(PBHelper.convert(journalInfo)).build();
try {
FenceResponseProto resp = rpcProxy.fence(NULL_CONTROLLER, req);
return new FenceResponse(resp.getPreviousEpoch(),
resp.getLastTransactionId(), resp.getInSync());
} catch (ServiceException e) {
throw ProtobufHelper.getRemoteException(e);
}
}
Developer ID: naver, Project: hadoop, Lines of code: 14, Source file: JournalProtocolTranslatorPB.java
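Example 9 is the client-side counterpart of Example 7: it sends the fence request over RPC and wraps the protobuf reply in a FenceResponse. The hypothetical sketch below shows what a caller of this translator might do with the result; the FenceResponse accessor names are assumptions inferred from the constructor arguments used in Example 9, and the placeholder values are illustrative. Note that the BackupNode implementation in Example 7 rejects fencing, so a call like this only makes sense against a journal service that supports it.
import java.io.IOException;

import org.apache.hadoop.hdfs.server.protocol.FenceResponse;
import org.apache.hadoop.hdfs.server.protocol.JournalInfo;
import org.apache.hadoop.hdfs.server.protocol.JournalProtocol;

public class FenceCallSketch {
  static void fenceOldWriter(JournalProtocol proxy) throws IOException {
    // Assumed constructor: (layoutVersion, clusterId, namespaceId).
    JournalInfo journalInfo = new JournalInfo(-63, "CID-example-cluster", 1234567);
    long newEpoch = 2L;                       // epoch claimed by the new writer
    String fencerInfo = "nn-standby.example"; // free-form identity of the fencer

    FenceResponse resp = proxy.fence(journalInfo, newEpoch, fencerInfo);

    // Accessor names below are assumptions; they mirror the values the
    // translator passes to the FenceResponse constructor in Example 9.
    System.out.println("previous epoch = " + resp.getPreviousEpoch());
    System.out.println("last txid      = " + resp.getLastTransactionId());
    System.out.println("in sync        = " + resp.isInSync());
  }
}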
Note: the org.apache.hadoop.hdfs.server.protocol.JournalInfo examples in this article were collected from source-code and documentation hosting platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective authors; copyright remains with the original authors, and any distribution or use must follow the corresponding project's license. Do not republish without permission.