This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams. If you have been wondering what exactly ReplicaOutputStreams does and how to use it, the curated class examples below may help.
The ReplicaOutputStreams class belongs to the org.apache.hadoop.hdfs.server.datanode.fsdataset package. 16 code examples are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
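Before diving in, here is the recurring pattern that ties the examples together: obtain a replica that is being written (for example via createRbw), ask it for a ReplicaOutputStreams pair, write block data through getDataOut() and checksum bytes through getChecksumOut(), then close the pair. The sketch below distills that pattern from the test code in this article; it is illustrative only and assumes a 2.7-line hadoop-hdfs test artifact on the classpath (in older branches StorageType lives in org.apache.hadoop.hdfs rather than org.apache.hadoop.fs), and writeOneBlock is a hypothetical helper name.

import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipelineInterface;
import org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams;
import org.apache.hadoop.util.DataChecksum;

public class ReplicaOutputStreamsSketch {
  // Write a single byte into a new replica and finalize the block.
  static void writeOneBlock(SimulatedFSDataset fsdataset, String bpid,
      long blockId) throws IOException {
    ExtendedBlock b = new ExtendedBlock(bpid, blockId, 0, 0);
    // createRbw yields a replica-being-written; getReplica() exposes the
    // in-pipeline interface that hands out the stream pair.
    ReplicaInPipelineInterface replica =
        fsdataset.createRbw(StorageType.DEFAULT, b, false).getReplica();
    ReplicaOutputStreams out = replica.createStreams(true,
        DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
    try {
      OutputStream dataOut = out.getDataOut();  // block payload stream
      dataOut.write(1);                         // checksums go to getChecksumOut()
    } finally {
      out.close();  // closes both the data and the checksum stream
    }
    b.setNumBytes(1);
    fsdataset.finalizeBlock(b);  // the replica is read-only from here on
  }
}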
Example 1: adjustCrcChannelPosition
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
/**
* Sets the offset in the meta file so that the
* last checksum will be overwritten.
*/
@Override // FsDatasetSpi
public void adjustCrcChannelPosition(ExtendedBlock b, ReplicaOutputStreams streams,
int checksumSize) throws IOException {
FileOutputStream file = (FileOutputStream)streams.getChecksumOut();
FileChannel channel = file.getChannel();
long oldPos = channel.position();
long newPos = oldPos - checksumSize;
if (LOG.isDebugEnabled()) {
LOG.debug("Changing meta file offset of block " + b + " from " +
oldPos + " to " + newPos);
}
channel.position(newPos);
}
Developer: naver, Project: hadoop, Lines: 18, Source: FsDatasetImpl.java
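The rewind shown above is plain java.nio at heart: the checksum stream is backed by a FileOutputStream, so moving its FileChannel back by one checksum width makes the next write overwrite the previous checksum. Here is a self-contained illustration of just that channel arithmetic, with a hypothetical file name and the 4-byte CRC32 width assumed throughout this article; no Hadoop types are involved.

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class RewindLastChecksum {
  public static void main(String[] args) throws IOException {
    final int checksumSize = 4;  // CRC32 checksums are 4 bytes wide
    try (FileOutputStream file = new FileOutputStream("demo.meta")) {
      file.write(new byte[]{1, 2, 3, 4});       // pretend this is the last checksum
      FileChannel channel = file.getChannel();  // shares position with the stream
      long oldPos = channel.position();         // 4
      channel.position(oldPos - checksumSize);  // step back over the last checksum
      file.write(new byte[]{9, 9, 9, 9});       // overwrites bytes 0..3
    }
  }
}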
Example 2: createStreams
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
@Override
public synchronized ReplicaOutputStreams createStreams(boolean isCreate,
DataChecksum requestedChecksum) throws IOException {
if (finalized) {
throw new IOException("Trying to write to a finalized replica "
+ theBlock);
} else {
SimulatedOutputStream crcStream = new SimulatedOutputStream();
return new ReplicaOutputStreams(oStream, crcStream, requestedChecksum,
volume.isTransientStorage());
}
}
Developer: naver, Project: hadoop, Lines: 13, Source: SimulatedFSDataset.java
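Note the constructor variants across forks: this example from the 2.7-line codebase passes volume.isTransientStorage() as a fourth argument, while the hadoop-plus and hops forks (Examples 7 and 13 below) still use the three-argument ReplicaOutputStreams(dataOut, checksumOut, checksum) form. Match the signature to the Hadoop line you build against.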
Example 3: addSomeBlocks
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
int addSomeBlocks(SimulatedFSDataset fsdataset, int startingBlockId)
throws IOException {
int bytesAdded = 0;
for (int i = startingBlockId; i < startingBlockId+NUMBLOCKS; ++i) {
ExtendedBlock b = new ExtendedBlock(bpid, i, 0, 0);
// we pass the expected length as zero; fsdataset should use the size of
// the actual data written
ReplicaInPipelineInterface bInfo = fsdataset.createRbw(
StorageType.DEFAULT, b, false).getReplica();
ReplicaOutputStreams out = bInfo.createStreams(true,
DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
try {
OutputStream dataOut = out.getDataOut();
assertEquals(0, fsdataset.getLength(b));
for (int j=1; j <= blockIdToLen(i); ++j) {
dataOut.write(j);
assertEquals(j, bInfo.getBytesOnDisk()); // correct length even as we write
bytesAdded++;
}
} finally {
out.close();
}
b.setNumBytes(blockIdToLen(i));
fsdataset.finalizeBlock(b);
assertEquals(blockIdToLen(i), fsdataset.getLength(b));
}
return bytesAdded;
}
Developer: naver, Project: hadoop, Lines: 29, Source: TestSimulatedFSDataset.java
Example 4: testNotMatchedReplicaID
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
/**
* BlockRecoveryFI_11. A replica's recovery id does not match the new generation stamp (GS).
*
* @throws IOException in case of an error
*/
@Test
public void testNotMatchedReplicaID() throws IOException {
if(LOG.isDebugEnabled()) {
LOG.debug("Running " + GenericTestUtils.getMethodName());
}
ReplicaInPipelineInterface replicaInfo = dn.data.createRbw(
StorageType.DEFAULT, block, false).getReplica();
ReplicaOutputStreams streams = null;
try {
streams = replicaInfo.createStreams(true,
DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
streams.getChecksumOut().write('a');
dn.data.initReplicaRecovery(new RecoveringBlock(block, null, RECOVERY_ID+1));
try {
dn.syncBlock(rBlock, initBlockRecords(dn));
fail("Sync should fail");
} catch (IOException e) {
assertTrue(e.getMessage().startsWith("Cannot recover "));  // expected recovery-id mismatch
}
DatanodeProtocol namenode = dn.getActiveNamenodeForBP(POOL_ID);
verify(namenode, never()).commitBlockSynchronization(
any(ExtendedBlock.class), anyLong(), anyLong(), anyBoolean(),
anyBoolean(), any(DatanodeID[].class), any(String[].class));
} finally {
if (streams != null) {
streams.close();
}
}
}
Developer: naver, Project: hadoop, Lines: 33, Source: TestBlockRecovery.java
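What makes this test tick: initReplicaRecovery marks the replica as recovering under RECOVERY_ID + 1, so the subsequent syncBlock call, which still carries the old recovery id in rBlock, must fail with a message starting with "Cannot recover "; verify(namenode, never()) then confirms the DataNode never asked the NameNode to commitBlockSynchronization for the stale attempt.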
Example 5: addSomeBlocks
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
static int addSomeBlocks(SimulatedFSDataset fsdataset, long startingBlockId,
boolean negativeBlkID) throws IOException {
int bytesAdded = 0;
for (long i = startingBlockId; i < startingBlockId+NUMBLOCKS; ++i) {
long blkID = negativeBlkID ? i * -1 : i;
ExtendedBlock b = new ExtendedBlock(bpid, blkID, 0, 0);
// we pass the expected length as zero; fsdataset should use the size of
// the actual data written
ReplicaInPipelineInterface bInfo = fsdataset.createRbw(
StorageType.DEFAULT, b, false).getReplica();
ReplicaOutputStreams out = bInfo.createStreams(true,
DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
try {
OutputStream dataOut = out.getDataOut();
assertEquals(0, fsdataset.getLength(b));
for (int j=1; j <= blockIdToLen(i); ++j) {
dataOut.write(j);
assertEquals(j, bInfo.getBytesOnDisk()); // correct length even as we write
bytesAdded++;
}
} finally {
out.close();
}
b.setNumBytes(blockIdToLen(i));
fsdataset.finalizeBlock(b);
assertEquals(blockIdToLen(i), fsdataset.getLength(b));
}
return bytesAdded;
}
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 30, Source: TestSimulatedFSDataset.java
Example 6: testNotMatchedReplicaID
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
/**
* BlockRecoveryFI_11. A replica's recovery id does not match the new generation stamp (GS).
*
* @throws IOException in case of an error
*/
@Test
public void testNotMatchedReplicaID() throws IOException {
if(LOG.isDebugEnabled()) {
LOG.debug("Running " + GenericTestUtils.getMethodName());
}
ReplicaInPipelineInterface replicaInfo = dn.data.createRbw(
StorageType.DEFAULT, block, false).getReplica();
ReplicaOutputStreams streams = null;
try {
streams = replicaInfo.createStreams(true,
DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
streams.getChecksumOut().write('a');
dn.data.initReplicaRecovery(new RecoveringBlock(block, null, RECOVERY_ID+1));
BlockRecoveryWorker.RecoveryTaskContiguous recoveryTask =
recoveryWorker.new RecoveryTaskContiguous(rBlock);
try {
recoveryTask.syncBlock(initBlockRecords(dn));
fail("Sync should fail");
} catch (IOException e) {
assertTrue(e.getMessage().startsWith("Cannot recover "));  // expected recovery-id mismatch
}
DatanodeProtocol namenode = recoveryWorker.getActiveNamenodeForBP(POOL_ID);
verify(namenode, never()).commitBlockSynchronization(
any(ExtendedBlock.class), anyLong(), anyLong(), anyBoolean(),
anyBoolean(), any(DatanodeID[].class), any(String[].class));
} finally {
if (streams != null) {
streams.close();
}
}
}
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 35, Source: TestBlockRecovery.java
Example 7: createStreams
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
@Override
public synchronized ReplicaOutputStreams createStreams(boolean isCreate,
DataChecksum requestedChecksum) throws IOException {
if (finalized) {
throw new IOException("Trying to write to a finalized replica "
+ theBlock);
} else {
SimulatedOutputStream crcStream = new SimulatedOutputStream();
return new ReplicaOutputStreams(oStream, crcStream, requestedChecksum);
}
}
Developer: ict-carch, Project: hadoop-plus, Lines: 12, Source: SimulatedFSDataset.java
Example 8: addSomeBlocks
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
int addSomeBlocks(SimulatedFSDataset fsdataset, int startingBlockId)
throws IOException {
int bytesAdded = 0;
for (int i = startingBlockId; i < startingBlockId+NUMBLOCKS; ++i) {
ExtendedBlock b = new ExtendedBlock(bpid, i, 0, 0);
// we pass the expected length as zero; fsdataset should use the size of
// the actual data written
ReplicaInPipelineInterface bInfo = fsdataset.createRbw(b);
ReplicaOutputStreams out = bInfo.createStreams(true,
DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
try {
OutputStream dataOut = out.getDataOut();
assertEquals(0, fsdataset.getLength(b));
for (int j=1; j <= blockIdToLen(i); ++j) {
dataOut.write(j);
assertEquals(j, bInfo.getBytesOnDisk()); // correct length even as we write
bytesAdded++;
}
} finally {
out.close();
}
b.setNumBytes(blockIdToLen(i));
fsdataset.finalizeBlock(b);
assertEquals(blockIdToLen(i), fsdataset.getLength(b));
}
return bytesAdded;
}
Developer: ict-carch, Project: hadoop-plus, Lines: 28, Source: TestSimulatedFSDataset.java
Example 9: testNotMatchedReplicaID
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
/**
* BlockRecoveryFI_11. A replica's recovery id does not match the new generation stamp (GS).
*
* @throws IOException in case of an error
*/
@Test
public void testNotMatchedReplicaID() throws IOException {
if(LOG.isDebugEnabled()) {
LOG.debug("Running " + GenericTestUtils.getMethodName());
}
ReplicaInPipelineInterface replicaInfo = dn.data.createRbw(block);
ReplicaOutputStreams streams = null;
try {
streams = replicaInfo.createStreams(true,
DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
streams.getChecksumOut().write('a');
dn.data.initReplicaRecovery(new RecoveringBlock(block, null, RECOVERY_ID+1));
try {
dn.syncBlock(rBlock, initBlockRecords(dn));
fail("Sync should fail");
} catch (IOException e) {
assertTrue(e.getMessage().startsWith("Cannot recover "));  // expected recovery-id mismatch
}
DatanodeProtocol namenode = dn.getActiveNamenodeForBP(POOL_ID);
verify(namenode, never()).commitBlockSynchronization(
any(ExtendedBlock.class), anyLong(), anyLong(), anyBoolean(),
anyBoolean(), any(DatanodeID[].class), any(String[].class));
} finally {
if (streams != null) {
streams.close();
}
}
}
Developer: ict-carch, Project: hadoop-plus, Lines: 32, Source: TestBlockRecovery.java
Example 10: addSomeBlocks
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
int addSomeBlocks(SimulatedFSDataset fsdataset, int startingBlockId)
throws IOException {
int bytesAdded = 0;
for (int i = startingBlockId; i < startingBlockId+NUMBLOCKS; ++i) {
ExtendedBlock b = new ExtendedBlock(bpid, i, 0, 0);
// we pass the expected length as zero; fsdataset should use the size of
// the actual data written
ReplicaInPipelineInterface bInfo = fsdataset.createRbw(
StorageType.DEFAULT, b, false);
ReplicaOutputStreams out = bInfo.createStreams(true,
DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
try {
OutputStream dataOut = out.getDataOut();
assertEquals(0, fsdataset.getLength(b));
for (int j=1; j <= blockIdToLen(i); ++j) {
dataOut.write(j);
assertEquals(j, bInfo.getBytesOnDisk()); // correct length even as we write
bytesAdded++;
}
} finally {
out.close();
}
b.setNumBytes(blockIdToLen(i));
fsdataset.finalizeBlock(b);
assertEquals(blockIdToLen(i), fsdataset.getLength(b));
}
return bytesAdded;
}
Developer: yncxcw, Project: FlexMap, Lines: 29, Source: TestSimulatedFSDataset.java
Example 11: testNotMatchedReplicaID
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
/**
* BlockRecoveryFI_11. A replica's recovery id does not match the new generation stamp (GS).
*
* @throws IOException in case of an error
*/
@Test
public void testNotMatchedReplicaID() throws IOException {
if(LOG.isDebugEnabled()) {
LOG.debug("Running " + GenericTestUtils.getMethodName());
}
ReplicaInPipelineInterface replicaInfo = dn.data.createRbw(
StorageType.DEFAULT, block, false);
ReplicaOutputStreams streams = null;
try {
streams = replicaInfo.createStreams(true,
DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
streams.getChecksumOut().write('a');
dn.data.initReplicaRecovery(new RecoveringBlock(block, null, RECOVERY_ID+1));
try {
dn.syncBlock(rBlock, initBlockRecords(dn));
fail("Sync should fail");
} catch (IOException e) {
assertTrue(e.getMessage().startsWith("Cannot recover "));  // expected recovery-id mismatch
}
DatanodeProtocol namenode = dn.getActiveNamenodeForBP(POOL_ID);
verify(namenode, never()).commitBlockSynchronization(
any(ExtendedBlock.class), anyLong(), anyLong(), anyBoolean(),
anyBoolean(), any(DatanodeID[].class), any(String[].class));
} finally {
if (streams != null) {
streams.close();
}
}
}
Developer: yncxcw, Project: FlexMap, Lines: 33, Source: TestBlockRecovery.java
Example 12: adjustCrcChannelPosition
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
/**
* Sets the offset in the meta file so that the
* last checksum will be overwritten.
*/
@Override // FsDatasetSpi
public void adjustCrcChannelPosition(ExtendedBlock b,
ReplicaOutputStreams streams, int checksumSize) throws IOException {
FileOutputStream file = (FileOutputStream) streams.getChecksumOut();
FileChannel channel = file.getChannel();
long oldPos = channel.position();
long newPos = oldPos - checksumSize;
if (LOG.isDebugEnabled()) {
LOG.debug("Changing meta file offset of block " + b + " from " +
oldPos + " to " + newPos);
}
channel.position(newPos);
}
Developer: hopshadoop, Project: hops, Lines: 18, Source: FsDatasetImpl.java
Example 13: createStreams
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
@Override
public synchronized ReplicaOutputStreams createStreams(boolean isCreate,
DataChecksum requestedChecksum) throws IOException {
if (finalized) {
throw new IOException(
"Trying to write to a finalized replica " + theBlock);
} else {
SimulatedOutputStream crcStream = new SimulatedOutputStream();
return new ReplicaOutputStreams(oStream, crcStream, requestedChecksum);
}
}
Developer: hopshadoop, Project: hops, Lines: 12, Source: SimulatedFSDataset.java
Example 14: addSomeBlocks
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
int addSomeBlocks(SimulatedFSDataset fsdataset, int startingBlockId)
throws IOException {
int bytesAdded = 0;
for (int i = startingBlockId; i < startingBlockId + NUMBLOCKS; ++i) {
ExtendedBlock b = new ExtendedBlock(bpid, i, 0, 0);
// we pass the expected length as zero; fsdataset should use the size of
// the actual data written
ReplicaInPipelineInterface bInfo = fsdataset.createRbw(b);
ReplicaOutputStreams out = bInfo.createStreams(true,
DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
try {
OutputStream dataOut = out.getDataOut();
assertEquals(0, fsdataset.getLength(b));
for (int j = 1; j <= blockIdToLen(i); ++j) {
dataOut.write(j);
assertEquals(j,
bInfo.getBytesOnDisk()); // correct length even as we write
bytesAdded++;
}
} finally {
out.close();
}
b.setNumBytes(blockIdToLen(i));
fsdataset.finalizeBlock(b);
assertEquals(blockIdToLen(i), fsdataset.getLength(b));
}
return bytesAdded;
}
Developer: hopshadoop, Project: hops, Lines: 29, Source: TestSimulatedFSDataset.java
Example 15: testNotMatchedReplicaID
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
/**
* BlockRecoveryFI_11. A replica's recovery id does not match the new generation stamp (GS).
*
* @throws IOException in case of an error
*/
@Test
public void testNotMatchedReplicaID() throws IOException {
if (LOG.isDebugEnabled()) {
LOG.debug("Running " + GenericTestUtils.getMethodName());
}
ReplicaInPipelineInterface replicaInfo = dn.data.createRbw(block);
ReplicaOutputStreams streams = null;
try {
streams = replicaInfo.createStreams(true,
DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 512));
streams.getChecksumOut().write('a');
dn.data.initReplicaRecovery(
new RecoveringBlock(block, null, RECOVERY_ID + 1));
try {
dn.syncBlock(rBlock, initBlockRecords(dn));
fail("Sync should fail");
} catch (IOException e) {
assertTrue(e.getMessage().startsWith("Cannot recover "));  // expected recovery-id mismatch
}
DatanodeProtocol namenode = dn.getActiveNamenodeForBP(POOL_ID);
verify(namenode, never())
.commitBlockSynchronization(any(ExtendedBlock.class), anyLong(),
anyLong(), anyBoolean(), anyBoolean(), any(DatanodeID[].class),
any(String[].class));
} finally {
if (streams != null) {
streams.close();
}
}
}
Developer: hopshadoop, Project: hops, Lines: 35, Source: TestBlockRecovery.java
Example 16: adjustCrcChannelPosition
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams; // import the required package/class
@Override // FsDatasetSpi
public synchronized void adjustCrcChannelPosition(ExtendedBlock b,
ReplicaOutputStreams stream,
int checksumSize)
throws IOException {
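// no-op: SimulatedFSDataset keeps block data in memory and maintains no
// on-disk meta file, so there is no checksum channel position to adjust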
}
Developer: naver, Project: hadoop, Lines: 7, Source: SimulatedFSDataset.java
Note: the org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams class examples in this article were compiled from source-code and documentation platforms such as GitHub and MSDocs, and the snippets were selected from open-source projects contributed by their respective developers. Copyright in the source code remains with the original authors; consult the corresponding project's License before distributing or reusing it. Do not repost without permission.