This page collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotFSImageFormat. If you are wondering what SnapshotFSImageFormat is for or how to use it, the curated class examples below may help.
The SnapshotFSImageFormat class belongs to the org.apache.hadoop.hdfs.server.namenode.snapshot package. Five code examples of the class are shown below, ordered by popularity by default.
Example 1: writeINodeFile
import org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotFSImageFormat; // import the required package/class
/**
 * Serialize an {@link INodeFile} node.
 * @param file The {@link INodeFile} to write
 * @param out The {@link DataOutput} where the fields are written
 * @param writeUnderConstruction Whether to write under-construction information
 */
public static void writeINodeFile(INodeFile file, DataOutput out,
    boolean writeUnderConstruction) throws IOException {
  writeLocalName(file, out);
  out.writeLong(file.getId());
  out.writeShort(file.getFileReplication());
  out.writeLong(file.getModificationTime());
  out.writeLong(file.getAccessTime());
  out.writeLong(file.getPreferredBlockSize());
  writeBlocks(file.getBlocks(), out);
  SnapshotFSImageFormat.saveFileDiffList(file, out);
  if (writeUnderConstruction) {
    if (file.isUnderConstruction()) {
      out.writeBoolean(true);
      final FileUnderConstructionFeature uc = file.getFileUnderConstructionFeature();
      writeString(uc.getClientName(), out);
      writeString(uc.getClientMachine(), out);
    } else {
      out.writeBoolean(false);
    }
  }
  writePermissionStatus(file, out);
}
Author: naver | Project: hadoop | Lines: 32 | Source: FSImageSerialization.java
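For readers new to DataOutput-based serialization, here is a small standalone sketch (not Hadoop code) that mirrors the field order Example 1 writes after the local name and block list: id, replication, modification time, access time, preferred block size, then the under-construction flag. The class name and all values are invented for illustration; it only shows how such a fixed-width record round-trips through DataOutputStream/DataInputStream.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Standalone sketch: round-trip a record shaped like the fixed-width part of the
// INodeFile record in Example 1. The real FSImage record also contains the local
// name, block list, file diff list and permission status, all omitted here.
public class INodeFileRecordSketch {
  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(buf);

    out.writeLong(16385L);             // inode id
    out.writeShort(3);                 // file replication
    out.writeLong(1_700_000_000L);     // modification time
    out.writeLong(1_700_000_100L);     // access time
    out.writeLong(128L * 1024 * 1024); // preferred block size (128 MB)
    out.writeBoolean(false);           // not under construction
    out.flush();

    // Read the record back in exactly the same order it was written.
    DataInputStream in = new DataInputStream(
        new ByteArrayInputStream(buf.toByteArray()));
    System.out.println("id=" + in.readLong()
        + " replication=" + in.readShort()
        + " mtime=" + in.readLong()
        + " atime=" + in.readLong()
        + " blockSize=" + in.readLong()
        + " underConstruction=" + in.readBoolean());
  }
}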
Example 2: writeINodeFile
import org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotFSImageFormat; // import the required package/class
/**
 * Serialize an {@link INodeFile} node.
 * @param file The {@link INodeFile} to write
 * @param out The {@link DataOutput} where the fields are written
 * @param writeUnderConstruction Whether to write under-construction information
 */
public static void writeINodeFile(INodeFile file, DataOutput out,
    boolean writeUnderConstruction) throws IOException {
  writeLocalName(file, out);
  out.writeLong(file.getId());
  out.writeShort(file.getFileReplication());
  out.writeLong(file.getModificationTime());
  out.writeLong(file.getAccessTime());
  out.writeLong(file.getPreferredBlockSize());
  writeBlocks(file.getBlocks(), out);
  SnapshotFSImageFormat.saveFileDiffList(file, out);
  if (writeUnderConstruction) {
    if (file.isUnderConstruction()) {
      out.writeBoolean(true);
      final FileUnderConstructionFeature uc = file.getFileUnderConstructionFeature();
      writeString(uc.getClientName(), out);
      writeString(uc.getClientMachine(), out);
    } else {
      out.writeBoolean(false);
    }
  }
  writePermissionStatus(file, out);
}
Author: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 32 | Source: FSImageSerialization.java
Example 3: writeINodeFile
import org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotFSImageFormat; // import the required package/class
/**
 * Serialize an {@link INodeFile} node.
 * @param file The {@link INodeFile} to write
 * @param out The {@link DataOutput} where the fields are written
 * @param writeUnderConstruction Whether to write under-construction information
 */
public static void writeINodeFile(INodeFile file, DataOutput out,
    boolean writeUnderConstruction) throws IOException {
  writeLocalName(file, out);
  out.writeLong(file.getId());
  out.writeShort(file.getFileReplication());
  out.writeLong(file.getModificationTime());
  out.writeLong(file.getAccessTime());
  out.writeLong(file.getPreferredBlockSize());
  writeBlocks(file.getBlocks(), out);
  SnapshotFSImageFormat.saveFileDiffList(file, out);
  if (writeUnderConstruction) {
    if (file instanceof INodeFileUnderConstruction) {
      out.writeBoolean(true);
      final INodeFileUnderConstruction uc = (INodeFileUnderConstruction) file;
      writeString(uc.getClientName(), out);
      writeString(uc.getClientMachine(), out);
    } else {
      out.writeBoolean(false);
    }
  }
  writePermissionStatus(file, out);
}
Author: ict-carch | Project: hadoop-plus | Lines: 32 | Source: FSImageSerialization.java
Example 4: loadDirectoryWithSnapshot
import org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotFSImageFormat; // import the required package/class
/**
* Load a directory when snapshot is supported.
* @param in The {@link DataInput} instance to read.
* @param counter Counter to increment for namenode startup progress
*/
private void loadDirectoryWithSnapshot(DataInput in, Counter counter)
    throws IOException {
  // Step 1. Identify the parent INode
  long inodeId = in.readLong();
  final INodeDirectory parent = this.namesystem.dir.getInode(inodeId)
      .asDirectory();
  // Check if the whole subtree has been saved (for reference nodes)
  boolean toLoadSubtree = referenceMap.toProcessSubtree(parent.getId());
  if (!toLoadSubtree) {
    return;
  }
  // Step 2. Load snapshots if parent is snapshottable
  int numSnapshots = in.readInt();
  if (numSnapshots >= 0) {
    // load snapshots and snapshotQuota
    SnapshotFSImageFormat.loadSnapshotList(parent, numSnapshots, in, this);
    if (parent.getDirectorySnapshottableFeature().getSnapshotQuota() > 0) {
      // add the directory to the snapshottable directory list in
      // SnapshotManager. Note that we only add root when its snapshot quota
      // is positive.
      this.namesystem.getSnapshotManager().addSnapshottable(parent);
    }
  }
  // Step 3. Load children nodes under parent
  loadChildren(parent, in, counter);
  // Step 4. load Directory Diff List
  SnapshotFSImageFormat.loadDirectoryDiffList(parent, in, this);
  // Recursively load sub-directories, including snapshot copies of deleted
  // directories
  int numSubTree = in.readInt();
  for (int i = 0; i < numSubTree; i++) {
    loadDirectoryWithSnapshot(in, counter);
  }
}
Author: naver | Project: hadoop | Lines: 45 | Source: FSImageFormat.java
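To make the recursion in Example 4 easier to follow, here is a standalone toy sketch (not the real FSImage format) of the same "read a count, then recurse" layout: each record carries an id, a snapshot count (negative meaning "not snapshottable"), and a sub-directory count followed by that many nested records. All names and values below are invented for illustration.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Toy sketch of a "count then recurse" directory layout, in the spirit of
// loadDirectoryWithSnapshot in Example 4. Not the real FSImage format.
public class DirectoryRecordSketch {

  // Minimal in-memory directory node used only by this sketch.
  static class Dir {
    final long id;
    final int snapshots;   // -1 means "not snapshottable"
    final Dir[] children;
    Dir(long id, int snapshots, Dir... children) {
      this.id = id;
      this.snapshots = snapshots;
      this.children = children;
    }
  }

  static void write(Dir dir, DataOutput out) throws IOException {
    out.writeLong(dir.id);              // analogue of Step 1: inode id
    out.writeInt(dir.snapshots);        // analogue of Step 2: snapshot count
    out.writeInt(dir.children.length);  // number of sub-directory records
    for (Dir child : dir.children) {
      write(child, out);                // recurse, like the loop after Step 4
    }
  }

  static void read(DataInput in, int depth) throws IOException {
    long id = in.readLong();
    int snapshots = in.readInt();       // >= 0 would mean "snapshottable"
    int numSubTree = in.readInt();
    System.out.printf("%sdir id=%d snapshots=%d subdirs=%d%n",
        "  ".repeat(depth), id, snapshots, numSubTree);
    for (int i = 0; i < numSubTree; i++) {
      read(in, depth + 1);              // recursion bottoms out at leaf directories
    }
  }

  public static void main(String[] args) throws IOException {
    Dir root = new Dir(1, 2, new Dir(2, -1), new Dir(3, -1, new Dir(4, 0)));
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    write(root, new DataOutputStream(buf));
    read(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())), 0);
  }
}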
Example 5: loadDirectoryWithSnapshot
import org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotFSImageFormat; // import the required package/class
/**
* Load a directory when snapshot is supported.
* @param in The {@link DataInput} instance to read.
* @param counter Counter to increment for namenode startup progress
*/
private void loadDirectoryWithSnapshot(DataInput in, Counter counter)
    throws IOException {
  // Step 1. Identify the parent INode
  long inodeId = in.readLong();
  final INodeDirectory parent = this.namesystem.dir.getInode(inodeId)
      .asDirectory();
  // Check if the whole subtree has been saved (for reference nodes)
  boolean toLoadSubtree = referenceMap.toProcessSubtree(parent.getId());
  if (!toLoadSubtree) {
    return;
  }
  // Step 2. Load snapshots if parent is snapshottable
  int numSnapshots = in.readInt();
  if (numSnapshots >= 0) {
    final INodeDirectorySnapshottable snapshottableParent
        = INodeDirectorySnapshottable.valueOf(parent, parent.getLocalName());
    // load snapshots and snapshotQuota
    SnapshotFSImageFormat.loadSnapshotList(snapshottableParent,
        numSnapshots, in, this);
    if (snapshottableParent.getSnapshotQuota() > 0) {
      // add the directory to the snapshottable directory list in
      // SnapshotManager. Note that we only add root when its snapshot quota
      // is positive.
      this.namesystem.getSnapshotManager().addSnapshottable(
          snapshottableParent);
    }
  }
  // Step 3. Load children nodes under parent
  loadChildren(parent, in, counter);
  // Step 4. load Directory Diff List
  SnapshotFSImageFormat.loadDirectoryDiffList(parent, in, this);
  // Recursively load sub-directories, including snapshot copies of deleted
  // directories
  int numSubTree = in.readInt();
  for (int i = 0; i < numSubTree; i++) {
    loadDirectoryWithSnapshot(in, counter);
  }
}
Author: ict-carch | Project: hadoop-plus | Lines: 49 | Source: FSImageFormat.java
Note: The org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotFSImageFormat examples on this page were collected from GitHub, MSDocs, and other source-code and documentation hosting platforms; the snippets come from community-contributed open-source projects. Copyright of the source code remains with the original authors. Consult each project's License before distributing or reusing the code, and do not republish without permission.