This article collects typical usage examples of the Java class org.apache.hadoop.hbase.KeyValue.KeyComparator. If you are wondering what the KeyComparator class does, how to use it, or where to find real-world examples of it, the curated code samples below may help.
KeyComparator is a nested class of org.apache.hadoop.hbase.KeyValue. Twelve code examples of the class are shown below, sorted by popularity by default.
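Before diving into the examples, here is a minimal standalone sketch of what the class is for. It is not taken from the projects below; it assumes the 0.94-era API in which KeyValue.KeyComparator has a public no-argument constructor and KeyValue.createFirstOnRow(...).getKey() produces a serialized key the comparator can parse.

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.KeyValue.KeyComparator;
import org.apache.hadoop.hbase.util.Bytes;

public class KeyComparatorQuickStart {
  public static void main(String[] args) {
    // KeyComparator orders serialized HBase keys (row, family, qualifier,
    // timestamp, type), so we build proper keys rather than raw byte arrays.
    KeyComparator cmp = new KeyComparator();
    byte[] keyA = KeyValue.createFirstOnRow(Bytes.toBytes("apple")).getKey();
    byte[] keyB = KeyValue.createFirstOnRow(Bytes.toBytes("banana")).getKey();
    // Negative result: the key for row "apple" sorts before the key for "banana".
    System.out.println(cmp.compare(keyA, keyB));
  }
}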
Example 1: HFileWriterV2
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
/** Constructor that takes a path, creates and closes the output stream. */
public HFileWriterV2(Configuration conf, CacheConfig cacheConf,
    FileSystem fs, Path path, FSDataOutputStream ostream, int blockSize,
    Compression.Algorithm compressAlgo, HFileDataBlockEncoder blockEncoder,
    final KeyComparator comparator, final ChecksumType checksumType,
    final int bytesPerChecksum, boolean includeMVCCReadpoint) throws IOException {
  super(cacheConf,
      ostream == null ? createOutputStream(conf, fs, path) : ostream,
      path, blockSize, compressAlgo, blockEncoder, comparator);
  SchemaMetrics.configureGlobally(conf);
  this.checksumType = checksumType;
  this.bytesPerChecksum = bytesPerChecksum;
  // Whether to persist the memstore timestamp (MVCC read point) with each KeyValue.
  this.includeMemstoreTS = includeMVCCReadpoint;
  if (!conf.getBoolean(HConstants.HBASE_CHECKSUM_VERIFICATION, false)) {
    // Write pre-checksum (minor version 0) files when checksum verification is disabled.
    this.minorVersion = 0;
  }
  finishInit(conf);
}
Author: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 19 | Source file: HFileWriterV2.java
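In practice a writer like the one above is usually obtained through HFile.getWriterFactory rather than by calling the constructor directly; the factory chooses the V1/V2 implementation and delegates to createWriter, as in Examples 5 and 6 below. The following is a rough sketch under that assumption: the output path and the key/value contents are made up for illustration, and the exact set of withXxx() builder methods depends on the 0.94.x release in use.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.util.Bytes;

public class HFileWriterFactorySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    CacheConfig cacheConf = new CacheConfig(conf);
    FileSystem fs = FileSystem.get(conf);
    // Hypothetical output location, only for illustration.
    FSDataOutputStream out = fs.create(new Path("/tmp/keycomparator-demo.hfile"));
    HFile.Writer writer = HFile.getWriterFactory(conf, cacheConf)
        .withOutputStream(out)
        .withBlockSize(64 * 1024)
        .withComparator(new KeyValue.KeyComparator())
        .create();
    // Keys must arrive in the comparator's order; createFirstOnRow builds
    // serialized keys that KeyComparator can parse.
    writer.append(KeyValue.createFirstOnRow(Bytes.toBytes("row1")).getKey(), Bytes.toBytes("v1"));
    writer.append(KeyValue.createFirstOnRow(Bytes.toBytes("row2")).getKey(), Bytes.toBytes("v2"));
    writer.close();
    out.close();
  }
}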
Example 2: testComparator
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
public void testComparator() throws IOException {
  if (cacheConf == null) cacheConf = new CacheConfig(conf);
  Path mFile = new Path(ROOT_DIR, "meta.tfile");
  FSDataOutputStream fout = createFSOutput(mFile);
  // A comparator that inverts the usual lexicographic byte order.
  KeyComparator comparator = new KeyComparator() {
    @Override
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2,
        int l2) {
      return -Bytes.compareTo(b1, s1, l1, b2, s2, l2);
    }
    @Override
    public int compare(byte[] o1, byte[] o2) {
      return compare(o1, 0, o1.length, o2, 0, o2.length);
    }
  };
  Writer writer = HFile.getWriterFactory(conf, cacheConf)
      .withOutputStream(fout)
      .withBlockSize(minBlockSize)
      .withComparator(comparator)
      .create();
  // HFile requires keys in the comparator's sort order, so "3", "2", "1"
  // is ascending under the reversed comparator.
  writer.append("3".getBytes(), "0".getBytes());
  writer.append("2".getBytes(), "0".getBytes());
  writer.append("1".getBytes(), "0".getBytes());
  writer.close();
}
Author: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 26 | Source file: TestHFile.java
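As a follow-up, the file written above could be scanned back to confirm the ordering. This fragment is a sketch that continues the test method: it reuses mFile and cacheConf from Example 2, assumes a FileSystem field fs is available in the test class, and assumes the 0.94-era reader API (HFile.createReader, loadFileInfo, getScanner). Under the reversed comparator the keys come back as "3", "2", "1".

import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;

fout.close(); // the writer does not close a caller-supplied output stream
HFile.Reader reader = HFile.createReader(fs, mFile, cacheConf);
reader.loadFileInfo();
HFileScanner scanner = reader.getScanner(false, false); // no block cache, no positional read
if (scanner.seekTo()) {
  do {
    System.out.println(scanner.getKeyString()); // prints 3, 2, 1
  } while (scanner.next());
}
reader.close();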
Example 3: AbstractHFileWriter
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
public AbstractHFileWriter(CacheConfig cacheConf,
    FSDataOutputStream outputStream, Path path, int blockSize,
    Compression.Algorithm compressAlgo,
    HFileDataBlockEncoder dataBlockEncoder,
    KeyComparator comparator) {
  super(null, path);
  this.outputStream = outputStream;
  this.path = path;
  this.name = path != null ? path.getName() : outputStream.toString();
  this.blockSize = blockSize;
  this.compressAlgo = compressAlgo == null
      ? HFile.DEFAULT_COMPRESSION_ALGORITHM : compressAlgo;
  this.blockEncoder = dataBlockEncoder != null
      ? dataBlockEncoder : NoOpDataBlockEncoder.INSTANCE;
  // Fall back to raw byte-order comparison when no comparator is supplied.
  this.comparator = comparator != null ? comparator
      : Bytes.BYTES_RAWCOMPARATOR;
  closeOutputStream = path != null;
  this.cacheConf = cacheConf;
}
Author: wanhao | Project: IRIndex | Lines: 21 | Source file: AbstractHFileWriter.java
Example 4: AbstractHFileWriter
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
public AbstractHFileWriter(CacheConfig cacheConf,
    FSDataOutputStream outputStream, Path path, int blockSize,
    Compression.Algorithm compressAlgo,
    HFileDataBlockEncoder dataBlockEncoder,
    KeyComparator comparator) {
  this.outputStream = outputStream;
  this.path = path;
  this.name = path != null ? path.getName() : outputStream.toString();
  this.blockSize = blockSize;
  this.compressAlgo = compressAlgo == null
      ? HFile.DEFAULT_COMPRESSION_ALGORITHM : compressAlgo;
  this.blockEncoder = dataBlockEncoder != null
      ? dataBlockEncoder : NoOpDataBlockEncoder.INSTANCE;
  // Fall back to raw byte-order comparison when no comparator is supplied.
  this.comparator = comparator != null ? comparator
      : Bytes.BYTES_RAWCOMPARATOR;
  closeOutputStream = path != null;
  this.cacheConf = cacheConf;
}
Author: daidong | Project: DominoHBase | Lines: 20 | Source file: AbstractHFileWriter.java
Example 5: createWriter
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
@Override
public Writer createWriter(FileSystem fs, Path path,
    FSDataOutputStream ostream, int blockSize,
    Compression.Algorithm compress, HFileDataBlockEncoder blockEncoder,
    final KeyComparator comparator, final ChecksumType checksumType,
    final int bytesPerChecksum, boolean includeMVCCReadpoint) throws IOException {
  // Delegate to the version-2 writer, passing the checksum settings through.
  return new HFileWriterV2(conf, cacheConf, fs, path, ostream, blockSize, compress,
      blockEncoder, comparator, checksumType, bytesPerChecksum, includeMVCCReadpoint);
}
Author: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 10 | Source file: HFileWriterV2.java
Example 6: createWriter
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
@Override
public Writer createWriter(FileSystem fs, Path path,
    FSDataOutputStream ostream, int blockSize,
    Algorithm compressAlgo, HFileDataBlockEncoder dataBlockEncoder,
    KeyComparator comparator, final ChecksumType checksumType,
    final int bytesPerChecksum, boolean includeMVCCReadpoint) throws IOException {
  // Version 1 does not implement checksums, so checksumType and
  // bytesPerChecksum are ignored here.
  return new HFileWriterV1(conf, cacheConf, fs, path, ostream, blockSize,
      compressAlgo, dataBlockEncoder, comparator);
}
Author: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 11 | Source file: HFileWriterV1.java
Example 7: HFileWriterV1
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
/** Constructor that takes a path, creates and closes the output stream. */
public HFileWriterV1(Configuration conf, CacheConfig cacheConf,
    FileSystem fs, Path path, FSDataOutputStream ostream,
    int blockSize, Compression.Algorithm compress,
    HFileDataBlockEncoder blockEncoder,
    final KeyComparator comparator) throws IOException {
  super(cacheConf, ostream == null ? createOutputStream(conf, fs, path) : ostream, path,
      blockSize, compress, blockEncoder, comparator);
  SchemaMetrics.configureGlobally(conf);
}
Author: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 11 | Source file: HFileWriterV1.java
Example 8: AbstractHFileWriter
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
public AbstractHFileWriter(CacheConfig cacheConf, FSDataOutputStream outputStream, Path path,
    int blockSize, Compression.Algorithm compressAlgo, HFileDataBlockEncoder dataBlockEncoder,
    KeyComparator comparator) {
  super(null, path);
  this.outputStream = outputStream;
  this.path = path;
  this.name = path != null ? path.getName() : outputStream.toString();
  this.blockSize = blockSize;
  this.compressAlgo = compressAlgo == null ? HFile.DEFAULT_COMPRESSION_ALGORITHM : compressAlgo;
  this.blockEncoder = dataBlockEncoder != null ? dataBlockEncoder : NoOpDataBlockEncoder.INSTANCE;
  // Fall back to raw byte-order comparison when no comparator is supplied.
  this.comparator = comparator != null ? comparator : Bytes.BYTES_RAWCOMPARATOR;
  closeOutputStream = path != null;
  this.cacheConf = cacheConf;
}
Author: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 16 | Source file: AbstractHFileWriter.java
Example 9: createWriter
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
@Override
public Writer createWriter(FileSystem fs, Path path, int blockSize,
    Compression.Algorithm compress,
    final KeyComparator comparator) throws IOException {
  return new HFileWriterV2(conf, cacheConf, fs, path, blockSize,
      compress, comparator);
}
Author: lifeng5042 | Project: RStore | Lines: 8 | Source file: HFileWriterV2.java
Example 10: HFileWriterV2
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
/**
 * Constructor that takes a path, creates and closes the output stream. Takes
 * the compression algorithm name as a string.
 */
public HFileWriterV2(Configuration conf, CacheConfig cacheConf, FileSystem fs,
    Path path, int blockSize, String compressAlgoName,
    final KeyComparator comparator) throws IOException {
  this(conf, cacheConf, fs, path, blockSize,
      compressionByName(compressAlgoName), comparator);
}
Author: lifeng5042 | Project: RStore | Lines: 11 | Source file: HFileWriterV2.java
Example 11: createWriter
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
@Override
public Writer createWriter(FileSystem fs, Path path, int blockSize,
    Compression.Algorithm compressAlgo, final KeyComparator comparator)
    throws IOException {
  return new HFileWriterV1(conf, cacheConf, fs, path, blockSize,
      compressAlgo, comparator);
}
Author: lifeng5042 | Project: RStore | Lines: 8 | Source file: HFileWriterV1.java
Example 12: HFileWriterV1
import org.apache.hadoop.hbase.KeyValue.KeyComparator; // import the required package/class
/**
 * Constructor that takes a path, creates and closes the output stream. Takes
 * the compression algorithm name as a string.
 */
public HFileWriterV1(Configuration conf, CacheConfig cacheConf, FileSystem fs,
    Path path, int blockSize, String compressAlgoName,
    final KeyComparator comparator) throws IOException {
  this(conf, cacheConf, fs, path, blockSize,
      compressionByName(compressAlgoName), comparator);
}
Author: lifeng5042 | Project: RStore | Lines: 11 | Source file: HFileWriterV1.java
Note: the org.apache.hadoop.hbase.KeyValue.KeyComparator examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar code and documentation platforms. The snippets were selected from projects contributed by their respective authors; copyright remains with the original authors, and any use or redistribution should follow the corresponding project's license. Please do not republish without permission.