Java OrcOutputFormat Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat. If you are wondering what OrcOutputFormat is for, how to use it, or want concrete examples of it in action, the curated snippets below should help.



The OrcOutputFormat class belongs to the org.apache.hadoop.hive.ql.io.orc package. Eleven code examples of the class are shown below, selected from open-source projects and ordered by popularity.
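
Several of the Presto OrcTester excerpts below call a createTableProperties helper that is not included in the snippets. The following is a minimal sketch of what it likely does, reconstructed from the inline Properties built in Example 3; the exact implementation is an assumption:

import java.util.Properties;

// Hypothetical reconstruction of the helper used by the OrcTester excerpts
// below; Example 3 builds the same two properties inline.
static Properties createTableProperties(String name, String type)
{
    Properties tableProperties = new Properties();
    tableProperties.setProperty("columns", name);        // comma-separated column names
    tableProperties.setProperty("columns.types", type);  // comma-separated Hive type names
    return tableProperties;
}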

Example 1: createOrcRecordWriter

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
static RecordWriter createOrcRecordWriter(File outputFile, Format format, Compression compression, ObjectInspector columnObjectInspector)
        throws IOException
{
    JobConf jobConf = new JobConf();
    // Choose the ORC writer version (0.11 or 0.12) and the default compression codec
    jobConf.set("hive.exec.orc.write.format", format == ORC_12 ? "0.12" : "0.11");
    jobConf.set("hive.exec.orc.default.compress", compression.name());
    ReaderWriterProfiler.setProfilerOptions(jobConf);

    return new OrcOutputFormat().getHiveRecordWriter(
            jobConf,
            new Path(outputFile.toURI()),
            Text.class,
            compression != NONE,
            createTableProperties("test", columnObjectInspector.getTypeName()),
            () -> { }
    );
}
 
Author: y-lan, Project: presto, Lines: 18, Source: OrcTester.java
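
The returned RecordWriter is used by serializing rows through an OrcSerde and closing the writer when done. A minimal usage sketch, not taken from the original project (the single-column row shape is an assumption that matches the one-field struct inspectors used in these tests):

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hive.ql.exec.FileSinkOperator.RecordWriter;
import org.apache.hadoop.hive.ql.io.orc.OrcSerde;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;

// Usage sketch: write one row and finalize the file. The row object must
// match the structure described by rowInspector; for a standard one-field
// struct inspector a singleton list works.
static void writeSingleRow(RecordWriter writer, Object value, ObjectInspector rowInspector)
        throws IOException
{
    OrcSerde serde = new OrcSerde();
    writer.write(serde.serialize(Collections.singletonList(value), rowInspector));
    writer.close(false); // false = normal completion, writes the ORC footer
}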


Example 2: createDwrfRecordWriter

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
private static RecordWriter createDwrfRecordWriter(File outputFile, Compression compressionCodec, ObjectInspector columnObjectInspector)
        throws IOException
{
    JobConf jobConf = new JobConf();
    jobConf.set("hive.exec.orc.default.compress", compressionCodec.name());
    jobConf.set("hive.exec.orc.compress", compressionCodec.name());
    OrcConf.setIntVar(jobConf, OrcConf.ConfVars.HIVE_ORC_ENTROPY_STRING_THRESHOLD, 1);
    OrcConf.setIntVar(jobConf, OrcConf.ConfVars.HIVE_ORC_DICTIONARY_ENCODING_INTERVAL, 2);
    OrcConf.setBoolVar(jobConf, OrcConf.ConfVars.HIVE_ORC_BUILD_STRIDE_DICTIONARY, true);
    ReaderWriterProfiler.setProfilerOptions(jobConf);

    return new com.facebook.hive.orc.OrcOutputFormat().getHiveRecordWriter(
            jobConf,
            new Path(outputFile.toURI()),
            Text.class,
            compressionCodec != NONE,
            createTableProperties("test", columnObjectInspector.getTypeName()),
            () -> { }
    );
}
 
Author: y-lan, Project: presto, Lines: 21, Source: OrcTester.java


Example 3: createOrcRecordWriter

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
private static FileSinkOperator.RecordWriter createOrcRecordWriter(File outputFile, Format format, Compression compression, ObjectInspector columnObjectInspector)
        throws IOException
{
    JobConf jobConf = new JobConf();
    jobConf.set("hive.exec.orc.write.format", format == ORC_12 ? "0.12" : "0.11");
    jobConf.set("hive.exec.orc.default.compress", compression.name());
    ReaderWriterProfiler.setProfilerOptions(jobConf);

    Properties tableProperties = new Properties();
    tableProperties.setProperty("columns", "test");
    tableProperties.setProperty("columns.types", columnObjectInspector.getTypeName());
    tableProperties.setProperty("orc.stripe.size", "1200000");

    return new OrcOutputFormat().getHiveRecordWriter(
            jobConf,
            new Path(outputFile.toURI()),
            Text.class,
            compression != NONE,
            tableProperties,
            () -> { });
}
 
Author: y-lan, Project: presto, Lines: 22, Source: TestCachingOrcDataSource.java


Example 4: createOrcRecordWriter

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
static RecordWriter createOrcRecordWriter(File outputFile, Format format, Compression compression, ObjectInspector columnObjectInspector)
        throws IOException
{
    JobConf jobConf = new JobConf();
    jobConf.set("hive.exec.orc.write.format", format == ORC_12 ? "0.12" : "0.11");
    jobConf.set("hive.exec.orc.default.compress", compression.name());

    return new OrcOutputFormat().getHiveRecordWriter(
            jobConf,
            new Path(outputFile.toURI()),
            Text.class,
            compression != NONE,
            createTableProperties("test", columnObjectInspector.getTypeName()),
            () -> { }
    );
}
 
Author: splicemachine, Project: spliceengine, Lines: 17, Source: OrcTester.java


Example 5: createDwrfRecordWriter

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
private static RecordWriter createDwrfRecordWriter(File outputFile, Compression compressionCodec, ObjectInspector columnObjectInspector)
        throws IOException
{
    JobConf jobConf = new JobConf();
    jobConf.set("hive.exec.orc.default.compress", compressionCodec.name());
    jobConf.set("hive.exec.orc.compress", compressionCodec.name());
    OrcConf.setIntVar(jobConf, OrcConf.ConfVars.HIVE_ORC_ENTROPY_STRING_THRESHOLD, 1);
    OrcConf.setIntVar(jobConf, OrcConf.ConfVars.HIVE_ORC_DICTIONARY_ENCODING_INTERVAL, 2);
    OrcConf.setBoolVar(jobConf, OrcConf.ConfVars.HIVE_ORC_BUILD_STRIDE_DICTIONARY, true);
    return new OrcOutputFormat().getHiveRecordWriter(
            jobConf,
            new Path(outputFile.toURI()),
            Text.class,
            compressionCodec != NONE,
            createTableProperties("test", columnObjectInspector.getTypeName()),
            () -> { }
    );
}
 
Author: splicemachine, Project: spliceengine, Lines: 19, Source: OrcTester.java


Example 6: createOrcRecordWriter

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
private static FileSinkOperator.RecordWriter createOrcRecordWriter(File outputFile, Format format, Compression compression, ObjectInspector columnObjectInspector)
        throws IOException
{
    JobConf jobConf = new JobConf();
    jobConf.set("hive.exec.orc.write.format", format == ORC_12 ? "0.12" : "0.11");
    jobConf.set("hive.exec.orc.default.compress", compression.name());

    Properties tableProperties = new Properties();
    tableProperties.setProperty("columns", "test");
    tableProperties.setProperty("columns.types", columnObjectInspector.getTypeName());
    tableProperties.setProperty("orc.stripe.size", "1200000");

    return new OrcOutputFormat().getHiveRecordWriter(
            jobConf,
            new Path(outputFile.toURI()),
            Text.class,
            compression != NONE,
            tableProperties,
            () -> { });
}
 
Author: splicemachine, Project: spliceengine, Lines: 21, Source: TestCachingOrcDataSource.java


Example 7: configure

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
@Override
public void configure() {
	super.configure();
    this.orcSerde = new OrcSerde();
    this.outputFormat = new OrcOutputFormat();
    
    this.columnTypeList = Lists.newArrayList();
    for(String columnType : columnTypes) {
        this.columnTypeList.add(HdfsUtil.columnTypeToObjectInspetor(columnType));
    }
    this.inspector = ObjectInspectorFactory
            .getStandardStructObjectInspector(this.columnNames, this.columnTypeList);

    Class<? extends CompressionCodec> codecClass = null;
    if(CompressEnum.NONE.name().equalsIgnoreCase(compress)){
        codecClass = null;
    } else if(CompressEnum.GZIP.name().equalsIgnoreCase(compress)){
        codecClass = org.apache.hadoop.io.compress.GzipCodec.class;
    } else if (CompressEnum.BZIP2.name().equalsIgnoreCase(compress)) {
        codecClass = org.apache.hadoop.io.compress.BZip2Codec.class;
    } else if(CompressEnum.SNAPPY.name().equalsIgnoreCase(compress)) {
        // TODO: support this once the requirement is confirmed; the user must install SnappyCodec
        codecClass = org.apache.hadoop.io.compress.SnappyCodec.class;
    } else {
        throw new IllegalArgumentException("Unsupported compress format: " + compress);
    }

    if(codecClass != null){
        this.outputFormat.setOutputCompressorClass(jobConf, codecClass);
    }
}
 
Author: DTStack, Project: jlogstash-output-plugin, Lines: 32, Source: HdfsOrcOutputFormat.java
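
The HdfsUtil.columnTypeToObjectInspetor helper called in configure() is not part of the excerpt. The sketch below shows what such a mapping might look like using Hive's standard primitive inspectors; the body is hypothetical, not the project's actual code (the method name keeps the original identifier's spelling):

import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

// Hypothetical mapping from a Hive type name to the matching standard
// Java object inspector; extend with more cases as needed.
static ObjectInspector columnTypeToObjectInspetor(String columnType)
{
    switch (columnType.toUpperCase()) {
        case "INT":
            return PrimitiveObjectInspectorFactory.javaIntObjectInspector;
        case "BIGINT":
            return PrimitiveObjectInspectorFactory.javaLongObjectInspector;
        case "DOUBLE":
            return PrimitiveObjectInspectorFactory.javaDoubleObjectInspector;
        case "STRING":
            return PrimitiveObjectInspectorFactory.javaStringObjectInspector;
        default:
            throw new IllegalArgumentException("Unsupported column type: " + columnType);
    }
}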


Example 8: getOrcWriterConstructor

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
private static Constructor<? extends RecordWriter> getOrcWriterConstructor()
{
    try {
        // The inner OrcRecordWriter class is package-private, so load it by
        // name and open up its (Path, WriterOptions) constructor reflectively
        String writerClassName = OrcOutputFormat.class.getName() + "$OrcRecordWriter";
        Constructor<? extends RecordWriter> constructor = OrcOutputFormat.class.getClassLoader()
                .loadClass(writerClassName).asSubclass(RecordWriter.class)
                .getDeclaredConstructor(Path.class, OrcFile.WriterOptions.class);
        constructor.setAccessible(true);
        return constructor;
    }
    catch (ReflectiveOperationException e) {
        throw Throwables.propagate(e);
    }
}
 
Author: y-lan, Project: presto, Lines: 15, Source: OrcFileWriter.java
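
Once obtained, the constructor is invoked with a target path and writer options. A minimal usage sketch (an assumption about the call site, which is not part of the excerpt):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.FileSinkOperator.RecordWriter;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;

// Usage sketch: instantiate the package-private OrcRecordWriter through
// the reflected (Path, WriterOptions) constructor.
static RecordWriter newOrcRecordWriter(Path target, Configuration conf)
        throws ReflectiveOperationException
{
    OrcFile.WriterOptions options = OrcFile.writerOptions(conf);
    return getOrcWriterConstructor().newInstance(target, options);
}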


Example 9: flushWriter

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
private static void flushWriter(FileSinkOperator.RecordWriter writer)
        throws IOException, ReflectiveOperationException
{
    // Reach into the private "writer" field of the OrcRecordWriter inner
    // class and force an intermediate footer to be written
    Field field = OrcOutputFormat.class.getClassLoader()
            .loadClass(OrcOutputFormat.class.getName() + "$OrcRecordWriter")
            .getDeclaredField("writer");
    field.setAccessible(true);
    ((Writer) field.get(writer)).writeIntermediateFooter();
}
 
Author: y-lan, Project: presto, Lines: 10, Source: TestOrcReaderPositions.java
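
writeIntermediateFooter flushes everything written so far, so the still-open file can already be read back. A sketch of how a test might combine it with a normal close (an assumption; the surrounding test is not shown):

import java.io.IOException;
import org.apache.hadoop.hive.ql.exec.FileSinkOperator;

// Usage sketch: flush mid-stream so earlier rows become readable,
// then finish the file with a normal close.
static void flushThenClose(FileSinkOperator.RecordWriter writer)
        throws IOException, ReflectiveOperationException
{
    flushWriter(writer); // rows written so far become visible to readers
    writer.close(false); // writes the final footer
}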


Example 10: TestPreparer

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
public TestPreparer(String tempFilePath)
        throws Exception
{
    OrcSerde serde = new OrcSerde();
    schema = new Properties();
    schema.setProperty("columns",
            testColumns.stream()
                    .map(TestColumn::getName)
                    .collect(Collectors.joining(",")));
    schema.setProperty("columns.types",
            testColumns.stream()
                    .map(TestColumn::getType)
                    .collect(Collectors.joining(",")));
    schema.setProperty(FILE_INPUT_FORMAT, OrcInputFormat.class.getName());
    schema.setProperty(SERIALIZATION_LIB, serde.getClass().getName());

    partitionKeys = testColumns.stream()
            .filter(TestColumn::isPartitionKey)
            .map(input -> new HivePartitionKey(input.getName(), HiveType.valueOf(input.getObjectInspector().getTypeName()), (String) input.getWriteValue()))
            .collect(toList());

    ImmutableList.Builder<HiveColumnHandle> columnsBuilder = ImmutableList.builder();
    ImmutableList.Builder<Type> typesBuilder = ImmutableList.builder();
    int nextHiveColumnIndex = 0;
    for (int i = 0; i < testColumns.size(); i++) {
        TestColumn testColumn = testColumns.get(i);
        int columnIndex = testColumn.isPartitionKey() ? -1 : nextHiveColumnIndex++;

        ObjectInspector inspector = testColumn.getObjectInspector();
        HiveType hiveType = HiveType.valueOf(inspector.getTypeName());
        Type type = hiveType.getType(TYPE_MANAGER);

        columnsBuilder.add(new HiveColumnHandle("client_id", testColumn.getName(), hiveType, type.getTypeSignature(), columnIndex, testColumn.isPartitionKey()));
        typesBuilder.add(type);
    }
    columns = columnsBuilder.build();
    types = typesBuilder.build();

    fileSplit = createTestFile(tempFilePath, new OrcOutputFormat(), serde, null, testColumns, NUM_ROWS);
}
 
Author: y-lan, Project: presto, Lines: 41, Source: TestOrcPageSourceMemoryTracking.java


Example 11: getOrcWriterConstructor

import org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat; // import the required package/class
private static Constructor<? extends RecordWriter> getOrcWriterConstructor()
{
    try {
        Constructor<? extends RecordWriter> constructor = OrcOutputFormat.class.getClassLoader()
                .loadClass(ORC_RECORD_WRITER)
                .asSubclass(RecordWriter.class)
                .getDeclaredConstructor(Path.class, WriterOptions.class);
        constructor.setAccessible(true);
        return constructor;
    }
    catch (ReflectiveOperationException e) {
        throw Throwables.propagate(e);
    }
}
 
Author: y-lan, Project: presto, Lines: 15, Source: TestOrcPageSourceMemoryTracking.java



Note: the org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects, and copyright remains with the original authors; consult each project's license before using or redistributing the code. Do not republish without permission.

