
Java ContextUtil Class Code Examples


This article collects typical usage examples of the Java class parquet.hadoop.util.ContextUtil. If you have been struggling with questions like "What exactly is the ContextUtil class for?", "How do I use it?", or "Where can I find examples?", then you are in luck: the curated class examples here may be just what you need.



The ContextUtil class belongs to the parquet.hadoop.util package. Twenty code examples of the class are shown below, sorted by popularity by default.
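Nearly every example that follows goes through ContextUtil.getConfiguration(...). The class exists as a compatibility shim: the way a Job or JobContext exposes its Configuration changed between Hadoop 1.x and 2.x, and ContextUtil hides that difference behind static helpers. Here is a minimal sketch of that basic pattern, assuming only the getConfiguration(JobContext) helper that the examples themselves use; the property key is invented purely for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import parquet.hadoop.util.ContextUtil;

public class ContextUtilSketch {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());

        // Read the job's Configuration through the shim instead of calling
        // job.getConfiguration() directly, so the same code works against
        // both Hadoop 1.x and 2.x context classes.
        Configuration conf = ContextUtil.getConfiguration(job);

        conf.set("parquet.demo.key", "value"); // hypothetical key, illustration only
        System.out.println(conf.get("parquet.demo.key"));
    }
}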

Example 1: getSplits

import parquet.hadoop.util.ContextUtil; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public List<InputSplit> getSplits(JobContext jobContext) throws IOException {
    Configuration configuration = ContextUtil.getConfiguration(jobContext);
    List<InputSplit> splits = new ArrayList<InputSplit>();

    if (isTaskSideMetaData(configuration)) {
        // Although not required by the API, some clients may depend on always
        // receiving ParquetInputSplit. Translation is required at some point.
        for (InputSplit split : super.getSplits(jobContext)) {
            Preconditions.checkArgument(split instanceof FileSplit,
                    "Cannot wrap non-FileSplit: " + split);
            splits.add(ParquetInputSplit.from((FileSplit) split));
        }
        return splits;

    } else {
        splits.addAll(getSplits(configuration, getFooters(jobContext)));
    }

    return splits;
}
 
Developer: grokcoder | Project: pbase | Lines: 25 | Source: ParquetInputFormat.java


Example 2: commitJob

import parquet.hadoop.util.ContextUtil; // import the required package/class
@Override
public void commitJob(JobContext jobContext) throws IOException {
    super.commitJob(jobContext);
    Configuration conf = ContextUtil.getConfiguration(jobContext);
    Path outputPath = FileOutputFormat.getOutputPath(new JobConf(conf));
    ParquetOutputCommitter.writeMetaDataFile(conf, outputPath);
}
 
Developer: grokcoder | Project: pbase | Lines: 8 | Source: MapredParquetOutputCommitter.java


Example 3: setUnboundRecordFilter

import parquet.hadoop.util.ContextUtil; // import the required package/class
public static void setUnboundRecordFilter(Job job, Class<? extends UnboundRecordFilter> filterClass) {
    Configuration conf = ContextUtil.getConfiguration(job);
    checkArgument(getFilterPredicate(conf) == null,
            "You cannot provide an UnboundRecordFilter after providing a FilterPredicate");

    conf.set(UNBOUND_RECORD_FILTER, filterClass.getName());
}
 
Developer: grokcoder | Project: pbase | Lines: 8 | Source: ParquetInputFormat.java


Example 4: createRecordReader

import parquet.hadoop.util.ContextUtil; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public RecordReader<Void, T> createRecordReader(
        InputSplit inputSplit,
        TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
    Configuration conf = ContextUtil.getConfiguration(taskAttemptContext);
    ReadSupport<T> readSupport = getReadSupport(conf);
    return new ParquetRecordReader<T>(readSupport, getFilter(conf));
}
 
Developer: grokcoder | Project: pbase | Lines: 12 | Source: ParquetInputFormat.java


Example 5: getCounterByNameAndFlag

import parquet.hadoop.util.ContextUtil; // import the required package/class
@Override
public ICounter getCounterByNameAndFlag(String groupName, String counterName, String counterFlag) {
    if (ContextUtil.getConfiguration(context).getBoolean(counterFlag, true)) {
        return new MapReduceCounterAdapter(ContextUtil.getCounter(context, groupName, counterName));
    } else {
        return new BenchmarkCounter.NullCounter();
    }
}
 
Developer: grokcoder | Project: pbase | Lines: 9 | Source: MapReduceCounterLoader.java


Example 6: initialize

import parquet.hadoop.util.ContextUtil; // import the required package/class
/**
 * {@inheritDoc}
 */
@Override
public void initialize(InputSplit inputSplit, TaskAttemptContext context)
        throws IOException, InterruptedException {
    if (context instanceof TaskInputOutputContext<?, ?, ?, ?>) {
        BenchmarkCounter.initCounterFromContext((TaskInputOutputContext<?, ?, ?, ?>) context);
    } else {
        LOG.error("Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is "
                + context.getClass().getCanonicalName());
    }

    initializeInternalReader(toParquetSplit(inputSplit), ContextUtil.getConfiguration(context));
}
 
Developer: grokcoder | Project: pbase | Lines: 16 | Source: ParquetRecordReader.java


Example 7: shouldUseParquetFlagToSetCodec

import parquet.hadoop.util.ContextUtil; // import the required package/class
public void shouldUseParquetFlagToSetCodec(String codecNameStr, CompressionCodecName expectedCodec) throws IOException {
    //Test mapreduce API
    Job job = new Job();
    Configuration conf = job.getConfiguration();
    conf.set(ParquetOutputFormat.COMPRESSION, codecNameStr);
    TaskAttemptContext task = ContextUtil.newTaskAttemptContext(conf, new TaskAttemptID(new TaskID(new JobID("test", 1), false, 1), 1));
    Assert.assertEquals(expectedCodec, CodecConfig.from(task).getCodec());

    //Test mapred API
    JobConf jobConf = new JobConf();
    jobConf.set(ParquetOutputFormat.COMPRESSION, codecNameStr);
    Assert.assertEquals(expectedCodec, CodecConfig.from(jobConf).getCodec());
}
 
Developer: grokcoder | Project: pbase | Lines: 15 | Source: CodecConfigTest.java


Example 8: shouldUseHadoopFlagToSetCodec

import parquet.hadoop.util.ContextUtil; // import the required package/class
public void shouldUseHadoopFlagToSetCodec(String codecClassStr, CompressionCodecName expectedCodec) throws IOException {
    //Test mapreduce API
    Job job = new Job();
    Configuration conf = job.getConfiguration();
    conf.setBoolean("mapred.output.compress", true);
    conf.set("mapred.output.compression.codec", codecClassStr);
    TaskAttemptContext task = ContextUtil.newTaskAttemptContext(conf, new TaskAttemptID(new TaskID(new JobID("test", 1), false, 1), 1));
    Assert.assertEquals(expectedCodec, CodecConfig.from(task).getCodec());

    //Test mapred API
    JobConf jobConf = new JobConf();
    jobConf.setBoolean("mapred.output.compress", true);
    jobConf.set("mapred.output.compression.codec", codecClassStr);
    Assert.assertEquals(expectedCodec, CodecConfig.from(jobConf).getCodec());
}
 
Developer: grokcoder | Project: pbase | Lines: 16 | Source: CodecConfigTest.java


Example 9: setup

import parquet.hadoop.util.ContextUtil; // import the required package/class
@Override
protected void setup(Context context) throws IOException, InterruptedException {
  factory = new SimpleGroupFactory(GroupWriteSupport.getSchema(ContextUtil.getConfiguration(context)));
}
 
Developer: Hanmourang | Project: hiped2 | Lines: 5 | Source: ExampleParquetMapReduce.java


Example 10: commitJob

import parquet.hadoop.util.ContextUtil; // import the required package/class
public void commitJob(JobContext jobContext) throws IOException {
    super.commitJob(jobContext);
    Configuration configuration = ContextUtil.getConfiguration(jobContext);
    writeMetaDataFile(configuration, outputPath);
}
 
Developer: grokcoder | Project: pbase | Lines: 6 | Source: ParquetOutputCommitter.java


Example 11: setTaskSideMetaData

import parquet.hadoop.util.ContextUtil; // import the required package/class
public static void setTaskSideMetaData(Job job, boolean taskSideMetadata) {
    ContextUtil.getConfiguration(job).setBoolean(TASK_SIDE_METADATA, taskSideMetadata);
}
 
Developer: grokcoder | Project: pbase | Lines: 4 | Source: ParquetInputFormat.java


Example 12: setReadSupportClass

import parquet.hadoop.util.ContextUtil; // import the required package/class
public static void setReadSupportClass(Job job, Class<?> readSupportClass) {
    ContextUtil.getConfiguration(job).set(READ_SUPPORT_CLASS, readSupportClass.getName());
}
 
Developer: grokcoder | Project: pbase | Lines: 4 | Source: ParquetInputFormat.java


Example 13: listStatus

import parquet.hadoop.util.ContextUtil; // import the required package/class
@Override
protected List<FileStatus> listStatus(JobContext jobContext) throws IOException {
    return getAllFileRecursively(super.listStatus(jobContext),
            ContextUtil.getConfiguration(jobContext));
}
 
Developer: grokcoder | Project: pbase | Lines: 6 | Source: ParquetInputFormat.java


Example 14: getFooters

import parquet.hadoop.util.ContextUtil; // import the required package/class
/**
 * @param jobContext the current job context
 * @return the footers for the files
 * @throws IOException
 */
public List<Footer> getFooters(JobContext jobContext) throws IOException {
    List<FileStatus> statuses = listStatus(jobContext);
    if (statuses.isEmpty()) {
        return Collections.emptyList();
    }
    Configuration config = ContextUtil.getConfiguration(jobContext);
    List<Footer> footers = new ArrayList<Footer>(statuses.size());
    Set<FileStatus> missingStatuses = new HashSet<FileStatus>();
    Map<Path, FileStatusWrapper> missingStatusesMap =
            new HashMap<Path, FileStatusWrapper>(missingStatuses.size());

    if (footersCache == null) {
        footersCache =
                new LruCache<FileStatusWrapper, FootersCacheValue>(Math.max(statuses.size(), MIN_FOOTER_CACHE_SIZE));
    }
    for (FileStatus status : statuses) {
        FileStatusWrapper statusWrapper = new FileStatusWrapper(status);
        FootersCacheValue cacheEntry =
                footersCache.getCurrentValue(statusWrapper);
        if (Log.DEBUG) {
            LOG.debug("Cache entry " + (cacheEntry == null ? "not " : "")
                    + " found for '" + status.getPath() + "'");
        }
        if (cacheEntry != null) {
            footers.add(cacheEntry.getFooter());
        } else {
            missingStatuses.add(status);
            missingStatusesMap.put(status.getPath(), statusWrapper);
        }
    }
    if (Log.DEBUG) {
        LOG.debug("found " + footers.size() + " footers in cache and adding up "
                + "to " + missingStatuses.size() + " missing footers to the cache");
    }


    if (missingStatuses.isEmpty()) {
        return footers;
    }

    List<Footer> newFooters = getFooters(config, missingStatuses);
    for (Footer newFooter : newFooters) {
        // Use the original file status objects to make sure we store a
        // conservative (older) modification time (i.e. in case the files and
        // footers were modified and it's not clear which version of the footers
        // we have)
        FileStatusWrapper fileStatus = missingStatusesMap.get(newFooter.getFile());
        footersCache.put(fileStatus, new FootersCacheValue(fileStatus, newFooter));
    }

    footers.addAll(newFooters);
    return footers;
}
 
Developer: grokcoder | Project: pbase | Lines: 59 | Source: ParquetInputFormat.java


Example 15: getConfiguration

import parquet.hadoop.util.ContextUtil; // import the required package/class
@Override
public Configuration getConfiguration() {
    return ContextUtil.getConfiguration(context);
}
 
Developer: grokcoder | Project: pbase | Lines: 5 | Source: CodecConfig.java


Example 16: increment

import parquet.hadoop.util.ContextUtil; // import the required package/class
@Override
public void increment(long val) {
    ContextUtil.incrementCounter(adaptee, val);
}
 
Developer: grokcoder | Project: pbase | Lines: 5 | Source: MapReduceCounterAdapter.java


Example 17: setup

import parquet.hadoop.util.ContextUtil; // import the required package/class
protected void setup(org.apache.hadoop.mapreduce.Mapper<LongWritable, Text, Void, Group>.Context context) throws java.io.IOException, InterruptedException {
    factory = new SimpleGroupFactory(GroupWriteSupport.getSchema(ContextUtil.getConfiguration(context)));
}
 
Developer: grokcoder | Project: pbase | Lines: 4 | Source: TestInputOutputFormat.java


Example 18: setup

import parquet.hadoop.util.ContextUtil; // import the required package/class
protected void setup(Context context) throws IOException, InterruptedException {
    factory = new SimpleGroupFactory(GroupWriteSupport.getSchema(ContextUtil.getConfiguration(context)));
}
 
Developer: grokcoder | Project: pbase | Lines: 4 | Source: DeprecatedInputFormatTest.java


Example 19: setSchema

import parquet.hadoop.util.ContextUtil; // import the required package/class
/**
 * set the schema being written to the job conf
 *
 * @param job    the job
 * @param schema the schema of the data
 */
public static void setSchema(Job job, MessageType schema) {
    GroupWriteSupport.setSchema(schema, ContextUtil.getConfiguration(job));
}
 
Developer: grokcoder | Project: pbase | Lines: 10 | Source: ExampleOutputFormat.java


Example 20: getSchema

import parquet.hadoop.util.ContextUtil; // import the required package/class
/**
 * retrieve the schema from the conf
 *
 * @param job the job
 * @return the schema
 */
public static MessageType getSchema(Job job) {
    return GroupWriteSupport.getSchema(ContextUtil.getConfiguration(job));
}
 
Developer: grokcoder | Project: pbase | Lines: 10 | Source: ExampleOutputFormat.java
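Taken together, Examples 11, 12, 19, and 20 show the configuration helpers you would call when wiring up a full MapReduce job. The sketch below combines them under stated assumptions: it uses the GroupReadSupport and ExampleOutputFormat example classes that ship alongside ParquetInputFormat in parquet-mr, and the job name, paths, and schema are invented for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import parquet.example.data.Group;
import parquet.hadoop.ParquetInputFormat;
import parquet.hadoop.example.ExampleOutputFormat;
import parquet.hadoop.example.GroupReadSupport;
import parquet.schema.MessageType;
import parquet.schema.MessageTypeParser;

public class ParquetJobSetup {
    public static Job buildJob(Configuration conf, Path in, Path out) throws Exception {
        Job job = Job.getInstance(conf, "parquet-context-util-demo");
        job.setJarByClass(ParquetJobSetup.class);

        // Input side: register the read support and enable task-side metadata
        // (the setters from Examples 11 and 12 write into the job conf via ContextUtil).
        job.setInputFormatClass(ParquetInputFormat.class);
        ParquetInputFormat.setReadSupportClass(job, GroupReadSupport.class);
        ParquetInputFormat.setTaskSideMetaData(job, true);
        FileInputFormat.addInputPath(job, in);

        // Output side: store the write schema in the job conf (Examples 19 and 20).
        MessageType schema = MessageTypeParser.parseMessageType(
                "message demo { required binary name (UTF8); }");
        job.setOutputFormatClass(ExampleOutputFormat.class);
        ExampleOutputFormat.setSchema(job, schema);
        FileOutputFormat.setOutputPath(job, out);

        job.setOutputKeyClass(Void.class);
        job.setOutputValueClass(Group.class);
        return job;
    }
}

Whether this compiles against a given parquet-mr release depends on the version; the point is only how the ContextUtil-backed setters slot into ordinary job setup.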



Note: The parquet.hadoop.util.ContextUtil examples in this article are collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by their developers; copyright remains with the original authors, and distribution and use are subject to each project's license. Please do not republish without permission.

