Java JobStats Class Code Examples


This article collects typical usage examples of the Java class org.apache.pig.tools.pigstats.JobStats. If you are wondering what the JobStats class does, how it is used, or what working examples look like, the curated code samples below should help.



The JobStats class belongs to the org.apache.pig.tools.pigstats package. Twenty code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; that feedback helps the system recommend better Java code examples.
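Most of the examples below share one pattern: obtain a PigStats object (for instance from PigRunner.run or PigServer.executeBatch), walk its JobGraph, and inspect each JobStats node. As a quick orientation, here is a minimal sketch of that pattern; the script name myscript.pig is only a placeholder, and the calls used (PigRunner.run, getJobGraph, getJobId, isSuccessful) are the same ones that appear in the examples that follow.

import java.util.Iterator;

import org.apache.pig.PigRunner;
import org.apache.pig.tools.pigstats.JobStats;
import org.apache.pig.tools.pigstats.PigStats;

public class JobStatsTour {
    public static void main(String[] args) throws Exception {
        // Run a Pig script; the second argument is an optional progress listener.
        PigStats stats = PigRunner.run(new String[] { "myscript.pig" }, null);

        // Walk the job DAG and report how each job finished.
        Iterator<JobStats> iter = stats.getJobGraph().iterator();
        while (iter.hasNext()) {
            JobStats js = iter.next();
            System.out.println(js.getJobId() + " successful: " + js.isSuccessful());
        }
    }
}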

Example 1: getJobs

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
/**
 * Retrieves a list of ExecJob objects from the PigStats object
 * @param stats
 * @return A list of ExecJob objects
 */
protected List<ExecJob> getJobs(PigStats stats) {
    LinkedList<ExecJob> jobs = new LinkedList<ExecJob>();
    JobGraph jGraph = stats.getJobGraph();
    Iterator<JobStats> iter = jGraph.iterator();
    while (iter.hasNext()) {
        JobStats js = iter.next();
        for (OutputStats output : js.getOutputs()) {
            if (js.isSuccessful()) {
                jobs.add(new HJob(HJob.JOB_STATUS.COMPLETED, pigContext, output
                        .getPOStore(), output.getAlias(), stats));
            } else {
                HJob hjob = new HJob(HJob.JOB_STATUS.FAILED, pigContext, output
                        .getPOStore(), output.getAlias(), stats);
                hjob.setException(js.getException());
                jobs.add(hjob);
            }
        }
    }
    return jobs;
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 26, Source: PigServer.java


Example 2: executePlan

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
private boolean executePlan(PhysicalPlan pp) throws IOException {
    boolean failed = true;
    MapReduceLauncher launcher = new MapReduceLauncher();
    PigStats stats = null;
    try {
        stats = launcher.launchPig(pp, "execute", myPig.getPigContext());
    } catch (Exception e) {
        e.printStackTrace(System.out);
        throw new IOException(e);
    }
    Iterator<JobStats> iter = stats.getJobGraph().iterator();
    while (iter.hasNext()) {
        JobStats js = iter.next();
        failed = !js.isSuccessful();
        if (failed) {
            break;
        }
    }
    return !failed;
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 21, Source: TestMultiQueryLocal.java


Example 3: testMedianMapReduceTime

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Test
public void testMedianMapReduceTime() throws Exception {

	JobConf jobConf = new JobConf();
	JobClient jobClient = Mockito.mock(JobClient.class);
	
	// mock methods to return the predefined map and reduce task reports
	Mockito.when(jobClient.getMapTaskReports(jobID)).thenReturn(mapTaskReports);
	Mockito.when(jobClient.getReduceTaskReports(jobID)).thenReturn(reduceTaskReports);

	PigStats.JobGraph jobGraph = new PigStats.JobGraph();
	JobStats jobStats = createJobStats("JobStatsTest", jobGraph);
	getJobStatsMethod("setId", JobID.class).invoke(jobStats, jobID);
	getJobStatsMethod("setSuccessful", boolean.class).invoke(jobStats, true);

	getJobStatsMethod("addMapReduceStatistics", JobClient.class, Configuration.class)
	    .invoke(jobStats, jobClient, jobConf);
	String msg = (String)getJobStatsMethod("getDisplayString", boolean.class)
	    .invoke(jobStats, false);
	
	System.out.println(JobStats.SUCCESS_HEADER);
	System.out.println(msg);
	
	assertTrue(msg.startsWith(ASSERT_STRING));
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 26, Source: TestJobStats.java
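The getJobStatsMethod(...) helper used in this example (and in Example 5 below) is defined elsewhere in the test class and is not shown in this excerpt. Presumably it resolves a non-public JobStats method via reflection and makes it accessible, roughly along the lines of the following sketch; this is only an assumption about the helper, and in Example 5 the real counterpart likely targets MRJobStats instead.

import java.lang.reflect.Method;

import org.apache.pig.tools.pigstats.JobStats;

final class JobStatsTestHelpers {
    // Hypothetical stand-in for the test's getJobStatsMethod helper: looks up a
    // (possibly package-private) method declared on JobStats and opens it up so
    // the test can invoke setters such as setId and setSuccessful reflectively.
    static Method getJobStatsMethod(String name, Class<?>... parameterTypes) throws NoSuchMethodException {
        Method m = JobStats.class.getDeclaredMethod(name, parameterTypes);
        m.setAccessible(true);
        return m;
    }
}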


Example 4: setJobParents

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
private void setJobParents(String dagName, TezOperator tezOp, Configuration conf) {
    if (tezOp.isVertexGroup()) {
        return;
    }
    // PigStats maintains a job DAG whose job ids are updated as they
    // become available. Therefore, before a job is submitted, the ids
    // of its parent jobs are already available.
    JobStats js = ((TezPigScriptStats)PigStats.get()).getVertexStats(dagName, tezOp.getOperatorKey().toString());
    if (js != null) {
        List<Operator> preds = js.getPlan().getPredecessors(js);
        if (preds != null) {
            StringBuilder sb = new StringBuilder();
            for (Operator op : preds) {
                JobStats job = (JobStats)op;
                if (sb.length() > 0) sb.append(",");
                sb.append(job.getJobId());
            }
            conf.set(PIG_PROPERTY.JOB_PARENTS.toString(), sb.toString());
        }
    }
}
 
Developer: sigmoidanalytics, Project: spork, Lines: 22, Source: TezScriptState.java


Example 5: testMedianMapReduceTime

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Test
public void testMedianMapReduceTime() throws Exception {
    JobClient jobClient = Mockito.mock(JobClient.class);

    // mock methods to return the predefined map and reduce task reports
    Mockito.when(jobClient.getMapTaskReports(jobID)).thenReturn(mapTaskReports);
    Mockito.when(jobClient.getReduceTaskReports(jobID)).thenReturn(reduceTaskReports);

    PigStats.JobGraph jobGraph = new PigStats.JobGraph();
    MRJobStats jobStats = createJobStats("JobStatsTest", jobGraph);
    getJobStatsMethod("setId", JobID.class).invoke(jobStats, jobID);
    jobStats.setSuccessful(true);

    getJobStatsMethod("addMapReduceStatistics", Iterator.class, Iterator.class)
        .invoke(jobStats, Arrays.asList(mapTaskReports).iterator(), Arrays.asList(reduceTaskReports).iterator());
    String msg = (String)getJobStatsMethod("getDisplayString")
        .invoke(jobStats);

    System.out.println(JobStats.SUCCESS_HEADER);
    System.out.println(msg);

    assertTrue(msg.startsWith(ASSERT_STRING));
}
 
Developer: sigmoidanalytics, Project: spork, Lines: 24, Source: TestMRJobStats.java


Example 6: testGetOuputSizeUsingNonFileBasedStorage4

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Test
public void testGetOuputSizeUsingNonFileBasedStorage4() throws Exception {
    // Register a comma-separated list of readers in configuration, and
    // verify that the one that supports a non-file-based uri is used.
    Configuration conf = new Configuration();
    conf.set(PigStatsOutputSizeReader.OUTPUT_SIZE_READER_KEY,
            FileBasedOutputSizeReader.class.getName() + ","
                    + DummyOutputSizeReader.class.getName());

    // ClientSystemProps needs to be initialized to instantiate HBaseStorage
    UDFContext.getUDFContext().setClientSystemProps(new Properties());
    long outputSize = JobStats.getOutputSize(
            createPOStoreForNonFileBasedSystem(new HBaseStorage("colName"), conf), conf);

    assertEquals("The dummy output size reader always returns " + DummyOutputSizeReader.SIZE,
            DummyOutputSizeReader.SIZE, outputSize);
}
 
Developer: sigmoidanalytics, Project: spork, Lines: 18, Source: TestMRJobStats.java


Example 7: testGetOuputSizeUsingNonFileBasedStorage5

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Test
public void testGetOuputSizeUsingNonFileBasedStorage5() throws Exception {
    Configuration conf = new Configuration();

    long size = 2L * 1024 * 1024 * 1024;
    long outputSize = JobStats.getOutputSize(
            createPOStoreForFileBasedSystem(size, new PigStorageWithStatistics(), conf), conf);

    // By default, FileBasedOutputSizeReader is used to compute the size of output.
    assertEquals("The returned output size is expected to be the same as the file size",
            size, outputSize);

    // Now add PigStorageWithStatistics to the unsupported store funcs list, and
    // verify that JobStats.getOutputSize() returns -1.
    conf.set(PigStatsOutputSizeReader.OUTPUT_SIZE_READER_UNSUPPORTED,
            PigStorageWithStatistics.class.getName());

    outputSize = JobStats.getOutputSize(
            createPOStoreForFileBasedSystem(size, new PigStorageWithStatistics(), conf), conf);
    assertEquals("The default output size reader returns -1 for unsupported store funcs",
            -1, outputSize);
}
 
Developer: sigmoidanalytics, Project: spork, Lines: 23, Source: TestMRJobStats.java


Example 8: jobFailedNotification

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Override
public void jobFailedNotification(String scriptId, JobStats stats) {
  if (stats.getJobId() == null) {
    logger.warn("jobId for failed job not found. This should only happen "
        + "in local mode");
    return;
  }

  PigJobDagNode node = dagNodeJobIdMap.get(stats.getJobId());
  if (node == null) {
    logger.warn("Unrecognized jobId reported for failed job: "
        + stats.getJobId());
    return;
  }

  addCompletedJobStats(node, stats);
  updateJsonFile();
}
 
Developer: azkaban, Project: azkaban-plugins, Lines: 19, Source: AzkabanPigListener.java


Example 9: assertAllDocumentsOk

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
private void assertAllDocumentsOk(String script, Configuration conf) throws Exception {
    PigServer ps = setup(script, conf);
    List<ExecJob> jobs = ps.executeBatch();
    PigStats stats = jobs.get(0).getStatistics();
    for (JobStats js : stats.getJobGraph()) {
        Counters hadoopCounters = ((MRJobStats)js).getHadoopCounters();
        assertNotNull(hadoopCounters);
        VespaCounters counters = VespaCounters.get(hadoopCounters);
        assertEquals(10, counters.getDocumentsSent());
        assertEquals(0, counters.getDocumentsFailed());
        assertEquals(10, counters.getDocumentsOk());
    }
}
 
Developer: vespa-engine, Project: vespa, Lines: 14, Source: VespaStorageTest.java


Example 10: jobFailedNotification

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Override
public void jobFailedNotification(String scriptId, JobStats jobStats) {
	logger.info("Job " + jobStats.getJobId() + " completed with status " + jobStats.getState() + " for request "
			+ requestId + " with error " + jobStats.getErrorMessage());
	requestStats.setErrorMessage(jobStats.getErrorMessage());
	try {
		PigUtils.writeStatsFile(new Path(requestPath + "/stats"), requestStats);
	} catch (Exception e) {
		logger.error("Unable to write stats file for path: " + requestPath);
	}
}
 
Developer: eBay, Project: oink, Lines: 12, Source: PigEventListener.java


Example 11: jobFailedNotification

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Override
public void jobFailedNotification(String scriptId, JobStats jobStats) {
    synchronized (listeners) {
        for (PigProgressNotificationListener listener : listeners) {
            listener.jobFailedNotification(scriptId, jobStats);
        }
    }        
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 9, Source: SyncProgressNotificationAdaptor.java


Example 12: jobFinishedNotification

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Override
public void jobFinishedNotification(String scriptId, JobStats jobStats) {
    synchronized (listeners) {
        for (PigProgressNotificationListener listener : listeners) {
            listener.jobFinishedNotification(scriptId, jobStats);
        }
    }        
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 9, Source: SyncProgressNotificationAdaptor.java


Example 13: executeBatch

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
private void executeBatch() throws IOException {
    if (mPigServer.isBatchOn()) {
        if (mExplain != null) {
            explainCurrentBatch();
        }

        if (!mLoadOnly) {
            mPigServer.executeBatch();
            PigStats stats = PigStats.get();
            JobGraph jg = stats.getJobGraph();
            Iterator<JobStats> iter = jg.iterator();
            while (iter.hasNext()) {
                JobStats js = iter.next();
                if (!js.isSuccessful()) {
                    mNumFailedJobs++;
                    Exception exp = (js.getException() != null) ? js.getException()
                            : new ExecException(
                                    "Job failed, hadoop does not return any error message",
                                    2244);
                    LogUtils.writeLog(exp,
                            mPigServer.getPigContext().getProperties().getProperty("pig.logfile"),
                            log,
                            "true".equalsIgnoreCase(mPigServer.getPigContext().getProperties().getProperty("verbose")),
                            "Pig Stack Trace");
                } else {
                    mNumSucceededJobs++;
                }
            }
        }
    }
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 32, Source: GruntParser.java


Example 14: test

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Test
public void test() throws Exception {
    File input = File.createTempFile("test", "input");
    input.deleteOnExit();
    File output = File.createTempFile("test", "output");
    output.delete();
    Util.createLocalInputFile(input.getAbsolutePath(), new String[] {
        "1,2,3",
        "1,1,3",
        "1,1,1",
        "3,1,1",
        "1,2,1",
    });
    PigServer pigServer = new PigServer(ExecType.LOCAL);
    pigServer.setBatchOn();
    pigServer.registerQuery(
            "A = LOAD '" + input.getAbsolutePath() + "' using PigStorage();\n"
        +  	"B = GROUP A BY $0;\n"
        + 	"A = FOREACH B GENERATE COUNT(A);\n"
        +	"STORE A INTO '" + output.getAbsolutePath() + "';");
    ExecJob job = pigServer.executeBatch().get(0);
    List<OriginalLocation> originalLocations = job.getPOStore().getOriginalLocations();
    Assert.assertEquals(1, originalLocations.size());
    OriginalLocation originalLocation = originalLocations.get(0);
    Assert.assertEquals(4, originalLocation.getLine());
    Assert.assertEquals(0, originalLocation.getOffset());
    Assert.assertEquals("A", originalLocation.getAlias());
    JobStats jStats = (JobStats)job.getStatistics().getJobGraph().getSinks().get(0);
    Assert.assertEquals("M: A[1,4],A[3,4],B[2,4] C: A[3,4],B[2,4] R: A[3,4]", jStats.getAliasLocation());
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 31, Source: TestLocationInPhysicalPlan.java


Example 15: simpleTest

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Test
public void simpleTest() throws Exception {
    PrintWriter w = new PrintWriter(new FileWriter(PIG_FILE));
    w.println("A = load '" + INPUT_FILE + "' as (a0:int, a1:int, a2:int);");
    w.println("B = group A by a0;");
    w.println("C = foreach B generate group, COUNT(A);");
    w.println("store C into '" + OUTPUT_FILE + "';");
    w.close();
    
    try {
        String[] args = { "-Dstop.on.failure=true", "-Dopt.multiquery=false", "-Daggregate.warning=false", PIG_FILE };
        PigStats stats = PigRunner.run(args, new TestNotificationListener());
 
        assertTrue(stats.isSuccessful());
        
        assertEquals(1, stats.getNumberJobs());
        String name = stats.getOutputNames().get(0);
        assertEquals(OUTPUT_FILE, name);
        assertEquals(12, stats.getBytesWritten());
        assertEquals(3, stats.getRecordWritten());       
        
        assertEquals("A,B,C",
                ((JobStats)stats.getJobGraph().getSinks().get(0)).getAlias());
        
        Configuration conf = ConfigurationUtil.toConfiguration(stats.getPigProperties());
        assertTrue(conf.getBoolean("stop.on.failure", false));           
        assertTrue(!conf.getBoolean("aggregate.warning", true));
        assertTrue(!conf.getBoolean("opt.multiquery", true));
    } finally {
        new File(PIG_FILE).delete();
        Util.deleteFile(cluster, OUTPUT_FILE);
    }
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 34, Source: TestPigRunner.java


Example 16: scriptsInDfsTest

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Test
public void scriptsInDfsTest() throws Exception {
    PrintWriter w = new PrintWriter(new FileWriter(PIG_FILE));
    w.println("A = load '" + INPUT_FILE + "' as (a0:int, a1:int, a2:int);");
    w.println("B = group A by a0;");
    w.println("C = foreach B generate group, COUNT(A);");
    w.println("store C into '" + OUTPUT_FILE + "';");
    w.close();
    Util.copyFromLocalToCluster(cluster, PIG_FILE, PIG_FILE);
    Path inputInDfs = new Path(cluster.getFileSystem().getHomeDirectory(), PIG_FILE);
    
    try {
        String[] args = { inputInDfs.toString() };
        PigStats stats = PigRunner.run(args, new TestNotificationListener());
 
        assertTrue(stats.isSuccessful());
        
        assertTrue(stats.getJobGraph().size() == 1);
        String name = stats.getOutputNames().get(0);
        assertEquals(OUTPUT_FILE, name);
        assertEquals(12, stats.getBytesWritten());
        assertEquals(3, stats.getRecordWritten());       
        
        assertEquals("A,B,C",
                ((JobStats)stats.getJobGraph().getSinks().get(0)).getAlias());
    } finally {
        new File(PIG_FILE).delete();
        Util.deleteFile(cluster, PIG_FILE);
        Util.deleteFile(cluster, OUTPUT_FILE);
    }
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 32, Source: TestPigRunner.java


Example 17: orderByTest

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Test
public void orderByTest() throws Exception {
    PrintWriter w = new PrintWriter(new FileWriter(PIG_FILE));
    w.println("A = load '" + INPUT_FILE + "' as (a0:int, a1:int, a2:int);");
    w.println("B = order A by a0;");
    w.println("C = limit B 2;");
    w.println("store C into '" + OUTPUT_FILE + "';");
    w.close();
    String[] args = { PIG_FILE };
    try {
        PigStats stats = PigRunner.run(args, new TestNotificationListener());
        assertTrue(stats.isSuccessful());
        assertTrue(stats.getJobGraph().size() == 4);
        assertTrue(stats.getJobGraph().getSinks().size() == 1);
        assertTrue(stats.getJobGraph().getSources().size() == 1);
        JobStats js = (JobStats) stats.getJobGraph().getSinks().get(0);
        assertEquals(OUTPUT_FILE, js.getOutputs().get(0).getName());
        assertEquals(2, js.getOutputs().get(0).getNumberRecords());
        assertEquals(12, js.getOutputs().get(0).getBytes());
        assertEquals(OUTPUT_FILE, stats.getOutputNames().get(0));
        assertEquals(2, stats.getRecordWritten());
        assertEquals(12, stats.getBytesWritten());
        
        assertEquals("A", ((JobStats) stats.getJobGraph().getSources().get(
                0)).getAlias());
        assertEquals("B", ((JobStats) stats.getJobGraph().getPredecessors(
                js).get(0)).getAlias());
        assertEquals("B", js.getAlias()); 
    } finally {
        new File(PIG_FILE).delete();
        Util.deleteFile(cluster, OUTPUT_FILE);
    }
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 34, Source: TestPigRunner.java


Example 18: MQDepJobFailedTest

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
@Test
public void MQDepJobFailedTest() throws Exception {
    final String OUTPUT_FILE_2 = "output2";
    PrintWriter w = new PrintWriter(new FileWriter(PIG_FILE));
    w.println("A = load '" + INPUT_FILE + "' as (name:chararray, a1:int, a2:int);");
    w.println("store A into '" + OUTPUT_FILE_2 + "';");
    w.println("B = FOREACH A GENERATE org.apache.pig.test.utils.UPPER(name);");
    w.println("C= order B by $0;");
    w.println("store C into '" + OUTPUT_FILE + "';");
    w.close();
    try {
        String[] args = { PIG_FILE };
        PigStats stats = PigRunner.run(args, null);
        Iterator<JobStats> iter = stats.getJobGraph().iterator();
        while (iter.hasNext()) {
             JobStats js=iter.next();
             if(js.getState().name().equals("FAILED")) {
                 List<Operator> ops=stats.getJobGraph().getSuccessors(js);
                 for(Operator op : ops ) {
                     assertEquals(((JobStats)op).getState().toString(), "UNKNOWN");
                 }
             }
        }
    } finally {
        new File(PIG_FILE).delete();
        Util.deleteFile(cluster, OUTPUT_FILE);
        Util.deleteFile(cluster, OUTPUT_FILE_2);
    }
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 30, Source: TestPigRunner.java


Example 19: testStopOnFailure

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
/**
 * PIG-2780: In this test case, Pig submits three jobs at the same time and
 * one of them will fail due to nonexistent input file. If users enable
 * stop.on.failure, then Pig should immediately stop if anyone of the three
 * jobs has failed.
 */
@Test
public void testStopOnFailure() throws Exception {
    
    PrintWriter w1 = new PrintWriter(new FileWriter(PIG_FILE));
    w1.println("A1 = load '" + INPUT_FILE + "';");
    w1.println("B1 = load 'nonexist';");
    w1.println("C1 = load '" + INPUT_FILE + "';");
    w1.println("A2 = distinct A1;");
    w1.println("B2 = distinct B1;");
    w1.println("C2 = distinct C1;");
    w1.println("ret = union A2,B2,C2;");
    w1.println("store ret into 'tmp/output';");
    w1.close();
    
    try {
        String[] args = { "-F", PIG_FILE };
        PigStats stats = PigRunner.run(args, new TestNotificationListener());
 
        assertTrue(!stats.isSuccessful());
        
        int successfulJobs = 0;
        Iterator<Operator> it = stats.getJobGraph().getOperators();
        while (it.hasNext()){
            JobStats js = (JobStats)it.next();
            if (js.isSuccessful())
                successfulJobs++;
        }
        
        // we should have less than 2 successful jobs
        assertTrue("Should have less than 2 successful jobs", successfulJobs < 2);
        
    } finally {
        new File(PIG_FILE).delete();
        Util.deleteFile(cluster, OUTPUT_FILE);
        Util.deleteFile(cluster, "tmp/output");
    }
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 44, Source: TestPigRunner.java


Example 20: doTest

import org.apache.pig.tools.pigstats.JobStats; // import the dependent package/class
private void doTest(double bytes_per_reducer, int default_parallel,
                    int parallel, int actual_parallel) throws IOException {
    try {
        String[] args = { PIG_FILE };
        PigStats stats = PigRunner.run(args, null);

        assertTrue(stats.isSuccessful());

        // get the skew-join job stat
        JobStats js = (JobStats) stats.getJobGraph().getSinks().get(0);
        assertEquals(actual_parallel, js.getNumberReduces());

        // estimation should only kick in if parallel and default_parallel are not set
        long estimatedReducers = -1;
        if (parallel < 1 && default_parallel < 1) {
            double fileSize = (double)(new File("test/org/apache/pig/test/data/passwd").length());
            int inputFiles = js.getInputs().size();
            estimatedReducers = Math.min((long)Math.ceil(fileSize/(double)bytes_per_reducer) * inputFiles, 999);
        }

        Util.assertParallelValues(default_parallel, parallel, estimatedReducers,
                actual_parallel, js.getInputs().get(0).getConf());

    } catch (Exception e) {
        assertNull("Exception thrown during verifySkewJoin", e);
    } finally {
        new File(PIG_FILE).delete();
        Util.deleteFile(cluster, OUTPUT_FILE);
    }
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 31, Source: TestNumberOfReducers.java
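The estimatedReducers arithmetic in doTest is easy to check by hand: the estimate is ceil(fileSize / bytes_per_reducer), multiplied by the number of input files and capped at 999, and it only applies when neither parallel nor default_parallel is set. Here is a small self-contained illustration with made-up numbers; the 1,000,000-byte file size and 300,000 bytes_per_reducer below are not taken from the test.

public class ReducerEstimateDemo {
    public static void main(String[] args) {
        // Illustrative values only; the real test derives fileSize from
        // test/org/apache/pig/test/data/passwd.
        double fileSize = 1_000_000d;
        double bytesPerReducer = 300_000d;
        int inputFiles = 1;

        // Same formula as doTest: ceil(size / bytes_per_reducer) * #inputs, capped at 999 reducers.
        long estimatedReducers = Math.min((long) Math.ceil(fileSize / bytesPerReducer) * inputFiles, 999);
        System.out.println("estimated reducers = " + estimatedReducers); // prints 4
    }
}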



Note: The org.apache.pig.tools.pigstats.JobStats examples in this article are collected from source code and documentation platforms such as GitHub/MSDocs. The snippets are taken from open-source projects contributed by various developers, and copyright remains with the original authors; refer to each project's license before distributing or reusing the code. Do not reproduce this article without permission.

