
Java Util Class Code Examples


This article collects typical usage examples of the Java class org.apache.pig.test.Util. If you are wondering what the Util class does, how to use it, or what calling code looks like in practice, the curated examples below should help.



The Util class belongs to the org.apache.pig.test package. The 20 code examples below demonstrate its use, ordered by popularity.
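A recurring pattern in the examples below is passing a file path through Util.encodeEscape before splicing it into a Pig Latin string, so that backslashes (e.g. in Windows paths) survive Pig's string parsing. As a rough standalone illustration of the idea (a sketch only, not Pig's actual implementation; the class and method names here are invented):

```java
public class PathEscapeSketch {
    // Hypothetical stand-in for org.apache.pig.test.Util.encodeEscape:
    // double every backslash so the path survives Pig Latin string parsing.
    static String escapeForPigLatin(String path) {
        return path.replace("\\", "\\\\");
    }

    public static void main(String[] args) {
        String windowsPath = "C:\\tmp\\data.csv"; // made-up input path
        // Assemble a LOAD statement the same way the examples below do.
        String query = "A = LOAD '" + escapeForPigLatin(windowsPath)
                + "' USING PigStorage(',') as (a,b,c);";
        System.out.println(query);
    }
}
```

This mirrors the manual `filename.replace("\\", "\\\\")` calls that appear alongside Util.encodeEscape in Examples 2 and 7.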

Example 1: setup

import org.apache.pig.test.Util; // import the required package/class
@Before
public void setup() throws IOException {
    pig = new PigServer(ExecType.LOCAL);

    Util.deleteDirectory(new File(dataDir));
    try {
        pig.mkdirs(dataDir);

        Util.createLocalInputFile(dataDir + scalarInput,
            new String[] {
                "{ \"i\": 1, \"l\": 10, \"f\": 2.718, \"d\": 3.1415, \"b\": \"17\", \"c\": \"aardvark\" }",
                "{ \"i\": 2, \"l\": 100, \"f\": 1.234, \"d\": 3.3333, \"b\": null, \"c\": \"17.0\" }"
        });

        Util.createLocalInputFile(dataDir + complexInput,
            new String[] {
                "{ \"tuple\": { \"a\": 1, \"b\": 2 }, \"nested_tuple\": { \"a\": 1, \"b\": { \"c\": 2, \"d\": 3 } }, \"bag\": [{ \"a\": 1, \"b\": 2 }, { \"a\": 3, \"b\": 4 }], \"nested_bag\": [{\"a\": 1, \"b\": [{ \"c\": 2, \"d\": 3 }, { \"c\": 4, \"d\": 5 }]}], \"map\": { \"a\": 1, \"b\": 2 }, \"nested_map\": { \"a\": { \"b\": 1, \"c\": 2 } } }",
                "{ \"tuple\": { \"a\": 3, \"b\": 4 }, \"nested_tuple\": { \"a\": 4, \"b\": { \"c\": 5, \"d\": 6 } }, \"bag\": [{ \"a\": 5, \"b\": 6 }, { \"a\": 7, \"b\": 8 }], \"nested_bag\": [{\"a\": 6, \"b\": [{ \"c\": 7, \"d\": 8 }, { \"c\": 9, \"d\": 0 }]}], \"map\": { \"a\": 3, \"b\": 4 }, \"nested_map\": { \"a\": { \"b\": 3, \"c\": 4 } } }"
        });

        Util.createLocalInputFile(dataDir + nestedArrayInput,
            new String[] {
                "{ \"arr\": [1, 2, 3, 4], \"nested_arr\": [[1, 2], [3, 4]], \"nested_arr_2\": [[1, 2], [3, 4]], \"very_nested_arr\": [[[1, 2], [3, 4]], [[5, 6], [7, 6]]], \"i\": 9 }"
        });
    } catch (IOException e) {
        throw e; // propagate rather than silently swallowing the failure
    }
}
 
Author: mortardata | Project: pig-json | Lines: 27 | Source: TestFromJsonInferSchema.java
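Example 1 relies on Util.createLocalInputFile(path, lines) to materialize the JSON test inputs. Outside the Pig test jar, the same effect takes only a few lines of java.nio; the sketch below is an illustrative stand-in under that assumption (the class name is invented):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;

public class LocalInputFileSketch {
    // Rough stand-in for Util.createLocalInputFile(String, String[]):
    // writes each element of lines as one line of a new local file.
    static Path createLocalInputFile(String path, String[] lines) throws IOException {
        Path p = Paths.get(path);
        if (p.getParent() != null) {
            Files.createDirectories(p.getParent()); // make sure the data dir exists
        }
        return Files.write(p, Arrays.asList(lines));
    }

    public static void main(String[] args) throws IOException {
        Path p = createLocalInputFile("build/tmp-sketch/scalar.json", new String[] {
                "{ \"i\": 1, \"l\": 10 }",
                "{ \"i\": 2, \"l\": 100 }"
        });
        System.out.println(Files.readAllLines(p).size()); // number of lines written
    }
}
```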


Example 2: runQuery

import org.apache.pig.test.Util; // import the required package/class
private void runQuery(String outputPath, String compressionType)
      throws Exception, ExecException, IOException, FrontendException {

   // create a data file
   String filename = TestHelper.createTempFile(data, "");
   PigServer pig = new PigServer(LOCAL);
   filename = filename.replace("\\", "\\\\");
   patternString = patternString.replace("\\", "\\\\");
   String query = "A = LOAD '" + Util.encodeEscape(filename)
         + "' USING PigStorage(',') as (a,b,c);";

   String query2 = "STORE A INTO '" + Util.encodeEscape(outputPath)
         + "' USING org.apache.pig.piggybank.storage.MultiStorage" + "('"
         + Util.encodeEscape(outputPath) + "','0', '" + compressionType + "', '\\t');";

   // Run Pig
   pig.setBatchOn();
   pig.registerQuery(query);
   pig.registerQuery(query2);

   pig.executeBatch();
}
 
Author: sigmoidanalytics | Project: spork | Lines: 23 | Source: TestMultiStorageCompression.java


Example 3: testPredicatePushdownLocal

import org.apache.pig.test.Util; // import the required package/class
private void testPredicatePushdownLocal(String filterStmt, int expectedRows) throws IOException {

        PigServer pigServer_disabledRule = new PigServer(ExecType.LOCAL);
        // Test with PredicatePushdownOptimizer disabled.
        HashSet<String> disabledOptimizerRules = new HashSet<String>();
        disabledOptimizerRules.add("PredicatePushdownOptimizer");
        pigServer_disabledRule.getPigContext().getProperties().setProperty(PigImplConstants.PIG_OPTIMIZER_RULES_KEY,
                ObjectSerializer.serialize(disabledOptimizerRules));
        pigServer_disabledRule.registerQuery("B = load '" + INPUT + "' using OrcStorage();");
        pigServer_disabledRule.registerQuery("C = filter B by " + filterStmt + ";");

        // Test with PredicatePushdownOptimizer enabled.
        pigServer.registerQuery("D = load '" + INPUT + "' using OrcStorage();");
        pigServer.registerQuery("E = filter D by " + filterStmt + ";");

        //Verify that results are same
        Util.checkQueryOutputs(pigServer_disabledRule.openIterator("C"), pigServer.openIterator("E"), expectedRows);
    }
 
Author: sigmoidanalytics | Project: spork | Lines: 19 | Source: TestOrcStoragePushdown.java


Example 4: testRecordWithFieldSchema

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testRecordWithFieldSchema() throws IOException {
    PigSchema2Avro.setTupleIndex(1);
    String output= outbasedir + "testRecordWithFieldSchema";
    String expected = basedir + "expected_testRecordWithFieldSchema.avro";
    deleteDirectory(new File(output));
    String [] queries = {
       " avro = LOAD '" + Util.encodeEscape(testRecordFile) + " ' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();",
       " avro1 = FILTER avro BY member_id > 1211;",
       " avro2 = FOREACH avro1 GENERATE member_id, browser_id, tracking_time, act_content ;",
       " STORE avro2 INTO '" + output + "' " +
             " USING org.apache.pig.piggybank.storage.avro.AvroStorage (" +
             "'{\"data\":  \"" + Util.encodeEscape(testRecordFile) + "\" ," +
             "  \"field0\": \"int\", " +
              " \"field1\":  \"def:browser_id\", " +
             "  \"field3\": \"def:act_content\" " +
            " }');"
        };
    testAvroStorage( queries);
    verifyResults(output, expected);
}
 
Author: sigmoidanalytics | Project: spork | Lines: 22 | Source: TestAvroStorage.java


Example 5: testLogicPartitionFilter

import org.apache.pig.test.Util; // import the required package/class
/**
 * Test that we can filter by a logic partition based on a range, in this
 * case block<=2
 * 
 * @throws IOException
 */
@Test
public void testLogicPartitionFilter() throws IOException {

    server.registerQuery("a = LOAD '" + Util.encodeEscape(logicPartitionDir.getAbsolutePath())
            + "' using " + allLoaderName + "('block<=2')"
            + " as (q:float, p:float);");

    server.registerQuery("r = FOREACH a GENERATE q, p;");

    Iterator<Tuple> it = server.openIterator("r");

    int count = 0;

    while (it.hasNext()) {
        count++;
        Tuple t = it.next();
        // System.out.println(count + " : " + t.toDelimitedString(","));
        assertEquals(2, t.size());

    }

    // only 2 partitions are used in the query block=3 is filtered out.
    assertEquals(fileRecords * 2 * fileTypes.length, count);

}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 32 | Source: TestAllLoader.java


Example 6: testRecursiveRecordWithSchemaCheck

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testRecursiveRecordWithSchemaCheck() throws IOException {
    // Verify that recursive records cannot be stored if schema check is enabled.
    String output= outbasedir + "testRecursiveWithSchemaCheck";
    deleteDirectory(new File(output));
    String [] queries = {
      " in = LOAD '" + Util.encodeEscape(testRecursiveRecordInUnion) +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();",
      " STORE in INTO '" + output +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage (" +
          " 'schema', '" + recursiveRecordInUnion + "' );"
       };
    try {
        testAvroStorage(queries);
        Assert.fail("Negative test to test an exception. Should not be succeeding!");
    } catch (IOException e) {
        // An IOException is thrown by AvroStorage during schema check due to incompatible
        // data types.
        assertTrue(e.getMessage().contains("bytearray is not compatible with avro"));
    }
}
 
Author: sigmoidanalytics | Project: spork | Lines: 22 | Source: TestAvroStorage.java


Example 7: runQuery

import org.apache.pig.test.Util; // import the required package/class
private void runQuery(String outputPath, String compressionType)
      throws Exception, ExecException, IOException, FrontendException {
   
   // create a data file
   String filename = TestHelper.createTempFile(data, "");
   PigServer pig = new PigServer(LOCAL);
   filename = filename.replace("\\", "\\\\");
   patternString = patternString.replace("\\", "\\\\");
   String query = "A = LOAD '" + Util.encodeEscape(filename)
         + "' USING PigStorage(',') as (a,b,c);";

   String query2 = "STORE A INTO '" + Util.encodeEscape(outputPath)
         + "' USING org.apache.pig.piggybank.storage.MultiStorage" + "('"
         + Util.encodeEscape(outputPath) + "','0', '" + compressionType + "', '\\t');";

   // Run Pig
   pig.setBatchOn();
   pig.registerQuery(query);
   pig.registerQuery(query2);

   pig.executeBatch();
}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 23 | Source: TestMultiStorageCompression.java


Example 8: testReadingSingleFileNoProjections

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testReadingSingleFileNoProjections() throws IOException {
    String funcSpecString = "org.apache.pig.piggybank.storage.HiveColumnarLoader('f1 string,f2 string,f3 string')";

    String singlePartitionedFile = simpleDataFile.getAbsolutePath();

    PigServer server = new PigServer(ExecType.LOCAL);
    server.setBatchOn();
    server.registerFunction("org.apache.pig.piggybank.storage.HiveColumnarLoader",
            new FuncSpec(funcSpecString));

    server.registerQuery("a = LOAD '" + Util.encodeEscape(singlePartitionedFile) + "' using " + funcSpecString
            + ";");

    Iterator<Tuple> result = server.openIterator("a");

    int count = 0;
    while (result.hasNext()) { // guard with hasNext(); Iterator.next() is not guaranteed to return null at the end
        Tuple t = result.next();
        assertEquals(3, t.size());
        assertEquals(DataType.CHARARRAY, t.getType(0));
        count++;
    }

    Assert.assertEquals(simpleRowCount, count);
}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 27 | Source: TestHiveColumnarLoader.java


Example 9: testNumerOfColumnsWhenDatePartitionedFiles

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testNumerOfColumnsWhenDatePartitionedFiles() throws IOException {
    int count = 0;

    String funcSpecString = "org.apache.pig.piggybank.storage.HiveColumnarLoader('f1 string,f2 string,f3 string'"
            + ", '" + startingDate + ":" + endingDate + "')";

    System.out.println(funcSpecString);

    PigServer server = new PigServer(ExecType.LOCAL);
    server.setBatchOn();
    server.registerFunction("org.apache.pig.piggybank.storage.HiveColumnarLoader",
            new FuncSpec(funcSpecString));

    server.registerQuery("a = LOAD '" + Util.encodeEscape(datePartitionedDir.getAbsolutePath()) + "' using "
            + funcSpecString + ";");
    Iterator<Tuple> result = server.openIterator("a");
    while (result.hasNext()) { // guard with hasNext(); Iterator.next() is not guaranteed to return null at the end
        Tuple t = result.next();
        Assert.assertEquals(4, t.size());
        count++;
    }

    Assert.assertEquals(datePartitionedRowCount, count);
}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 26 | Source: TestHiveColumnarLoader.java


Example 10: testRecursiveRecordWithSchemaFile

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testRecursiveRecordWithSchemaFile() throws IOException {
    // Verify that recursive records cannot be stored if avro schema is specified by 'schema_file'.
    String output= outbasedir + "testRecursiveWithSchemaFile";
    deleteDirectory(new File(output));
    String [] queries = {
      " in = LOAD '" + Util.encodeEscape(testRecursiveRecordInUnion) +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();",
      " STORE in INTO '" + output +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage (" +
          " 'no_schema_check'," +
          " 'schema_file', '" + Util.encodeEscape(testRecursiveRecordInUnionSchema) + "' );"
       };
    try {
        testAvroStorage(queries);
        Assert.fail("Negative test to test an exception. Should not be succeeding!");
    } catch (FrontendException e) {
        // The IOException thrown by AvroSchemaManager for recursive record is caught
        // by the Pig frontend, and FrontendException is re-thrown.
        assertTrue(e.getMessage().contains("could not instantiate 'org.apache.pig.piggybank.storage.avro.AvroStorage'"));
    }
}
 
Author: sigmoidanalytics | Project: spork | Lines: 23 | Source: TestAvroStorage.java


Example 11: testRecursiveRecordInArray

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testRecursiveRecordInArray() throws IOException {
    // Verify that recursive records in array can be loaded/saved.
    String output= outbasedir + "testRecursiveRecordInArray";
    String expected = testRecursiveRecordInArray;
    deleteDirectory(new File(output));
    String [] queries = {
      " in = LOAD '" + Util.encodeEscape(testRecursiveRecordInArray) +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();",
      " STORE in INTO '" + output +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage (" +
          " 'no_schema_check'," +
          " 'schema', '" + recursiveRecordInArray + "' );"
       };
    testAvroStorage(queries);
    verifyResults(output, expected);
}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 18 | Source: TestAvroStorage.java


Example 12: testRecursiveRecordInRecord

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testRecursiveRecordInRecord() throws IOException {
    // Verify that recursive records in record can be loaded/saved.
    String output= outbasedir + "testRecursiveRecordInRecord";
    String expected = testRecursiveRecordInRecord;
    deleteDirectory(new File(output));
    String [] queries = {
      " in = LOAD '" + Util.encodeEscape(testRecursiveRecordInRecord) +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();",
      " STORE in INTO '" + output +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage (" +
          " 'no_schema_check'," +
          " 'schema', '" + Util.encodeEscape(recursiveRecordInRecord) + "' );"
       };
    testAvroStorage(queries);
    verifyResults(output, expected);
}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 18 | Source: TestAvroStorage.java


Example 13: createFile

import org.apache.pig.test.Util; // import the required package/class
private void createFile() throws IOException {
  PrintWriter w = new PrintWriter(new FileWriter(INPUT_FILE));
  w.println("100\tapple\taaa1");
  w.println("200\torange\tbbb1");
  w.println("300\tstrawberry\tccc1");

  w.println("101\tapple\taaa2");
  w.println("201\torange\tbbb2");
  w.println("301\tstrawberry\tccc2");

  w.println("102\tapple\taaa3");
  w.println("202\torange\tbbb3");
  w.println("302\tstrawberry\tccc3");

  w.close();
  Util.deleteFile(cluster, INPUT_FILE);
  Util.copyFromLocalToCluster(cluster, INPUT_FILE, INPUT_FILE);
}
 
Author: sigmoidanalytics | Project: spork | Lines: 19 | Source: TestMultiStorage.java


Example 14: testSchemaResetter

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testSchemaResetter() throws IOException {
    new File("build/test/tmp/").mkdirs();
    Util.createLocalInputFile("build/test/tmp/TestSchemaResetter.pig", new String[] {
            "A = LOAD 'foo' AS (group:tuple(uid, dst_id));",
            "edges_both = FOREACH A GENERATE",
            "    group.uid AS src_id,",
            "    group.dst_id AS dst_id;",
            "both_counts = GROUP edges_both BY src_id;",
            "both_counts = FOREACH both_counts GENERATE",
            "    group AS src_id, SIZE(edges_both) AS size_both;",
            "",
            "edges_bq = FOREACH A GENERATE",
            "    group.uid AS src_id,",
            "    group.dst_id AS dst_id;",
            "bq_counts = GROUP edges_bq BY src_id;",
            "bq_counts = FOREACH bq_counts GENERATE",
            "    group AS src_id, SIZE(edges_bq) AS size_bq;",
            "",
            "per_user_set_sizes = JOIN bq_counts BY src_id LEFT OUTER, both_counts BY src_id;",
            "store per_user_set_sizes into  'foo';"
            });
    assertEquals(0, PigRunner.run(new String[] {"-x", "local", "-c", "build/test/tmp/TestSchemaResetter.pig" } , null).getReturnCode());
}
 
Author: sigmoidanalytics | Project: spork | Lines: 25 | Source: TestSchemaResetter.java


Example 15: testRecursiveRecordWithNoAvroSchema

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testRecursiveRecordWithNoAvroSchema() throws IOException {
    // Verify that recursive records cannot be stored,
    // if no avro schema is specified either via 'schema' or 'same'.
    String output= outbasedir + "testRecursiveRecordWithNoAvroSchema";
    deleteDirectory(new File(output));
    String [] queries = {
      " in = LOAD '" + Util.encodeEscape(testRecursiveRecordInUnion) +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();",
      " STORE in INTO '" + output +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage (" +
          " 'no_schema_check' );"
       };
    // Since Avro schema is not specified via the 'schema' parameter, it is
    // derived from Pig schema. Job is expected to fail because this derived
    // Avro schema (bytes) is not compatible with data (tuples).
    testAvroStorage(true, queries);
}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 19 | Source: TestAvroStorage.java


Example 16: testRecursiveRecordWithData

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testRecursiveRecordWithData() throws IOException {
    // Verify that recursive records cannot be stored if avro schema is specified by 'data'.
    String output= outbasedir + "testRecursiveWithData";
    deleteDirectory(new File(output));
    String [] queries = {
      " in = LOAD '" + Util.encodeEscape(testRecursiveRecordInUnion) +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();",
      " STORE in INTO '" + output +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage (" +
          " 'no_schema_check'," +
          " 'data', '" + Util.encodeEscape(testRecursiveRecordInUnion) + "' );"
       };
    try {
        testAvroStorage(queries);
        Assert.fail("Negative test to test an exception. Should not be succeeding!");
    } catch (FrontendException e) {
        // The IOException thrown by AvroSchemaManager for recursive record is caught
        // by the Pig frontend, and FrontendException is re-thrown.
        assertTrue(e.getMessage().contains("could not instantiate 'org.apache.pig.piggybank.storage.avro.AvroStorage'"));
    }
}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 23 | Source: TestAvroStorage.java


Example 17: testGenericUnion

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testGenericUnion() throws IOException {
    // Verify that a FrontendException is thrown if schema has generic union.
    String output= outbasedir + "testGenericUnion";
    deleteDirectory(new File(output));
    String [] queries = {
      " in = LOAD '" + Util.encodeEscape(testGenericUnionFile) +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();",
      " STORE in INTO '" + output +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();"
       };
    try {
        testAvroStorage(queries);
        Assert.fail("Negative test to test an exception. Should not be succeeding!");
    } catch (FrontendException e) {
        // The IOException thrown by AvroStorage for generic union is caught
        // by the Pig frontend, and FrontendException is re-thrown.
        assertTrue(e.getMessage().contains("Cannot get schema"));
    }
}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 21 | Source: TestAvroStorage.java


Example 18: testRecursiveRecordInUnion

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testRecursiveRecordInUnion() throws IOException {
    // Verify that recursive records in union can be loaded/saved.
    String output= outbasedir + "testRecursiveRecordInUnion";
    String expected = testRecursiveRecordInUnion;
    deleteDirectory(new File(output));
    String [] queries = {
      " in = LOAD '" + Util.encodeEscape(testRecursiveRecordInUnion) +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();",
      " STORE in INTO '" + output +
          "' USING org.apache.pig.piggybank.storage.avro.AvroStorage (" +
          " 'no_schema_check'," +
          " 'schema', '" + recursiveRecordInUnion + "' );"
       };
    testAvroStorage(queries);
    verifyResults(output, expected);
}
 
Author: sigmoidanalytics | Project: spork | Lines: 18 | Source: TestAvroStorage.java


Example 19: testGlob6

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testGlob6() throws IOException {
    // Verify that an IOException is thrown if no files are matched by the glob pattern.
    String output = outbasedir + "testGlob6";
    deleteDirectory(new File(output));
    String [] queries = {
       " in = LOAD '" + Util.encodeEscape(testNoMatchedFiles) + "' USING org.apache.pig.piggybank.storage.avro.AvroStorage ();",
       " STORE in INTO '" + output + "' USING org.apache.pig.piggybank.storage.avro.AvroStorage (" +
           "   'schema', '{\"type\":\"array\",\"items\":\"float\"}'  );"
        };
    try {
        testAvroStorage(queries);
        Assert.fail("Negative test to test an exception. Should not be succeeding!");
    } catch (JobCreationException e) {
        // The IOException thrown by AvroStorage when no input file is found is caught
        // by the Pig backend, and a JobCreationException (a subclass of IOException)
        // is re-thrown while creating the job configuration.
        assertEquals(e.getMessage(), "Internal error creating job configuration.");
    }
}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 21 | Source: TestAvroStorage.java


Example 20: testRecordWithFieldSchemaFromText

import org.apache.pig.test.Util; // import the required package/class
@Test
public void testRecordWithFieldSchemaFromText() throws IOException {
    PigSchema2Avro.setTupleIndex(1);
    String output= outbasedir + "testRecordWithFieldSchemaFromText";
    String expected = basedir + "expected_testRecordWithFieldSchema.avro";
    deleteDirectory(new File(output));
    String [] queries = {
      " avro = LOAD '" + Util.encodeEscape(testTextFile) + "' AS (member_id:int, browser_id:chararray, tracking_time:long, act_content:bag{inner:tuple(key:chararray, value:chararray)});",
      " avro1 = FILTER avro BY member_id > 1211;",
      " avro2 = FOREACH avro1 GENERATE member_id, browser_id, tracking_time, act_content ;",
      " STORE avro2 INTO '" + output + "' " +
            " USING org.apache.pig.piggybank.storage.avro.AvroStorage (" +
            "'{\"data\":  \"" + Util.encodeEscape(testRecordFile) + "\" ," +
            "  \"field0\": \"int\", " +
             " \"field1\":  \"def:browser_id\", " +
            "  \"field3\": \"def:act_content\" " +
           " }');"
       };
    testAvroStorage( queries);
    verifyResults(output, expected);
}
 
Author: sigmoidanalytics | Project: spork-streaming | Lines: 22 | Source: TestAvroStorage.java



Note: the org.apache.pig.test.Util examples in this article were collected from GitHub, MSDocs, and other source-code hosting platforms. The snippets are taken from open-source projects contributed by their respective authors; copyright remains with the original authors, and reuse or redistribution is subject to each project's license. Please do not reproduce without permission.

