Java TableId Class Code Examples


This article collects typical usage examples of the Java class com.google.cloud.bigquery.TableId. If you are wondering what the TableId class does, how to use it, or where to find working examples, the curated snippets below should help.



The TableId class belongs to the com.google.cloud.bigquery package. Twenty code examples of the class are shown below, ordered roughly by popularity.
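Before working through the examples, here is a minimal sketch of the two common ways a TableId is constructed and how its components are read back (the project, dataset, and table names are placeholders):

import com.google.cloud.bigquery.TableId;

public class TableIdDemo {
  public static void main(String[] args) {
    // Dataset-qualified ID; the project is left unset and is resolved
    // against the BigQuery client's default project at request time
    TableId relativeId = TableId.of("my_dataset", "my_table");

    // Fully qualified ID with an explicit project
    TableId fullId = TableId.of("my-project-id", "my_dataset", "my_table");

    System.out.println(relativeId.getProject()); // null (resolved by the client)
    System.out.println(fullId.getProject());     // my-project-id
    System.out.println(fullId.getDataset());     // my_dataset
    System.out.println(fullId.getTable());       // my_table
  }
}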

Example 1: deleteTableFromId

import com.google.cloud.bigquery.TableId; // import the required package/class
/**
 * Example of deleting a table.
 */
// [TARGET delete(TableId)]
// [VARIABLE "my_project_id"]
// [VARIABLE "my_dataset_name"]
// [VARIABLE "my_table_name"]
public Boolean deleteTableFromId(String projectId, String datasetName, String tableName) {
  // [START deleteTableFromId]
  TableId tableId = TableId.of(projectId, datasetName, tableName);
  Boolean deleted = bigquery.delete(tableId);
  if (deleted) {
    // the table was deleted
  } else {
    // the table was not found
  }
  // [END deleteTableFromId]
  return deleted;
}
 
Author: michael-hll | Project: BigQueryStudy | Source: BigQuerySnippets.java


Example 2: createTable

import com.google.cloud.bigquery.TableId; // import the required package/class
/**
 * Example of creating a table.
 */
// [TARGET create(TableInfo, TableOption...)]
// [VARIABLE "my_dataset_name"]
// [VARIABLE "my_table_name"]
// [VARIABLE "string_field"]
public Table createTable(String datasetName, String tableName, Schema schema) {
  // [START createTable]
  TableId tableId = TableId.of(datasetName, tableName);
  // Build a table definition from the supplied schema
  TableDefinition tableDefinition = StandardTableDefinition.of(schema);
  TableInfo tableInfo = TableInfo.newBuilder(tableId, tableDefinition).build();
  Table table = bigquery.create(tableInfo);
  // [END createTable]
  return table;
}
 
Author: michael-hll | Project: BigQueryStudy | Source: BigQuerySnippets.java


Example 3: listTableDataFromId

import com.google.cloud.bigquery.TableId; // import the required package/class
/**
 * Example of listing table rows, specifying the page size.
 */
// [TARGET listTableData(TableId, TableDataListOption...)]
// [VARIABLE "my_dataset_name"]
// [VARIABLE "my_table_name"]
public Page<List<FieldValue>> listTableDataFromId(String datasetName, String tableName) {
  // [START listTableDataFromId]
  TableId tableIdObject = TableId.of(datasetName, tableName);
  Page<List<FieldValue>> tableData =
      bigquery.listTableData(tableIdObject, TableDataListOption.pageSize(100));
  Iterator<List<FieldValue>> rowIterator = tableData.iterateAll();
  while (rowIterator.hasNext()) {
    List<FieldValue> row = rowIterator.next();
    // do something with the row
  }
  // [END listTableDataFromId]
  return tableData;
}
 
Author: michael-hll | Project: BigQueryStudy | Source: BigQuerySnippets.java


Example 4: testRetrieveSchema

import com.google.cloud.bigquery.TableId; // import the required package/class
@Test
public void testRetrieveSchema() throws Exception {
  final TableId table = TableId.of("test", "kafka_topic");
  final String testTopic = "kafka-topic";
  final String testSubject = "kafka-topic-value";
  final String testAvroSchemaString =
      "{\"type\": \"record\", "
      + "\"name\": \"testrecord\", "
      + "\"fields\": [{\"name\": \"f1\", \"type\": \"string\"}]}";
  final SchemaMetadata testSchemaMetadata = new SchemaMetadata(1, 1, testAvroSchemaString);

  SchemaRegistryClient schemaRegistryClient = mock(SchemaRegistryClient.class);
  when(schemaRegistryClient.getLatestSchemaMetadata(testSubject)).thenReturn(testSchemaMetadata);

  SchemaRegistrySchemaRetriever testSchemaRetriever = new SchemaRegistrySchemaRetriever(
      schemaRegistryClient,
      new AvroData(0)
  );

  Schema expectedKafkaConnectSchema =
      SchemaBuilder.struct().field("f1", Schema.STRING_SCHEMA).name("testrecord").build();

  assertEquals(expectedKafkaConnectSchema, testSchemaRetriever.retrieveSchema(table, testTopic));
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: SchemaRegistrySchemaRetrieverTest.java


Example 5: getRecordTable

import com.google.cloud.bigquery.TableId; // import the required package/class
private PartitionedTableId getRecordTable(SinkRecord record) {
  TableId baseTableId = topicsToBaseTableIds.get(record.topic());

  PartitionedTableId.Builder builder = new PartitionedTableId.Builder(baseTableId);
  if (useMessageTimeDatePartitioning) {
    if (record.timestampType() == TimestampType.NO_TIMESTAMP_TYPE) {
      throw new ConnectException("Message has no timestamp type, cannot use message timestamp to partition.");
    }

    builder.setDayPartition(record.timestamp());
  } else {
    builder.setDayPartitionForNow();
  }

  return builder.build();
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: BigQuerySinkTask.java


Example 6: testRetrieveSchemaWithMultipleSchemasSucceeds

import com.google.cloud.bigquery.TableId; // import the required package/class
@Test
public void testRetrieveSchemaWithMultipleSchemasSucceeds() {
    final String floatSchemaTopic = "test-float32";
    final String intSchemaTopic = "test-int32";
    final TableId floatTableId = getTableId("testFloatTable", "testFloatDataset");
    final TableId intTableId = getTableId("testIntTable", "testIntDataset");
    SchemaRetriever retriever = new MemorySchemaRetriever();
    retriever.configure(new HashMap<>());

    Schema expectedIntSchema = Schema.INT32_SCHEMA;
    Schema expectedFloatSchema = Schema.OPTIONAL_FLOAT32_SCHEMA;
    retriever.setLastSeenSchema(floatTableId, floatSchemaTopic, expectedFloatSchema);
    retriever.setLastSeenSchema(intTableId, intSchemaTopic, expectedIntSchema);

    Assert.assertEquals(retriever.retrieveSchema(floatTableId, floatSchemaTopic), expectedFloatSchema);
    Assert.assertEquals(retriever.retrieveSchema(intTableId, intSchemaTopic), expectedIntSchema);
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: MemorySchemaRetrieverTest.java


Example 7: testTableIdBuilder

import com.google.cloud.bigquery.TableId; // import the required package/class
@Test
public void testTableIdBuilder() {
  final String project = "project";
  final String dataset = "dataset";
  final String table = "table";
  final TableId tableId = TableId.of(project, dataset, table);

  final PartitionedTableId partitionedTableId = new PartitionedTableId.Builder(tableId).build();

  Assert.assertEquals(project, partitionedTableId.getProject());
  Assert.assertEquals(dataset, partitionedTableId.getDataset());
  Assert.assertEquals(table, partitionedTableId.getBaseTableName());
  Assert.assertEquals(table, partitionedTableId.getFullTableName());

  Assert.assertEquals(tableId, partitionedTableId.getBaseTableId());
  Assert.assertEquals(tableId, partitionedTableId.getFullTableId());
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: PartitionedTableIdTest.java


Example 8: testWithPartition

import com.google.cloud.bigquery.TableId; // import the required package/class
@Test
public void testWithPartition() {
  final String dataset = "dataset";
  final String table = "table";
  final LocalDate partitionDate = LocalDate.of(2016, 9, 21);

  final PartitionedTableId partitionedTableId =
      new PartitionedTableId.Builder(dataset, table).setDayPartition(partitionDate).build();

  final String expectedPartition = "20160921";

  Assert.assertEquals(dataset, partitionedTableId.getDataset());
  Assert.assertEquals(table, partitionedTableId.getBaseTableName());
  Assert.assertEquals(table + "$" + expectedPartition, partitionedTableId.getFullTableName());

  final TableId expectedBaseTableId = TableId.of(dataset, table);
  final TableId expectedFullTableId = TableId.of(dataset, table + "$" + expectedPartition);

  Assert.assertEquals(expectedBaseTableId, partitionedTableId.getBaseTableId());
  Assert.assertEquals(expectedFullTableId, partitionedTableId.getFullTableId());
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: PartitionedTableIdTest.java


Example 9: testWithEpochTimePartition

import com.google.cloud.bigquery.TableId; // import the required package/class
@Test
public void testWithEpochTimePartition() {
  final String dataset = "dataset";
  final String table = "table";

  final long utcTime = 1509007584334L;

  final PartitionedTableId partitionedTableId =
          new PartitionedTableId.Builder(dataset, table).setDayPartition(utcTime).build();

  final String expectedPartition = "20171026";

  Assert.assertEquals(dataset, partitionedTableId.getDataset());
  Assert.assertEquals(table, partitionedTableId.getBaseTableName());
  Assert.assertEquals(table + "$" + expectedPartition, partitionedTableId.getFullTableName());

  final TableId expectedBaseTableId = TableId.of(dataset, table);
  final TableId expectedFullTableId = TableId.of(dataset, table + "$" + expectedPartition);

  Assert.assertEquals(expectedBaseTableId, partitionedTableId.getBaseTableId());
  Assert.assertEquals(expectedFullTableId, partitionedTableId.getFullTableId());
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: PartitionedTableIdTest.java


Example 10: testNonAutoCreateTablesFailure

import com.google.cloud.bigquery.TableId; // import the required package/class
@Test(expected = BigQueryConnectException.class)
public void testNonAutoCreateTablesFailure() {
  final String dataset = "scratch";
  final String existingTableTopic = "topic-with-existing-table";
  final String nonExistingTableTopic = "topic-without-existing-table";
  final TableId existingTable = TableId.of(dataset, "topic_with_existing_table");
  final TableId nonExistingTable = TableId.of(dataset, "topic_without_existing_table");

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConnectorConfig.TABLE_CREATE_CONFIG, "false");
  properties.put(BigQuerySinkConfig.SANITIZE_TOPICS_CONFIG, "true");
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, String.format(".*=%s", dataset));
  properties.put(
      BigQuerySinkConfig.TOPICS_CONFIG,
      String.format("%s, %s", existingTableTopic, nonExistingTableTopic)
  );

  BigQuery bigQuery = mock(BigQuery.class);
  Table fakeTable = mock(Table.class);
  when(bigQuery.getTable(existingTable)).thenReturn(fakeTable);
  when(bigQuery.getTable(nonExistingTable)).thenReturn(null);

  BigQuerySinkConnector testConnector = new BigQuerySinkConnector(bigQuery);
  testConnector.start(properties);
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: BigQuerySinkConnectorTest.java


Example 11: testSimplePutWhenSchemaRetrieverIsNotNull

import com.google.cloud.bigquery.TableId; // import the required package/class
@Test
public void testSimplePutWhenSchemaRetrieverIsNotNull() {
  final String topic = "test-topic";

  Map<String, String> properties = propertiesFactory.getProperties();
  properties.put(BigQuerySinkConfig.TOPICS_CONFIG, topic);
  properties.put(BigQuerySinkConfig.DATASETS_CONFIG, ".*=scratch");

  BigQuery bigQuery = mock(BigQuery.class);
  SinkTaskContext sinkTaskContext = mock(SinkTaskContext.class);
  InsertAllResponse insertAllResponse = mock(InsertAllResponse.class);

  when(bigQuery.insertAll(anyObject())).thenReturn(insertAllResponse);
  when(insertAllResponse.hasErrors()).thenReturn(false);

  SchemaRetriever schemaRetriever = mock(SchemaRetriever.class);

  BigQuerySinkTask testTask = new BigQuerySinkTask(bigQuery, schemaRetriever);
  testTask.initialize(sinkTaskContext);
  testTask.start(properties);

  testTask.put(Collections.singletonList(spoofSinkRecord(topic)));
  testTask.flush(Collections.emptyMap());
  verify(bigQuery, times(1)).insertAll(any(InsertAllRequest.class));
  verify(schemaRetriever, times(1)).setLastSeenSchema(any(TableId.class), any(String.class), any(Schema.class));
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: BigQuerySinkTaskTest.java


Example 12: runQueryPermanentTable

import com.google.cloud.bigquery.TableId; // import the required package/class
public static void runQueryPermanentTable(
    String queryString,
    String destinationDataset,
    String destinationTable,
    boolean allowLargeResults) throws TimeoutException, InterruptedException {
  QueryJobConfiguration queryConfig =
      QueryJobConfiguration.newBuilder(queryString)
          // Save the results of the query to a permanent table. See:
          // https://cloud.google.com/bigquery/docs/writing-results#permanent-table
          .setDestinationTable(TableId.of(destinationDataset, destinationTable))
          // Allow results larger than the maximum response size.
          // If true, a destination table must be set. See: 
          // https://cloud.google.com/bigquery/docs/writing-results#large-results
          .setAllowLargeResults(allowLargeResults)
          .build();

  runQuery(queryConfig);
}
 
Author: GoogleCloudPlatform | Project: java-docs-samples | Source: QuerySample.java
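For context, a hypothetical invocation of the method above might look like the following (dataset and table names are placeholders; note that BigQuery honors allowLargeResults only for legacy SQL queries, hence the legacy-style table reference):

runQueryPermanentTable(
    "SELECT corpus, COUNT(*) FROM [bigquery-public-data:samples.shakespeare] GROUP BY corpus",
    "my_dataset",
    "my_results_table",
    true);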


Example 13: getTableFromId

import com.google.cloud.bigquery.TableId; // import the required package/class
/**
 * Example of getting a table.
 */
// [TARGET getTable(TableId, TableOption...)]
// [VARIABLE "my_project_id"]
// [VARIABLE "my_dataset_name"]
// [VARIABLE "my_table_name"]
public Table getTableFromId(String projectId, String datasetName, String tableName) {
  // [START getTableFromId]
  TableId tableId = TableId.of(projectId, datasetName, tableName);
  Table table = bigquery.getTable(tableId);
  // [END getTableFromId]
  return table;
}
 
Author: michael-hll | Project: BigQueryStudy | Source: BigQuerySnippets.java


Example 14: writeToTable

import com.google.cloud.bigquery.TableId; // import the required package/class
/**
 * Example of creating a channel with which to write to a table.
 */
// [TARGET writer(WriteChannelConfiguration)]
// [VARIABLE "my_dataset_name"]
// [VARIABLE "my_table_name"]
// [VARIABLE "StringValue1\nStringValue2\n"]
public long writeToTable(String datasetName, String tableName, String csvData)
    throws IOException, InterruptedException, TimeoutException {
  // [START writeToTable]
  TableId tableId = TableId.of(datasetName, tableName);
  WriteChannelConfiguration writeChannelConfiguration =
      WriteChannelConfiguration.newBuilder(tableId)
          .setFormatOptions(FormatOptions.csv())
          .build();
  TableDataWriteChannel writer = bigquery.writer(writeChannelConfiguration);
  // Write data to the channel
  try {
    writer.write(ByteBuffer.wrap(csvData.getBytes(Charsets.UTF_8)));
  } finally {
    writer.close();
  }
  // Get the load job created by the write channel
  Job job = writer.getJob();
  job = job.waitFor();
  LoadStatistics stats = job.getStatistics();
  return stats.getOutputRows();
  // [END writeToTable]
}
 
Author: michael-hll | Project: BigQueryStudy | Source: BigQuerySnippets.java


Example 15: writeFileToTable

import com.google.cloud.bigquery.TableId; // import the required package/class
/**
 * Example of writing a local file to a table.
 */
// [TARGET writer(WriteChannelConfiguration)]
// [VARIABLE "my_dataset_name"]
// [VARIABLE "my_table_name"]
// [VARIABLE FileSystems.getDefault().getPath(".", "my-data.csv")]
public long writeFileToTable(String datasetName, String tableName, Path fileFullPath, FormatOptions formatOptions)
    throws IOException, InterruptedException, TimeoutException {
  // [START writeFileToTable]
  TableId tableId = TableId.of(datasetName, tableName);
  WriteChannelConfiguration writeChannelConfiguration =
      WriteChannelConfiguration.newBuilder(tableId)
          .setFormatOptions(formatOptions)
          .build();
  TableDataWriteChannel writer = bigquery.writer(writeChannelConfiguration);
  // Write data to writer
  try (OutputStream stream = Channels.newOutputStream(writer)) {
    Files.copy(fileFullPath, stream);
  }
  // Get the load job created by the write channel
  Job job = writer.getJob();
  job = job.waitFor();
  if (job.getStatus().getExecutionErrors() != null) {
    StringBuilder errors = new StringBuilder();
    for (BigQueryError error : job.getStatus().getExecutionErrors()) {
      errors.append("error: ").append(error.getMessage())
          .append(", reason: ").append(error.getReason())
          .append(", location: ").append(error.getLocation()).append("; ");
    }
    throw new IOException(errors.toString());
  }
  LoadStatistics stats = job.getStatistics();
  return stats.getOutputRows();
  // [END writeFileToTable]
}
 
Author: michael-hll | Project: BigQueryStudy | Source: BigQuerySnippets.java


Example 16: insertAll

import com.google.cloud.bigquery.TableId; // import the required package/class
/**
 * Example of inserting rows into a table without running a load job.
 */
// [TARGET insertAll(InsertAllRequest)]
// [VARIABLE "my_dataset_name"]
// [VARIABLE "my_table_name"]
public InsertAllResponse insertAll(String datasetName, String tableName) {
  // [START insertAll]
  TableId tableId = TableId.of(datasetName, tableName);
  // Values of the row to insert
  Map<String, Object> rowContent = new HashMap<String, Object>();
  rowContent.put("booleanField", true);
  // Bytes are passed in base64
  rowContent.put("bytesField", "Cg0NDg0="); // 0xA, 0xD, 0xD, 0xE, 0xD in base64
  // Records are passed as a map
  Map<String, Object> recordsContent = new HashMap<String, Object>();
  recordsContent.put("stringField", "Hello, World!");
  rowContent.put("recordField", recordsContent);
  InsertAllResponse response = bigquery.insertAll(InsertAllRequest.newBuilder(tableId)
      .addRow("rowId", rowContent)
      // More rows can be added in the same RPC by invoking .addRow() on the builder
      .build());
  if (response.hasErrors()) {
    // If any of the insertions failed, this lets you inspect the errors
    for (Entry<Long, List<BigQueryError>> entry : response.getInsertErrors().entrySet()) {
      // inspect row error
    }
  }
  // [END insertAll]
  return response;
}
 
Author: michael-hll | Project: BigQueryStudy | Source: BigQuerySnippets.java


Example 17: retrieveSchema

import com.google.cloud.bigquery.TableId; // import the required package/class
@Override
public Schema retrieveSchema(TableId table, String topic) {
  try {
    String subject = getSubject(topic);
    logger.debug("Retrieving schema information for topic {} with subject {}", topic, subject);
    SchemaMetadata latestSchemaMetadata = schemaRegistryClient.getLatestSchemaMetadata(subject);
    org.apache.avro.Schema avroSchema = new Parser().parse(latestSchemaMetadata.getSchema());
    return avroData.toConnectSchema(avroSchema);
  } catch (IOException | RestClientException exception) {
    throw new ConnectException(
        "Exception encountered while trying to fetch latest schema metadata from Schema Registry",
        exception
    );
  }
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: SchemaRegistrySchemaRetriever.java


Example 18: retrieveSchema

import com.google.cloud.bigquery.TableId; // import the required package/class
@Override
public Schema retrieveSchema(TableId table, String topic) {
  String tableName = table.getTable();
  Schema schema = schemaCache.get(getCacheKey(tableName, topic));
  if (schema != null) {
    return schema;
  }

  // By returning an empty schema, the calling code will create a table without a schema.
  // When we receive our first message and try to add it, we'll hit the invalid-schema case
  // and update the table's schema from the message.
  return SchemaBuilder.struct().build();
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: MemorySchemaRetriever.java


Example 19: getBaseTablesToTopics

import com.google.cloud.bigquery.TableId; // import the required package/class
/**
 * Return a Map detailing which topic each base table corresponds to. If sanitization has been
 * enabled, there is a possibility that there are multiple possible schemas a table could
 * correspond to. In that case, each table must only be written to by one topic, or an exception
 * is thrown.
 *
 * @param config Config that contains properties used to generate the map
 * @return The resulting Map from TableId to topic name.
 */
public static Map<TableId, String> getBaseTablesToTopics(BigQuerySinkConfig config) {
  Map<String, TableId> topicsToTableIds = getTopicsToTables(config);
  Map<TableId, String> tableIdsToTopics = new HashMap<>();
  for (Map.Entry<String, TableId> topicToTableId : topicsToTableIds.entrySet()) {
    if (tableIdsToTopics.put(topicToTableId.getValue(), topicToTableId.getKey()) != null) {
      throw new ConfigException("Cannot have multiple topics writing to the same table");
    }
  }
  return tableIdsToTopics;
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: TopicToTableResolver.java


Example 20: ensureExistingTables

import com.google.cloud.bigquery.TableId; // import the required package/class
private void ensureExistingTables(
    BigQuery bigQuery,
    SchemaManager schemaManager,
    Map<String, TableId> topicsToTableIds) {
  for (Map.Entry<String, TableId> topicToTableId : topicsToTableIds.entrySet()) {
    String topic = topicToTableId.getKey();
    TableId tableId = topicToTableId.getValue();
    if (bigQuery.getTable(tableId) == null) {
      logger.info("Table {} does not exist; attempting to create", tableId);
      schemaManager.createTable(tableId, topic);
    }
  }
}
 
Author: wepay | Project: kafka-connect-bigquery | Source: BigQuerySinkConnector.java



Note: The com.google.cloud.bigquery.TableId examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright of each snippet remains with its original authors; consult the corresponding project's license before using or redistributing the code. Please do not republish without permission.

