This article collects typical usage examples of the Java class org.apache.samza.SamzaException. If you are wondering what the SamzaException class is for, how to use it, or where to find examples of it in action, the curated code samples below should help.
The SamzaException class belongs to the org.apache.samza package. Twenty code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps our system recommend better Java code examples.
Example 1: RocksDbKeyValueReader
import org.apache.samza.SamzaException; // import the required package/class
/**
 * Construct the <code>RocksDbKeyValueReader</code> with the store's name,
 * database's path and Samza's config
 *
 * @param storeName name of the RocksDb store defined in the config file
 * @param dbPath path to the db directory
 * @param config Samza's config
 */
public RocksDbKeyValueReader(String storeName, String dbPath, Config config) {
  // get the key serde and value serde from the config
  JavaStorageConfig storageConfig = new JavaStorageConfig(config);
  JavaSerializerConfig serializerConfig = new JavaSerializerConfig(config);
  keySerde = getSerdeFromName(storageConfig.getStorageKeySerde(storeName), serializerConfig);
  valueSerde = getSerdeFromName(storageConfig.getStorageMsgSerde(storeName), serializerConfig);

  // get db options
  ArrayList<TaskName> taskNameList = new ArrayList<TaskName>();
  taskNameList.add(new TaskName("read-rocks-db"));
  SamzaContainerContext samzaContainerContext =
      new SamzaContainerContext("0", config, taskNameList, new MetricsRegistryMap());
  Options options = RocksDbOptionsHelper.options(config, samzaContainerContext);

  // open the db
  RocksDB.loadLibrary();
  try {
    db = RocksDB.openReadOnly(options, dbPath);
  } catch (RocksDBException e) {
    throw new SamzaException("can not open the rocksDb in " + dbPath, e);
  }
}
Developer ID: apache, Project: samza, Lines of code: 32, Source: RocksDbKeyValueReader.java
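For context, here is a minimal, hypothetical usage sketch of the reader above. The store name, database path, and serde registration keys are placeholders, the reader's package and its get(...) lookup method are assumptions, so adapt them to the actual API and your job configuration.

import java.util.HashMap;
import java.util.Map;
import org.apache.samza.config.Config;
import org.apache.samza.config.MapConfig;
import org.apache.samza.storage.kv.RocksDbKeyValueReader; // package is an assumption

public static void main(String[] args) {
  Map<String, String> cfg = new HashMap<>();
  // Register a string serde and point the (hypothetical) store at it.
  cfg.put("serializers.registry.string.class", "org.apache.samza.serializers.StringSerdeFactory");
  cfg.put("stores.my-store.key.serde", "string");
  cfg.put("stores.my-store.msg.serde", "string");
  Config config = new MapConfig(cfg);

  // Store name and db path are illustrative; get(...) is assumed to be the reader's lookup method.
  RocksDbKeyValueReader reader = new RocksDbKeyValueReader("my-store", "/tmp/state/my-store/db", config);
  Object value = reader.get("some-key");
  System.out.println(value);
}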
Example 2: obtainPartitionDescriptorMap
import org.apache.samza.SamzaException; // import the required package/class
static Map<Partition, List<String>> obtainPartitionDescriptorMap(String stagingDirectory, String streamName) {
  if (StringUtils.isBlank(stagingDirectory)) {
    LOG.info("Empty or null staging directory: {}", stagingDirectory);
    return Collections.emptyMap();
  }
  if (StringUtils.isBlank(streamName)) {
    throw new SamzaException(String.format("stream name (%s) is null or empty!", streamName));
  }
  Path path = PartitionDescriptorUtil.getPartitionDescriptorPath(stagingDirectory, streamName);
  try (FileSystem fs = path.getFileSystem(new Configuration())) {
    if (!fs.exists(path)) {
      return Collections.emptyMap();
    }
    try (FSDataInputStream fis = fs.open(path)) {
      String json = IOUtils.toString(fis, StandardCharsets.UTF_8);
      return PartitionDescriptorUtil.getDescriptorMapFromJson(json);
    }
  } catch (IOException e) {
    throw new SamzaException("Failed to read partition description from: " + path, e);
  }
}
Developer ID: apache, Project: samza, Lines of code: 22, Source: HdfsSystemAdmin.java
Example 3: httpGet
import org.apache.samza.SamzaException; // import the required package/class
/**
 * This method initiates an http get request on the request url and returns the
 * response returned from the http get.
 * @param requestUrl url on which the http get request has to be performed.
 * @return the http get response.
 * @throws IOException if there are problems with the http get request.
 */
private byte[] httpGet(String requestUrl) throws IOException {
  GetMethod getMethod = new GetMethod(requestUrl);
  try {
    int responseCode = httpClient.executeMethod(getMethod);
    LOG.debug("Received response code: {} for the get request on the url: {}", responseCode, requestUrl);
    byte[] response = getMethod.getResponseBody();
    if (responseCode != HttpStatus.SC_OK) {
      throw new SamzaException(String.format("Received response code: %s for get request on: %s, with message: %s.",
          responseCode, requestUrl, StringUtils.newStringUtf8(response)));
    }
    return response;
  } finally {
    getMethod.releaseConnection();
  }
}
Developer ID: apache, Project: samza, Lines of code: 23, Source: JobsClient.java
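The GetMethod/HttpClient pair above comes from Apache Commons HttpClient 3.x. Below is a minimal, standalone sketch of the same request-check-release pattern; the URL is a placeholder and error handling is reduced to the essentials.

import java.io.IOException;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpStatus;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.samza.SamzaException;

public static byte[] fetch(String requestUrl) throws IOException {
  HttpClient httpClient = new HttpClient();
  GetMethod getMethod = new GetMethod(requestUrl);
  try {
    int responseCode = httpClient.executeMethod(getMethod);
    byte[] response = getMethod.getResponseBody();
    if (responseCode != HttpStatus.SC_OK) {
      throw new SamzaException("Unexpected response code " + responseCode + " for " + requestUrl);
    }
    return response;
  } finally {
    // Always release the connection back to the pool, even on failure.
    getMethod.releaseConnection();
  }
}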
Example 4: runTask
import org.apache.samza.SamzaException; // import the required package/class
@Override
public void runTask() {
  JobConfig jobConfig = new JobConfig(this.config);

  // validation
  String taskName = new TaskConfig(config).getTaskClass().getOrElse(null);
  if (taskName == null) {
    throw new SamzaException("Neither APP nor task.class are defined");
  }
  LOG.info("LocalApplicationRunner will run " + taskName);
  LocalStreamProcessorLifeCycleListener listener = new LocalStreamProcessorLifeCycleListener();

  StreamProcessor processor = createStreamProcessor(jobConfig, null, listener);

  numProcessorsToStart.set(1);
  listener.setProcessor(processor);
  processor.start();
}
Developer ID: apache, Project: samza, Lines of code: 19, Source: LocalApplicationRunner.java
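The null check above throws a SamzaException when task.class is missing from the config. A minimal sketch of a config that would pass this validation; the job name and task class name are hypothetical.

import java.util.HashMap;
import java.util.Map;
import org.apache.samza.config.Config;
import org.apache.samza.config.MapConfig;

public static Config buildTaskConfig() {
  Map<String, String> cfg = new HashMap<>();
  cfg.put("job.name", "my-low-level-job");           // hypothetical job name
  cfg.put("task.class", "com.example.MyStreamTask"); // hypothetical StreamTask implementation
  return new MapConfig(cfg);
}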
Example 5: testReadFailsOnSerdeExceptions
import org.apache.samza.SamzaException; // import the required package/class
@Test(expected = SamzaException.class)
public void testReadFailsOnSerdeExceptions() throws Exception {
  KafkaStreamSpec checkpointSpec = new KafkaStreamSpec(CHECKPOINT_TOPIC, CHECKPOINT_TOPIC,
      CHECKPOINT_SYSTEM, 1);
  Config mockConfig = mock(Config.class);
  when(mockConfig.get(JobConfig.SSP_GROUPER_FACTORY())).thenReturn(GROUPER_FACTORY_CLASS);

  // mock out a consumer that returns a single checkpoint IME
  SystemStreamPartition ssp = new SystemStreamPartition("system-1", "input-topic", new Partition(0));
  List<List<IncomingMessageEnvelope>> checkpointEnvelopes = ImmutableList.of(
      ImmutableList.of(newCheckpointEnvelope(TASK1, ssp, "0")));
  SystemConsumer mockConsumer = newConsumer(checkpointEnvelopes);

  SystemAdmin mockAdmin = newAdmin("0", "1");
  SystemFactory factory = newFactory(mock(SystemProducer.class), mockConsumer, mockAdmin);

  // wire up an exception throwing serde with the checkpointmanager
  KafkaCheckpointManager checkpointManager = new KafkaCheckpointManager(checkpointSpec, factory,
      true, mockConfig, mock(MetricsRegistry.class), new ExceptionThrowingCheckpointSerde(), new KafkaCheckpointLogKeySerde());
  checkpointManager.register(TASK1);
  checkpointManager.start();

  // expect an exception from ExceptionThrowingSerde
  checkpointManager.readLastCheckpoint(TASK1);
}
Developer ID: apache, Project: samza, Lines of code: 26, Source: TestKafkaCheckpointManagerJava.java
Example 6: testCoordinatorStreamSystemConsumer
import org.apache.samza.SamzaException; // import the required package/class
@Test
public void testCoordinatorStreamSystemConsumer() {
  Map<String, String> expectedConfig = new LinkedHashMap<String, String>();
  expectedConfig.put("job.id", "1234");
  SystemStream systemStream = new SystemStream("system", "stream");
  MockSystemConsumer systemConsumer = new MockSystemConsumer(new SystemStreamPartition(systemStream, new Partition(0)));
  CoordinatorStreamSystemConsumer consumer = new CoordinatorStreamSystemConsumer(systemStream, systemConsumer, new SinglePartitionWithoutOffsetsSystemAdmin());
  assertEquals(0, systemConsumer.getRegisterCount());
  consumer.register();
  assertEquals(1, systemConsumer.getRegisterCount());
  assertFalse(systemConsumer.isStarted());
  consumer.start();
  assertTrue(systemConsumer.isStarted());
  try {
    consumer.getConfig();
    fail("Should have failed when retrieving config before bootstrapping.");
  } catch (SamzaException e) {
    // Expected.
  }
  consumer.bootstrap();
  assertEquals(expectedConfig, consumer.getConfig());
  assertFalse(systemConsumer.isStopped());
  consumer.stop();
  assertTrue(systemConsumer.isStopped());
}
Developer ID: apache, Project: samza, Lines of code: 26, Source: TestCoordinatorStreamSystemConsumer.java
Example 7: testUnregisteredProcessorInLeaderElection
import org.apache.samza.SamzaException; // import the required package/class
@Test
public void testUnregisteredProcessorInLeaderElection() {
  String processorId = "1";

  ZkUtils mockZkUtils = mock(ZkUtils.class);
  when(mockZkUtils.getSortedActiveProcessorsZnodes()).thenReturn(new ArrayList<String>());
  Mockito.doNothing().when(mockZkUtils).validatePaths(any(String[].class));

  ZkKeyBuilder kb = mock(ZkKeyBuilder.class);
  when(kb.getProcessorsPath()).thenReturn("");
  when(mockZkUtils.getKeyBuilder()).thenReturn(kb);

  ZkLeaderElector leaderElector = new ZkLeaderElector(processorId, mockZkUtils, null);
  leaderElector.setLeaderElectorListener(() -> { });
  try {
    leaderElector.tryBecomeLeader();
    Assert.fail("Was expecting leader election to fail!");
  } catch (SamzaException e) {
    // No-op Expected
  }
}
Developer ID: apache, Project: samza, Lines of code: 21, Source: TestZkLeaderElector.java
Example 8: register
import org.apache.samza.SamzaException; // import the required package/class
@Override
public void register(SystemStreamPartition systemStreamPartition, String offset) {
  super.register(systemStreamPartition, offset);
  LOG.info(String.format("Eventhub consumer trying to register ssp %s, offset %s", systemStreamPartition, offset));
  if (isStarted) {
    throw new SamzaException("Trying to add partition when the connection has already started.");
  }

  if (streamPartitionOffsets.containsKey(systemStreamPartition)) {
    // Only update if new offset is lower than previous offset
    if (END_OF_STREAM.equals(offset)) return;
    String prevOffset = streamPartitionOffsets.get(systemStreamPartition);
    if (!END_OF_STREAM.equals(prevOffset) && EventHubSystemAdmin.compareOffsets(offset, prevOffset) > -1) {
      return;
    }
  }
  streamPartitionOffsets.put(systemStreamPartition, offset);
}
Developer ID: apache, Project: samza, Lines of code: 20, Source: EventHubSystemConsumer.java
Example 9: getNewestEventHubOffset
import org.apache.samza.SamzaException; // import the required package/class
private String getNewestEventHubOffset(EventHubClientManager eventHubClientManager, String streamName, Integer partitionId) {
  CompletableFuture<EventHubPartitionRuntimeInformation> partitionRuntimeInfoFuture = eventHubClientManager
      .getEventHubClient()
      .getPartitionRuntimeInformation(partitionId.toString());
  try {
    long timeoutMs = config.getRuntimeInfoWaitTimeMS(systemName);
    EventHubPartitionRuntimeInformation partitionRuntimeInformation = partitionRuntimeInfoFuture
        .get(timeoutMs, TimeUnit.MILLISECONDS);
    return partitionRuntimeInformation.getLastEnqueuedOffset();
  } catch (InterruptedException | ExecutionException | TimeoutException e) {
    String msg = String.format(
        "Error while fetching EventHubPartitionRuntimeInfo for System:%s, Stream:%s, Partition:%s",
        systemName, streamName, partitionId);
    throw new SamzaException(msg, e);
  }
}
Developer ID: apache, Project: samza, Lines of code: 19, Source: EventHubSystemConsumer.java
Example 10: group
import org.apache.samza.SamzaException; // import the required package/class
@Override
public Map<TaskName, Set<SystemStreamPartition>> group(final Set<SystemStreamPartition> ssps) {
  Map<TaskName, Set<SystemStreamPartition>> groupedMap = new HashMap<>();

  if (ssps == null) {
    throw new SamzaException("ssp set cannot be null!");
  }
  if (ssps.size() == 0) {
    throw new SamzaException("Cannot process stream task with no input system stream partitions");
  }

  processorList.forEach(processor -> {
    // Create a task name for each processor and assign all partitions to each task name.
    final TaskName taskName = new TaskName(String.format("Task-%s", processor));
    groupedMap.put(taskName, ssps);
  });

  return groupedMap;
}
Developer ID: apache, Project: samza, Lines of code: 20, Source: AllSspToSingleTaskGrouperFactory.java
Example 11: flush
import org.apache.samza.SamzaException; // import the required package/class
@Override
public void flush(String source) {
  sourceBulkProcessor.get(source).flush();

  if (sendFailed.get()) {
    String message = String.format("Unable to send message from %s to system %s.", source,
        system);
    LOGGER.error(message);

    Throwable cause = thrown.get();
    if (cause != null) {
      throw new SamzaException(message, cause);
    } else {
      throw new SamzaException(message);
    }
  }

  LOGGER.info(String.format("Flushed %s to %s.", source, system));
}
Developer ID: apache, Project: samza, Lines of code: 20, Source: ElasticsearchSystemProducer.java
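For context, a Samza SystemProducer is driven through register/start/send/flush/stop, and flush(source) is where buffered asynchronous failures such as the one above surface. Below is a minimal sketch against the generic SystemProducer interface; the source tag, system, stream, and payload are placeholders.

import org.apache.samza.system.OutgoingMessageEnvelope;
import org.apache.samza.system.SystemProducer;
import org.apache.samza.system.SystemStream;

public static void sendAndFlush(SystemProducer producer) {
  String source = "my-task"; // placeholder source tag
  producer.register(source);
  producer.start();
  producer.send(source, new OutgoingMessageEnvelope(
      new SystemStream("elasticsearch", "my-index/my-type"), "{\"field\":\"value\"}"));
  // flush(...) rethrows any buffered send failure for this source as a SamzaException.
  producer.flush(source);
  producer.stop();
}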
Example 12: start
import org.apache.samza.SamzaException; // import the required package/class
@Override
public void start(JobInstance jobInstance)
    throws Exception {
  JobStatus currentStatus = getJobSamzaStatus(jobInstance);
  if (currentStatus.hasBeenStarted()) {
    log.info("Job {} will not be started because it is currently {}.", jobInstance, currentStatus.toString());
    return;
  }

  String scriptPath = getScriptPath(jobInstance, START_SCRIPT_NAME);
  int resultCode = scriptRunner.runScript(scriptPath, CONFIG_FACTORY_PARAM,
      generateConfigPathParameter(jobInstance));
  if (resultCode != 0) {
    throw new SamzaException("Failed to start job. Result code: " + resultCode);
  }
}
Developer ID: apache, Project: samza, Lines of code: 17, Source: SimpleYarnJobProxy.java
Example 13: registerProcessorAndGetId
import org.apache.samza.SamzaException; // import the required package/class
/**
 * Returns a ZK generated identifier for this client.
 * If the current client is registering for the first time, it creates an ephemeral sequential node in the ZK tree.
 * If the current client has already registered and is still within the same session, it returns the already existing
 * value for the ephemeralPath.
 *
 * @param data Object that should be written as data in the registered ephemeral ZK node
 * @return String representing the absolute ephemeralPath of this client in the current session
 */
public synchronized String registerProcessorAndGetId(final ProcessorData data) {
  String processorId = data.getProcessorId();
  if (ephemeralPath == null) {
    ephemeralPath = zkClient.createEphemeralSequential(keyBuilder.getProcessorsPath() + "/", data.toString());
    LOG.info("Created ephemeral path: {} for processor: {} in zookeeper.", ephemeralPath, data);
    ProcessorNode processorNode = new ProcessorNode(data, ephemeralPath);

    // Determine if there are duplicate processors with this.processorId after registration.
    if (!isValidRegisteredProcessor(processorNode)) {
      LOG.info("Processor: {} is duplicate. Deleting zookeeper node at path: {}.", processorId, ephemeralPath);
      zkClient.delete(ephemeralPath);
      throw new SamzaException(String.format("Processor: %s is duplicate in the group. Registration failed.", processorId));
    }
  } else {
    LOG.info("Ephemeral path: {} exists for processor: {} in zookeeper.", ephemeralPath, data);
  }
  return ephemeralPath;
}
Developer ID: apache, Project: samza, Lines of code: 27, Source: ZkUtils.java
Example 14: getConfig
import org.apache.samza.SamzaException; // import the required package/class
/**
 * Get the config for the AM or containers based on the containers' names.
 *
 * @return Config the config of this container
 */
protected Config getConfig() {
  Config config;
  try {
    if (isApplicationMaster) {
      config = JobModelManager.currentJobModelManager().jobModel().getConfig();
    } else {
      String url = System.getenv(ShellCommandConfig.ENV_COORDINATOR_URL());
      config = SamzaObjectMapper.getObjectMapper()
          .readValue(Util.read(new URL(url), 30000), JobModel.class)
          .getConfig();
    }
  } catch (IOException e) {
    throw new SamzaException("can not read the config", e);
  }

  return config;
}
Developer ID: apache, Project: samza, Lines of code: 24, Source: StreamAppender.java
Example 15: register
import org.apache.samza.SamzaException; // import the required package/class
@Override
public void register() {
  // TODO - make a loop here with some number of attempts.
  // possibly split into two methods - becomeLeader() and becomeParticipant()
  zkLeaderElector.tryBecomeLeader();

  // make sure we are connecting to a job that uses the same ZK communication protocol version.
  try {
    zkUtils.validateZkVersion();
  } catch (SamzaException e) {
    // IMPORTANT: A version mismatch means we are trying to join a job started by processors with a different version.
    // If there are no processors running, this is the place to do the migration to the new
    // ZK structure.
    // If some processors are running, then this processor should fail with an error to tell the user to stop all
    // the processors before upgrading to this new version.
    // TODO migration here
    // for now we just rethrow the exception
    throw e;
  }

  // subscribe to JobModel version updates
  zkUtils.subscribeToJobModelVersionChange(new ZkJobModelVersionChangeHandler(zkUtils));
}
Developer ID: apache, Project: samza, Lines of code: 25, Source: ZkControllerImpl.java
Example 16: start
import org.apache.samza.SamzaException; // import the required package/class
/**
 * Starts the YarnContainerManager and initializes all its sub-systems.
 * Attempting to start an already started container manager will return immediately.
 */
@Override
public void start() {
  if (!started.compareAndSet(false, true)) {
    log.info("Attempting to start an already started ContainerManager");
    return;
  }
  metrics.start();
  service.onInit();

  log.info("Starting YarnContainerManager.");
  amClient.init(yarnConfiguration);
  amClient.start();
  nmClientAsync.init(yarnConfiguration);
  nmClientAsync.start();
  lifecycle.onInit();

  if (lifecycle.shouldShutdown()) {
    clusterManagerCallback.onError(new SamzaException("Invalid resource request."));
  }

  log.info("Finished starting YarnContainerManager");
}
Developer ID: apache, Project: samza, Lines of code: 26, Source: YarnClusterResourceManager.java
Example 17: getChangelogStream
import org.apache.samza.SamzaException; // import the required package/class
public String getChangelogStream(String storeName) {
  // If the config specifies 'stores.<storename>.changelog' as a '<system>.<stream>' combination - it will take precedence.
  // If this config only specifies <astream> and there is a value in job.changelog.system=<asystem> -
  // these values will be combined into <asystem>.<astream>
  String systemStream = StringUtils.trimToNull(get(String.format(CHANGELOG_STREAM, storeName), null));
  String systemStreamRes;

  if (systemStream != null && !systemStream.contains(".")) {
    String changelogSystem = getChangelogSystem();
    // contains only stream name
    if (changelogSystem != null) {
      systemStreamRes = changelogSystem + "." + systemStream;
    } else {
      throw new SamzaException("changelog system is not defined:" + systemStream);
    }
  } else {
    systemStreamRes = systemStream;
  }

  if (systemStreamRes != null) {
    systemStreamRes = StreamManager.createUniqueNameForBatch(systemStreamRes, this);
  }
  return systemStreamRes;
}
Developer ID: apache, Project: samza, Lines of code: 26, Source: JavaStorageConfig.java
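The two configuration shapes the method above accepts can be sketched as follows; the store, stream, and system names are placeholders, and both maps are assumed to resolve to the same changelog stream (before any batch suffix added by StreamManager.createUniqueNameForBatch).

import java.util.HashMap;
import java.util.Map;
import org.apache.samza.config.JavaStorageConfig;
import org.apache.samza.config.MapConfig;

public static void changelogConfigExamples() {
  // Fully qualified form: "stores.<store>.changelog" already names the system.
  Map<String, String> explicit = new HashMap<>();
  explicit.put("stores.my-store.changelog", "kafka.my-store-changelog");

  // Stream-only form: the system is taken from "job.changelog.system".
  Map<String, String> combined = new HashMap<>();
  combined.put("stores.my-store.changelog", "my-store-changelog");
  combined.put("job.changelog.system", "kafka");

  System.out.println(new JavaStorageConfig(new MapConfig(explicit)).getChangelogStream("my-store"));
  System.out.println(new JavaStorageConfig(new MapConfig(combined)).getChangelogStream("my-store"));
}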
Example 18: fetchSqlFromConfig
import org.apache.samza.SamzaException; // import the required package/class
public static List<String> fetchSqlFromConfig(Map<String, String> config) {
  List<String> sql;
  if (config.containsKey(CFG_SQL_STMT) && StringUtils.isNotBlank(config.get(CFG_SQL_STMT))) {
    String sqlValue = config.get(CFG_SQL_STMT);
    sql = Collections.singletonList(sqlValue);
  } else if (config.containsKey(CFG_SQL_STMTS_JSON) && StringUtils.isNotBlank(config.get(CFG_SQL_STMTS_JSON))) {
    sql = deserializeSqlStmts(config.get(CFG_SQL_STMTS_JSON));
  } else if (config.containsKey(CFG_SQL_FILE)) {
    String sqlFile = config.get(CFG_SQL_FILE);
    sql = SqlFileParser.parseSqlFile(sqlFile);
  } else {
    String msg = "Config doesn't contain the SQL that needs to be executed.";
    LOG.error(msg);
    throw new SamzaException(msg);
  }

  return sql;
}
Developer ID: apache, Project: samza, Lines of code: 19, Source: SamzaSqlApplicationConfig.java
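A hypothetical invocation of the helper above, using the class's own single-statement key constant rather than a hard-coded string. The SQL statement is made up, and both the constant's public accessibility and the class's package are assumptions.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.samza.sql.runner.SamzaSqlApplicationConfig; // package is an assumption

public static List<String> loadSql() {
  Map<String, String> config = new HashMap<>();
  // CFG_SQL_STMT is the single-statement key checked first by fetchSqlFromConfig.
  config.put(SamzaSqlApplicationConfig.CFG_SQL_STMT,
      "insert into log.outputStream select * from kafka.inputStream"); // hypothetical SQL
  return SamzaSqlApplicationConfig.fetchSqlFromConfig(config);
}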
Example 19: getPartitionCountMonitor
import org.apache.samza.SamzaException; // import the required package/class
private StreamPartitionCountMonitor getPartitionCountMonitor(Config config) {
  Map<String, SystemAdmin> systemAdmins = new JavaSystemConfig(config).getSystemAdmins();
  StreamMetadataCache streamMetadata = new StreamMetadataCache(Util.javaMapAsScalaMap(systemAdmins), 0, SystemClock.instance());
  Set<SystemStream> inputStreamsToMonitor = new TaskConfigJava(config).getAllInputStreams();
  if (inputStreamsToMonitor.isEmpty()) {
    throw new SamzaException("Input streams to a job can not be empty.");
  }

  return new StreamPartitionCountMonitor(
      inputStreamsToMonitor,
      streamMetadata,
      metrics,
      new JobConfig(config).getMonitorPartitionChangeFrequency(),
      streamsChanged -> {
        // Fail jobs with a durable state store. Otherwise, application state.status remains UNDEFINED so that the YARN job will be restarted.
        if (hasDurableStores) {
          log.error("Input topic partition count changed in a job with durable state. Failing the job.");
          state.status = SamzaApplicationState.SamzaAppStatus.FAILED;
        }
        coordinatorException = new PartitionChangeException("Input topic partition count changes detected.");
      });
}
Developer ID: apache, Project: samza, Lines of code: 23, Source: ClusterBasedJobCoordinator.java
Example 20: createTriggerImpl
import org.apache.samza.SamzaException; // import the required package/class
public static <M, WK> TriggerImpl<M, WK> createTriggerImpl(Trigger<M> trigger, Clock clock, TriggerKey<WK> triggerKey) {
  if (trigger == null) {
    throw new IllegalArgumentException("Trigger must not be null");
  }

  if (trigger instanceof CountTrigger) {
    return new CountTriggerImpl<>((CountTrigger<M>) trigger, triggerKey);
  } else if (trigger instanceof RepeatingTrigger) {
    return new RepeatingTriggerImpl<>((RepeatingTrigger<M>) trigger, clock, triggerKey);
  } else if (trigger instanceof AnyTrigger) {
    return new AnyTriggerImpl<>((AnyTrigger<M>) trigger, clock, triggerKey);
  } else if (trigger instanceof TimeSinceLastMessageTrigger) {
    return new TimeSinceLastMessageTriggerImpl<>((TimeSinceLastMessageTrigger<M>) trigger, clock, triggerKey);
  } else if (trigger instanceof TimeTrigger) {
    return new TimeTriggerImpl((TimeTrigger<M>) trigger, clock, triggerKey);
  } else if (trigger instanceof TimeSinceFirstMessageTrigger) {
    return new TimeSinceFirstMessageTriggerImpl<>((TimeSinceFirstMessageTrigger<M>) trigger, clock, triggerKey);
  }

  throw new SamzaException("No implementation class defined for the trigger " + trigger.getClass().getCanonicalName());
}
Developer ID: apache, Project: samza, Lines of code: 23, Source: TriggerImpls.java
Note: The org.apache.samza.SamzaException class examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by many developers; the copyright of the source code belongs to the original authors, so please consult the corresponding project's license before distributing or using it. Do not reproduce this article without permission.