
Java Slice Class Code Examples


This article collects typical usage examples of the Java class com.datatorrent.netlet.util.Slice. If you are wondering what the Slice class does, how to use it, or what working examples look like, the curated examples below may help.



The Slice class belongs to the com.datatorrent.netlet.util package. Twenty code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps surface better Java code examples.
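For orientation before the examples: a Slice is essentially a (buffer, offset, length) view over a byte array, so two slices compare equal when the viewed bytes match. The class below is a minimal self-contained analogue for illustration only, not the netlet class itself:

```java
import java.util.Arrays;

/** A minimal analogue of netlet's Slice: a (buffer, offset, length) view over a byte array. */
public class MiniSlice {
  public final byte[] buffer;
  public final int offset;
  public final int length;

  public MiniSlice(byte[] buffer, int offset, int length) {
    this.buffer = buffer;
    this.offset = offset;
    this.length = length;
  }

  /** Copies just the viewed range out of the backing array. */
  public byte[] toByteArray() {
    return Arrays.copyOfRange(buffer, offset, offset + length);
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof MiniSlice)) {
      return false;
    }
    // Equality is over the viewed bytes, independent of the backing array.
    return Arrays.equals(toByteArray(), ((MiniSlice)o).toByteArray());
  }

  @Override
  public int hashCode() {
    return Arrays.hashCode(toByteArray());
  }
}
```

This is why the examples below can safely use slices as map keys: the view, not the array identity, defines equality.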

Example 1: stateInternalsForKey

import com.datatorrent.netlet.util.Slice; // import the required package/class
@Override
public ApexStateInternals<K> stateInternalsForKey(K key) {
  final Slice keyBytes;
  try {
    keyBytes = (key != null) ? new Slice(CoderUtils.encodeToByteArray(keyCoder, key)) :
      new Slice(null, 0, 0); // Slice(byte[]) dereferences the array, so pass explicit bounds for null
  } catch (CoderException e) {
    throw new RuntimeException(e);
  }
  HashBasedTable<String, String, byte[]> stateTable = perKeyState.get(keyBytes);
  if (stateTable == null) {
    stateTable = HashBasedTable.create();
    perKeyState.put(keyBytes, stateTable);
  }
  return new ApexStateInternals<>(key, stateTable);
}
 
Developer: apache, Project: beam, Lines: 17, Source: ApexStateInternals.java


Example 2: addTimer

import com.datatorrent.netlet.util.Slice; // import the required package/class
public void addTimer(Slice keyBytes, TimerData timer) {
  Set<Slice> timersForKey = activeTimers.get(keyBytes);
  if (timersForKey == null) {
    timersForKey = new HashSet<>();
  }

  try {
    Slice timerBytes = new Slice(CoderUtils.encodeToByteArray(timerDataCoder, timer));
    timersForKey.add(timerBytes);
  } catch (CoderException e) {
    throw new RuntimeException(e);
  }

  activeTimers.put(keyBytes, timersForKey);
  this.minTimestamp = Math.min(minTimestamp, timer.getTimestamp().getMillis());
}
 
Developer: apache, Project: beam, Lines: 17, Source: ApexTimerInternals.java


Example 3: deleteTimer

import com.datatorrent.netlet.util.Slice; // import the required package/class
public void deleteTimer(Slice keyBytes, TimerData timerKey) {
  Set<Slice> timersForKey = activeTimers.get(keyBytes);
  if (timersForKey != null) {
    try {
      Slice timerBytes = new Slice(CoderUtils.encodeToByteArray(timerDataCoder, timerKey));
      timersForKey.remove(timerBytes);
    } catch (CoderException e) {
      throw new RuntimeException(e);
    }

    if (timersForKey.isEmpty()) {
      activeTimers.remove(keyBytes);
    } else {
      activeTimers.put(keyBytes, timersForKey);
    }
  }
}
 
Developer: apache, Project: beam, Lines: 19, Source: ApexTimerInternals.java
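The get-or-create bookkeeping in addTimer/deleteTimer can be written more compactly with Map.computeIfAbsent. A sketch of the same pattern, using String stand-ins for Slice and TimerData (names here are illustrative, not Beam's):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Sketch of the addTimer/deleteTimer bookkeeping using computeIfAbsent. */
public class TimerBook {
  private final Map<String, Set<String>> activeTimers = new HashMap<>();

  public void addTimer(String key, String timer) {
    // Creates the per-key set on first use, then adds the timer to it.
    activeTimers.computeIfAbsent(key, k -> new HashSet<>()).add(timer);
  }

  public void deleteTimer(String key, String timer) {
    Set<String> timers = activeTimers.get(key);
    if (timers != null) {
      timers.remove(timer);
      if (timers.isEmpty()) {
        activeTimers.remove(key); // drop empty sets, as the operator does
      }
    }
  }

  public boolean hasTimers(String key) {
    return activeTimers.containsKey(key);
  }
}
```

Because HashMap stores the set by reference, mutating the returned set is enough; the re-put in the original addTimer is only needed when the set was just created.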


Example 4: toDataStatePair

import com.datatorrent.netlet.util.Slice; // import the required package/class
@Override
public DataStatePair toDataStatePair(T o)
{
  data.setPosition(0);
  writeClassAndObject(data, o);

  if (!pairs.isEmpty()) {
    state.setPosition(0);
    for (ClassIdPair cip : pairs) {
      writeClassAndObject(state, cip);
    }
    pairs.clear();

    dataStatePair.state = new Slice(state.getBuffer(), 0, state.position());
  } else {
    dataStatePair.state = null;
  }

  dataStatePair.data = new Slice(data.getBuffer(), 0, data.position());
  return dataStatePair;
}
 
Developer: apache, Project: apex-core, Lines: 22, Source: DefaultStatefulStreamCodec.java


Example 5: testUpdateBucketMetaDataFile

import com.datatorrent.netlet.util.Slice; // import the required package/class
@Test
public void testUpdateBucketMetaDataFile() throws IOException
{
  testMeta.bucketsFileSystem.setup(testMeta.managedStateContext);
  BucketsFileSystem.MutableTimeBucketMeta mutableTbm = new BucketsFileSystem.MutableTimeBucketMeta(1, 1);
  mutableTbm.updateTimeBucketMeta(10, 100, new Slice("1".getBytes()));

  testMeta.bucketsFileSystem.updateTimeBuckets(mutableTbm);
  testMeta.bucketsFileSystem.updateBucketMetaFile(1);

  BucketsFileSystem.TimeBucketMeta immutableTbm = testMeta.bucketsFileSystem.getTimeBucketMeta(1, 1);
  Assert.assertNotNull(immutableTbm);
  Assert.assertEquals("last transferred window", 10, immutableTbm.getLastTransferredWindowId());
  Assert.assertEquals("size in bytes", 100, immutableTbm.getSizeInBytes());
  Assert.assertEquals("first key", "1", immutableTbm.getFirstKey().stringValue());
  testMeta.bucketsFileSystem.teardown();
}
 
Developer: apache, Project: apex-malhar, Lines: 18, Source: BucketsFileSystemTest.java


Example 6: populateDAG

import com.datatorrent.netlet.util.Slice; // import the required package/class
public void populateDAG(DAG dag, Configuration conf)
{
  S3InputModule module = dag.addModule("s3InputModule", S3InputModule.class);

  AbstractFileOutputOperator<AbstractFileSplitter.FileMetadata> metadataWriter = new S3InputModuleAppTest.MetadataWriter(S3InputModuleAppTest.OUT_METADATA_FILE);
  metadataWriter.setFilePath(S3InputModuleAppTest.outputDir);
  dag.addOperator("FileMetadataWriter", metadataWriter);

  AbstractFileOutputOperator<AbstractBlockReader.ReaderRecord<Slice>> dataWriter = new S3InputModuleAppTest.HDFSFileWriter(S3InputModuleAppTest.OUT_DATA_FILE);
  dataWriter.setFilePath(S3InputModuleAppTest.outputDir);
  dag.addOperator("FileDataWriter", dataWriter);

  DevNull<BlockMetadata.FileBlockMetadata> devNull = dag.addOperator("devNull", DevNull.class);

  dag.addStream("FileMetaData", module.filesMetadataOutput, metadataWriter.input);
  dag.addStream("data", module.messages, dataWriter.input);
  dag.addStream("blockMetadata", module.blocksMetadataOutput, devNull.data);
}
 
Developer: apache, Project: apex-malhar, Lines: 19, Source: S3InputModuleAppTest.java


Example 7: commitHelper

import com.datatorrent.netlet.util.Slice; // import the required package/class
private void commitHelper(Slice one, Slice two)
{
  testMeta.managedState.setup(testMeta.operatorContext);
  long time = System.currentTimeMillis();
  testMeta.managedState.beginWindow(time);
  testMeta.managedState.put(0, one, one);
  testMeta.managedState.endWindow();
  testMeta.managedState.beforeCheckpoint(time);

  testMeta.managedState.beginWindow(time + 1);
  testMeta.managedState.put(0, two, two);
  testMeta.managedState.endWindow();
  testMeta.managedState.beforeCheckpoint(time + 1);

  testMeta.managedState.committed(time);
}
 
Developer: apache, Project: apex-malhar, Lines: 17, Source: ManagedStateImplTest.java


Example 8: testCleanup

import com.datatorrent.netlet.util.Slice; // import the required package/class
@Test
public void testCleanup() throws IOException
{
  RandomAccessFile r = new RandomAccessFile(testMeta.testFile, "r");
  r.seek(0);
  byte[] b = r.readLine().getBytes();
  storage.store(new Slice(b, 0, b.length));
  byte[] val = storage.store(new Slice(b, 0, b.length));
  storage.flush();
  storage.clean(val);
  Configuration conf = new Configuration();
  FileSystem fs = FileSystem.get(conf);
  boolean exists = fs.exists(new Path(STORAGE_DIRECTORY + "/" + "0"));
  Assert.assertEquals("file should not exist", false, exists);
  r.close();
}
 
Developer: DataTorrent, Project: Megh, Lines: 17, Source: HDFSStorageTest.java


Example 9: createHashes

import com.datatorrent.netlet.util.Slice; // import the required package/class
/**
 * Generates an array of {@code numberOfHashes} int-sized hashes for the
 * given slice, all derived from a single 64-bit hash via double hashing.
 *
 * @param slice the input slice
 * @return array of int-sized hashes
 */
private int[] createHashes(Slice slice)
{
  int[] result = new int[numberOfHashes];
  long hash64 = hasher.hash(slice);
  // apply the less hashing technique
  int hash1 = (int)hash64;
  int hash2 = (int)(hash64 >>> 32);
  for (int i = 1; i <= numberOfHashes; i++) {
    int nextHash = hash1 + i * hash2;
    if (nextHash < 0) {
      nextHash = ~nextHash;
    }
    result[i - 1] = nextHash;
  }
  return result;
}
 
Developer: apache, Project: apex-malhar, Lines: 25, Source: SliceBloomFilter.java
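The double-hashing trick in createHashes, deriving k hashes g_i(x) = h1(x) + i * h2(x) from the two halves of one 64-bit hash, can be sketched standalone. The input constant in the usage below is illustrative; SliceBloomFilter obtains its 64-bit hash from its own hasher:

```java
/** Derives k int-sized hashes from a single 64-bit hash (the "less hashing" technique). */
public class LessHashing {
  public static int[] createHashes(long hash64, int numberOfHashes) {
    int hash1 = (int)hash64;          // low 32 bits
    int hash2 = (int)(hash64 >>> 32); // high 32 bits
    int[] result = new int[numberOfHashes];
    for (int i = 1; i <= numberOfHashes; i++) {
      int nextHash = hash1 + i * hash2; // g_i(x) = h1(x) + i * h2(x)
      if (nextHash < 0) {
        nextHash = ~nextHash; // force non-negative, mirroring the operator's code
      }
      result[i - 1] = nextHash;
    }
    return result;
  }
}
```

The appeal of the technique is that one expensive hash computation yields all k Bloom filter probes without measurably hurting the false-positive rate.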


Example 10: retryWrite

import com.datatorrent.netlet.util.Slice; // import the required package/class
/**
 * Retries the write after sleeping, for as long as the client stays
 * connected; once the client disconnects, assumes the connection has
 * problems and throws.
 *
 * @param address
 * @param event
 * @throws IOException
 */
private void retryWrite(byte[] address, Slice event) throws IOException
{
  if (event == null) {  /* this happens for playback where address and event are sent as single object */
    while (client.isConnected()) {
      sleep();
      if (client.write(address)) {
        return;
      }
    }
  } else {  /* this happens when the events are taken from the flume channel and writing first time failed */
    while (client.isConnected()) {
      sleep();
      if (client.write(address, event)) {
        return;
      }
    }
  }

  throw new IOException("Client disconnected!");
}
 
Developer: apache, Project: apex-malhar, Lines: 29, Source: FlumeSink.java


Example 11: endWindow

import com.datatorrent.netlet.util.Slice; // import the required package/class
@Override
public void endWindow()
{
  Iterator<Map.Entry<Slice, HDSQuery>> it = this.queries.entrySet().iterator();
  while (it.hasNext()) {
    HDSQuery query = it.next().getValue();
    if (!query.processed) {
      processQuery(query);
    }
    // could be processed directly
    if (query.processed) {
      emitQueryResult(query);
      if (--query.keepAliveCount < 0) {
        //LOG.debug("Removing expired query {}", query);
        it.remove(); // query expired
      }
    }
  }
  if (executorError != null) {
    throw new RuntimeException("Error processing queries.", this.executorError);
  }
}
 
Developer: DataTorrent, Project: Megh, Lines: 23, Source: HDHTReader.java


Example 12: get

import com.datatorrent.netlet.util.Slice; // import the required package/class
@Override
public V get(Object o)
{
  K key = (K)o;

  if (cache.getRemovedKeys().contains(key)) {
    return null;
  }

  V val = cache.get(key);

  if (val != null) {
    return val;
  }

  Slice valSlice = store.getSync(getBucketTimeOrId(key), keyValueSerdeManager.serializeDataKey(key, false));

  if (valSlice == null || valSlice == BucketedState.EXPIRED || valSlice.length == 0) {
    return null;
  }

  tmpInput.setBuffer(valSlice.buffer, valSlice.offset, valSlice.length);
  return keyValueSerdeManager.deserializeValue(tmpInput);
}
 
Developer: apache, Project: apex-malhar, Lines: 25, Source: SpillableMapImpl.java


Example 13: merge

import com.datatorrent.netlet.util.Slice; // import the required package/class
/**
 * Merges another write cache into this one, applying purges.
 * {@code other} contains the more recent data, i.e. newer operations
 * performed on the write caches are stored in {@code other}. Its purge
 * operations are also more recent, so they invalidate writes done
 * previously here. After the purges are applied, all keys from
 * {@code other} are added to this cache.
 *
 * @param other the more recent write cache to merge in
 */
public void merge(WriteCache other)
{
  /**
   * Remove the keys from this map, which are removed
   * by the purge operation done later. (i.e purge operations
   * in other).
   */
  if (other.purges != null) {
    Iterator<Slice> iter = keySet().iterator();
    while (iter.hasNext()) {
      Slice key = iter.next();
      for (Range r : other.purges) {
        if (r.contains(key, cmp)) {
          iter.remove();
        }
      }
    }
  }

  /**
   * merge keys
   */
  putAll(other);
  mergePurgeList(other.purges);
}
 
Developer: DataTorrent, Project: Megh, Lines: 36, Source: WriteCache.java


Example 14: purge

import com.datatorrent.netlet.util.Slice; // import the required package/class
/**
 * Removes the keys that fall in the purge range from memory and records
 * the purge range.
 *
 * @param start start of the purge range
 * @param end end of the purge range
 */
public void purge(Slice start, Slice end)
{
  Range<Slice> range = new Range<>(start, end);
  Iterator<Slice> iter = keySet().iterator();
  while (iter.hasNext()) {
    Slice key = iter.next();
    if (range.contains(key, cmp)) {
      iter.remove();
    }
  }
  if (purges == null) {
    purges = new RangeSet<>(cmp);
  }
  purges.add(range);
}
 
Developer: DataTorrent, Project: Megh, Lines: 23, Source: WriteCache.java
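The purge pattern above, dropping every key inside a range, can be done without an explicit iterator when the cache is a sorted map: subMap returns a live view, and clearing it removes the entries from the backing map. A sketch with String keys standing in for Slice (Megh's Range/RangeSet bookkeeping is omitted):

```java
import java.util.NavigableMap;

/** Removes every key in [start, end] from a sorted map, mimicking WriteCache.purge. */
public class PurgeSketch {
  public static void purge(NavigableMap<String, byte[]> cache, String start, String end) {
    // subMap gives a live view of the range; clear() deletes those entries in place.
    cache.subMap(start, true, end, true).clear();
  }
}
```

WriteCache iterates manually because its keys are Slices ordered by a custom comparator, but the range-view idiom is the natural fit when a NavigableMap is available.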


Example 15: testPartialFlushWithFailure

import com.datatorrent.netlet.util.Slice; // import the required package/class
/**
 * This test covers the following use case: the file is flushed, then more
 * data is written to the same file, but the new data is not flushed, the
 * file is not rolled over, and the storage fails. A new storage instance
 * comes up and the client asks for data at the last address returned by
 * the earlier instance; the new storage returns null. The client stores
 * the data again, but the address returned this time is null, and
 * retrieval at the earlier address now returns the data.
 *
 * @throws Exception
 */
@Test
public void testPartialFlushWithFailure() throws Exception
{
  Assert.assertNull(storage.retrieve(new byte[8]));
  byte[] b = "ab".getBytes();
  byte[] address = storage.store(new Slice(b, 0, b.length));
  Assert.assertNotNull(address);
  storage.flush();
  b = "cb".getBytes();
  byte[] addr = storage.store(new Slice(b, 0, b.length));
  storage = getStorage("1", true);
  Assert.assertNull(storage.retrieve(addr));
  Assert.assertNull(storage.store(new Slice(b, 0, b.length)));
  storage.flush();
  match(storage.retrieve(address), "cb");
}
 
Developer: DataTorrent, Project: Megh, Lines: 25, Source: HDFSStorageTest.java


Example 16: put

import com.datatorrent.netlet.util.Slice; // import the required package/class
/**
 * Inserts (k, v) into the store.
 * @param k key
 * @param v value
 * @return true if (k, v) was successfully inserted into the store, otherwise false.
 */
@Override
public boolean put(@Nullable K k, @Nullable V v)
{
  if (isKeyContainsMultiValue) {
    Slice keySlice = streamCodec.toByteArray(k);
    long bucketId = getBucketId(k);
    Slice valueSlice = store.getSync(bucketId, keySlice);
    List<V> listOb;
    if (valueSlice == null || valueSlice.length == 0) {
      listOb = new ArrayList<>();
    } else {
      listOb = (List<V>)streamCodec.fromByteArray(valueSlice);
    }
    listOb.add(v);
    return insertInStore(bucketId, timeBucket, keySlice, streamCodec.toByteArray(listOb));
  }
  return insertInStore(getBucketId(k), timeBucket, streamCodec.toByteArray(k), streamCodec.toByteArray(v));
}
 
Developer: apache, Project: apex-malhar, Lines: 25, Source: ManagedTimeStateMultiValue.java


Example 17: retrieveAllWindows

import com.datatorrent.netlet.util.Slice; // import the required package/class
/**
 * Retrieves artifacts available for all the windows saved by the enclosing partitions.
 * @return  artifact saved per window.
 * @throws IOException
 */
public Map<Long, Object> retrieveAllWindows() throws IOException
{
  Map<Long, Object> artifactPerWindow = new HashMap<>();
  FileSystemWAL.FileSystemWALReader reader = getWal().getReader();
  reader.seek(getWal().getWalStartPointer());

  Slice windowSlice = readNext(reader);
  while (reader.getCurrentPointer().compareTo(getWal().getWalEndPointerAfterRecovery()) < 0 && windowSlice != null) {
    long window = Longs.fromByteArray(windowSlice.toByteArray());
    Object data = fromSlice(readNext(reader));
    artifactPerWindow.put(window, data);
    windowSlice = readNext(reader); //null or next window
  }
  reader.seek(getWal().getWalStartPointer());
  return artifactPerWindow;
}
 
Developer: apache, Project: apex-malhar, Lines: 22, Source: IncrementalCheckpointManager.java


Example 18: testStorage

import com.datatorrent.netlet.util.Slice; // import the required package/class
@Test
public void testStorage() throws IOException
{
  Assert.assertNull(storage.retrieve(new byte[8]));
  byte[] b = new byte[200];
  byte[] identifier;
  Assert.assertNotNull(storage.store(new Slice(b, 0, b.length)));
  Assert.assertNotNull(storage.store(new Slice(b, 0, b.length)));
  Assert.assertNull(storage.retrieve(new byte[8]));
  Assert.assertNotNull(storage.store(new Slice(b, 0, b.length)));
  Assert.assertNotNull(storage.store(new Slice(b, 0, b.length)));
  storage.flush();
  byte[] data = storage.retrieve(new byte[8]);
  Assert.assertNotNull(storage.store(new Slice(b, 0, b.length)));
  identifier = storage.store(new Slice(b, 0, b.length));
  byte[] tempData = new byte[data.length - 8];
  System.arraycopy(data, 8, tempData, 0, tempData.length);
  Assert.assertEquals("matched the stored value with retrieved value", new String(b), new String(tempData));
  Assert.assertNull(storage.retrieve(identifier));
}
 
Developer: apache, Project: apex-malhar, Lines: 21, Source: HDFSStorageTest.java


Example 19: testCleanForUnflushedData

import com.datatorrent.netlet.util.Slice; // import the required package/class
@Test
public void testCleanForUnflushedData() throws IOException
{
  byte[] address = null;
  byte[] b = new byte[200];
  storage.retrieve(new byte[8]);
  for (int i = 0; i < 5; i++) {
    storage.store(new Slice(b, 0, b.length));
    address = storage.store(new Slice(b, 0, b.length));
    storage.flush();
    // storage.clean(address);
  }
  byte[] lastWrittenAddress = null;
  for (int i = 0; i < 5; i++) {
    storage.store(new Slice(b, 0, b.length));
    lastWrittenAddress = storage.store(new Slice(b, 0, b.length));
  }
  storage.clean(lastWrittenAddress);
  byte[] cleanedOffset = storage.readData(new Path(STORAGE_DIRECTORY + "/1/cleanoffsetFile"));
  Assert.assertArrayEquals(address, cleanedOffset);

}
 
Developer: apache, Project: apex-malhar, Lines: 23, Source: HDFSStorageTest.java


Example 20: testIncrementalCheckpoint

import com.datatorrent.netlet.util.Slice; // import the required package/class
@Test
public void testIncrementalCheckpoint()
{
  Slice one = ManagedStateTestUtils.getSliceFor("1");
  testMeta.managedState.setup(testMeta.operatorContext);
  long time = System.currentTimeMillis();
  testMeta.managedState.beginWindow(time);
  testMeta.managedState.put(0, one, one);
  testMeta.managedState.endWindow();
  testMeta.managedState.beforeCheckpoint(time);

  Bucket.DefaultBucket defaultBucket = (Bucket.DefaultBucket)testMeta.managedState.getBucket(0);
  Assert.assertEquals("value of one", one, defaultBucket.getCheckpointedData().get(time).get(one).getValue());

  Slice two = ManagedStateTestUtils.getSliceFor("2");
  testMeta.managedState.beginWindow(time + 1);
  testMeta.managedState.put(0, two, two);
  testMeta.managedState.endWindow();
  testMeta.managedState.beforeCheckpoint(time + 1);

  Assert.assertEquals("value of two", two, defaultBucket.getCheckpointedData().get(time + 1).get(two).getValue());
  testMeta.managedState.teardown();
}
 
Developer: apache, Project: apex-malhar, Lines: 24, Source: ManagedStateImplTest.java



Note: The com.datatorrent.netlet.util.Slice examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright of the code snippets remains with their original authors; consult each project's license before using or redistributing them. Do not republish without permission.

