This article collects typical usage examples of the Java class org.apache.htrace.Sampler. If you are unsure what the Sampler class is for, how to use it, or want to see real-world usage, the curated code examples below should help.
The Sampler class belongs to the org.apache.htrace package. Twenty code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code samples.
Example 1: read
import org.apache.htrace.Sampler; // import the required package/class
@Override
public int read(ByteBuffer buf) throws IOException {
if (curDataSlice == null || curDataSlice.remaining() == 0 && bytesNeededToFinish > 0) {
TraceScope scope = Trace.startSpan(
"RemoteBlockReader2#readNextPacket(" + blockId + ")", Sampler.NEVER);
try {
readNextPacket();
} finally {
scope.close();
}
}
if (curDataSlice.remaining() == 0) {
// we're at EOF now
return -1;
}
int nRead = Math.min(curDataSlice.remaining(), buf.remaining());
ByteBuffer writeSlice = curDataSlice.duplicate();
writeSlice.limit(writeSlice.position() + nRead);
buf.put(writeSlice);
curDataSlice.position(writeSlice.position());
return nRead;
}
Developer: naver, Project: hadoop, Lines: 25, Source: RemoteBlockReader2.java
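The try/finally pattern above is the core HTrace 3.x idiom: open a TraceScope, do the work, and close the scope even when an exception is thrown. Below is a minimal, self-contained sketch of the same idiom; the class and method names are illustrative, not part of the Hadoop source.

import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class TraceScopeIdiom {
  public static void main(String[] args) {
    // Sampler.NEVER never starts a trace on its own; the span is only
    // recorded when a parent span is already active on this thread.
    TraceScope scope = Trace.startSpan("TraceScopeIdiom#work", Sampler.NEVER);
    try {
      work();
    } finally {
      scope.close(); // always close, or the span is never reported
    }
  }

  private static void work() {
    // stands in for the traced operation, e.g. readNextPacket() above
  }
}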
Example 2: fillBuffer
import org.apache.htrace.Sampler; // import the required package/class
/**
* Reads bytes into a buffer until EOF or the buffer's limit is reached
*/
private int fillBuffer(FileInputStream stream, ByteBuffer buf)
throws IOException {
TraceScope scope = Trace.startSpan("BlockReaderLocalLegacy#fillBuffer(" +
blockId + ")", Sampler.NEVER);
try {
int bytesRead = stream.getChannel().read(buf);
if (bytesRead < 0) {
//EOF
return bytesRead;
}
while (buf.remaining() > 0) {
int n = stream.getChannel().read(buf);
if (n < 0) {
//EOF
return bytesRead;
}
bytesRead += n;
}
return bytesRead;
} finally {
scope.close();
}
}
Developer: naver, Project: hadoop, Lines: 27, Source: BlockReaderLocalLegacy.java
Example 3: Test
import org.apache.htrace.Sampler; // import the required package/class
/**
* Note that all subclasses of this class must provide a public constructor
* that has the exact same list of arguments.
*/
Test(final Connection con, final TestOptions options, final Status status) {
this.connection = con;
this.conf = con == null ? HBaseConfiguration.create() : this.connection.getConfiguration();
this.opts = options;
this.status = status;
this.testName = this.getClass().getSimpleName();
receiverHost = SpanReceiverHost.getInstance(conf);
if (options.traceRate >= 1.0) {
this.traceSampler = Sampler.ALWAYS;
} else if (options.traceRate > 0.0) {
conf.setDouble("hbase.sampler.fraction", options.traceRate);
this.traceSampler = new ProbabilitySampler(new HBaseHTraceConfiguration(conf));
} else {
this.traceSampler = Sampler.NEVER;
}
everyN = (int) (opts.totalRows / (opts.totalRows * opts.sampleRate));
if (options.isValueZipf()) {
this.zipf = new RandomDistribution.Zipf(this.rand, 1, options.getValueSize(), 1.1);
}
LOG.info("Sampling 1 every " + everyN + " out of " + opts.perClientRunRows + " total rows.");
}
Developer: fengchen8086, Project: ditb, Lines: 26, Source: PerformanceEvaluation.java
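The constructor above maps a trace rate onto one of three sampler flavors: ALWAYS, probability-based, or NEVER. Below is a hedged, standalone sketch of the same mapping, assuming only htrace-core 3.x, where ProbabilitySampler reads its fraction from the "sampler.fraction" key (the "hbase.sampler.fraction" key in the example is the HBase-prefixed form of it). SamplerSelection and samplerFor are illustrative names, and a plain map-backed HTraceConfiguration stands in for HBaseHTraceConfiguration.

import java.util.HashMap;
import java.util.Map;

import org.apache.htrace.HTraceConfiguration;
import org.apache.htrace.Sampler;
import org.apache.htrace.impl.ProbabilitySampler;

public class SamplerSelection {
  // 1.0 or more -> trace everything; (0.0, 1.0) -> trace a fraction;
  // 0.0 or less -> trace nothing (unless a parent span is active).
  static Sampler<?> samplerFor(double traceRate) {
    if (traceRate >= 1.0) {
      return Sampler.ALWAYS;
    } else if (traceRate > 0.0) {
      Map<String, String> conf = new HashMap<String, String>();
      conf.put("sampler.fraction", String.valueOf(traceRate));
      return new ProbabilitySampler(HTraceConfiguration.fromMap(conf));
    }
    return Sampler.NEVER;
  }

  public static void main(String[] args) {
    System.out.println(samplerFor(0.01).getClass().getSimpleName());
  }
}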
Example 4: createEntries
import org.apache.htrace.Sampler; // import the required package/class
private void createEntries(Opts opts) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
// Trace the write operation. Note: unless you flush the BatchWriter, you will not capture
// the write operation, as it occurs asynchronously. You can optionally create additional
// Spans within a given Trace, e.g. around the flush.
TraceScope scope = Trace.startSpan("Client Write", Sampler.ALWAYS);
System.out.println("TraceID: " + Long.toHexString(scope.getSpan().getTraceId()));
BatchWriter batchWriter = opts.getConnector().createBatchWriter(opts.getTableName(), new BatchWriterConfig());
Mutation m = new Mutation("row");
m.put("cf", "cq", "value");
batchWriter.addMutation(m);
// You can add timeline annotations to Spans which will be able to be viewed in the Monitor
scope.getSpan().addTimelineAnnotation("Initiating Flush");
batchWriter.flush();
batchWriter.close();
scope.close();
}
Developer: apache, Project: accumulo-examples, Lines: 22, Source: TracingExample.java
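As the comment in the example notes, the BatchWriter is asynchronous, so the flush has to happen inside the open scope for the write to show up in the trace. The sketch below demonstrates the nesting the comment describes: a child span opened while a trace is already active, plus a timeline annotation. It is a self-contained illustration, not Accumulo code; the sleep stands in for BatchWriter.flush().

import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class NestedSpanSketch {
  public static void main(String[] args) throws InterruptedException {
    TraceScope parent = Trace.startSpan("Client Write", Sampler.ALWAYS);
    try {
      parent.getSpan().addTimelineAnnotation("Initiating Flush");
      // Because a trace is already active on this thread, this span is
      // attached as a child of "Client Write".
      TraceScope flush = Trace.startSpan("flush", Sampler.ALWAYS);
      try {
        Thread.sleep(10); // stands in for BatchWriter.flush()
      } finally {
        flush.close();
      }
    } finally {
      parent.close();
    }
  }
}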
Example 5: readEntries
import org.apache.htrace.Sampler; // import the required package/class
private void readEntries(Opts opts) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
Scanner scanner = opts.getConnector().createScanner(opts.getTableName(), opts.auths);
// Trace the read operation.
TraceScope readScope = Trace.startSpan("Client Read", Sampler.ALWAYS);
System.out.println("TraceID: " + Long.toHexString(readScope.getSpan().getTraceId()));
int numberOfEntriesRead = 0;
for (Entry<Key,Value> entry : scanner) {
System.out.println(entry.getKey().toString() + " -> " + entry.getValue().toString());
++numberOfEntriesRead;
}
// You can add additional metadata (key, values) to Spans which will be able to be viewed in the Monitor
readScope.getSpan().addKVAnnotation("Number of Entries Read".getBytes(UTF_8), String.valueOf(numberOfEntriesRead).getBytes(UTF_8));
readScope.close();
}
Developer: apache, Project: accumulo-examples, Lines: 19, Source: TracingExample.java
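addKVAnnotation attaches arbitrary key/value metadata to a span; in htrace 3.x both key and value are raw byte arrays, which is why the example converts strings with getBytes(UTF_8). A minimal sketch of the same call outside Accumulo (the entry count is a stand-in for the scan loop above):

import static java.nio.charset.StandardCharsets.UTF_8;

import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class KVAnnotationSketch {
  public static void main(String[] args) {
    TraceScope scope = Trace.startSpan("Client Read", Sampler.ALWAYS);
    try {
      int entriesRead = 123; // stands in for counting scanner entries
      scope.getSpan().addKVAnnotation(
          "Number of Entries Read".getBytes(UTF_8),
          String.valueOf(entriesRead).getBytes(UTF_8));
    } finally {
      scope.close();
    }
  }
}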
Example 6: waitForAckedSeqno
import org.apache.htrace.Sampler; // import the required package/class
private void waitForAckedSeqno(long seqno) throws IOException {
TraceScope scope = Trace.startSpan("waitForAckedSeqno", Sampler.NEVER);
try {
if (DFSClient.LOG.isDebugEnabled()) {
DFSClient.LOG.debug("Waiting for ack for: " + seqno);
}
long begin = Time.monotonicNow();
try {
synchronized (dataQueue) {
while (!isClosed()) {
checkClosed();
if (lastAckedSeqno >= seqno) {
break;
}
try {
dataQueue.wait(1000); // when we receive an ack, we notify on
// dataQueue
} catch (InterruptedException ie) {
throw new InterruptedIOException(
"Interrupted while waiting for data to be acknowledged by pipeline");
}
}
}
checkClosed();
      } catch (ClosedChannelException e) {
        // the stream was closed while waiting; return quietly instead of throwing
      }
long duration = Time.monotonicNow() - begin;
if (duration > dfsclientSlowLogThresholdMs) {
DFSClient.LOG.warn("Slow waitForAckedSeqno took " + duration
+ "ms (threshold=" + dfsclientSlowLogThresholdMs + "ms)");
}
} finally {
scope.close();
}
}
Developer: naver, Project: hadoop, Lines: 36, Source: DFSOutputStream.java
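The method above wraps a classic guarded wait in a trace span: the writer thread blocks on dataQueue until the acknowledged sequence number catches up, waking at least once a second to re-check. Below is a minimal sketch of that wait/notify contract in isolation; AckWaiter is an illustrative name, not part of DFSOutputStream.

public class AckWaiter {
  private final Object dataQueue = new Object();
  private long lastAckedSeqno = -1;

  // Blocks until seqno is acknowledged; wakes up at least once a
  // second to re-check, mirroring dataQueue.wait(1000) above.
  void waitForAckedSeqno(long seqno) throws InterruptedException {
    synchronized (dataQueue) {
      while (lastAckedSeqno < seqno) {
        dataQueue.wait(1000);
      }
    }
  }

  // Called by the ack-receiving thread; notifies waiters on dataQueue.
  void ack(long seqno) {
    synchronized (dataQueue) {
      lastAckedSeqno = seqno;
      dataQueue.notifyAll();
    }
  }

  public static void main(String[] args) throws InterruptedException {
    final AckWaiter w = new AckWaiter();
    new Thread(new Runnable() {
      public void run() { w.ack(42); }
    }).start();
    w.waitForAckedSeqno(42);
    System.out.println("seqno 42 acked");
  }
}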
Example 7: readChunk
import org.apache.htrace.Sampler; // import the required package/class
@Override
protected synchronized int readChunk(long pos, byte[] buf, int offset,
int len, byte[] checksumBuf)
throws IOException {
TraceScope scope =
Trace.startSpan("RemoteBlockReader#readChunk(" + blockId + ")",
Sampler.NEVER);
try {
return readChunkImpl(pos, buf, offset, len, checksumBuf);
} finally {
scope.close();
}
}
Developer: naver, Project: hadoop, Lines: 14, Source: RemoteBlockReader.java
Example 8: CacheDirectiveIterator
import org.apache.htrace.Sampler; // import the required package/class
public CacheDirectiveIterator(ClientProtocol namenode,
CacheDirectiveInfo filter, Sampler<?> traceSampler) {
super(0L);
this.namenode = namenode;
this.filter = filter;
this.traceSampler = traceSampler;
}
Developer: naver, Project: hadoop, Lines: 8, Source: CacheDirectiveIterator.java
Example 9: DFSInotifyEventInputStream
import org.apache.htrace.Sampler; // import the required package/class
DFSInotifyEventInputStream(Sampler traceSampler, ClientProtocol namenode,
long lastReadTxid) throws IOException {
this.traceSampler = traceSampler;
this.namenode = namenode;
this.it = Iterators.emptyIterator();
this.lastReadTxid = lastReadTxid;
}
Developer: naver, Project: hadoop, Lines: 8, Source: DFSInotifyEventInputStream.java
Example 10: testShortCircuitTraceHooks
import org.apache.htrace.Sampler; // import the required package/class
@Test
public void testShortCircuitTraceHooks() throws IOException {
assumeTrue(NativeCodeLoader.isNativeCodeLoaded() && !Path.WINDOWS);
conf = new Configuration();
conf.set(DFSConfigKeys.DFS_CLIENT_HTRACE_PREFIX +
SpanReceiverHost.SPAN_RECEIVERS_CONF_SUFFIX,
TestTracing.SetSpanReceiver.class.getName());
conf.setLong("dfs.blocksize", 100 * 1024);
conf.setBoolean(DFSConfigKeys.DFS_CLIENT_READ_SHORTCIRCUIT_KEY, true);
conf.setBoolean(DFSConfigKeys.DFS_CLIENT_READ_SHORTCIRCUIT_SKIP_CHECKSUM_KEY, false);
conf.set(DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY,
"testShortCircuitTraceHooks._PORT");
conf.set(DFSConfigKeys.DFS_CHECKSUM_TYPE_KEY, "CRC32C");
cluster = new MiniDFSCluster.Builder(conf)
.numDataNodes(1)
.build();
dfs = cluster.getFileSystem();
try {
DFSTestUtil.createFile(dfs, TEST_PATH, TEST_LENGTH, (short)1, 5678L);
TraceScope ts = Trace.startSpan("testShortCircuitTraceHooks", Sampler.ALWAYS);
FSDataInputStream stream = dfs.open(TEST_PATH);
byte buf[] = new byte[TEST_LENGTH];
IOUtils.readFully(stream, buf, 0, TEST_LENGTH);
stream.close();
ts.close();
String[] expectedSpanNames = {
"OpRequestShortCircuitAccessProto",
"ShortCircuitShmRequestProto"
};
TestTracing.assertSpanNamesFound(expectedSpanNames);
} finally {
dfs.close();
cluster.shutdown();
}
}
Developer: naver, Project: hadoop, Lines: 39, Source: TestTracingShortCircuitLocalRead.java
Example 11: readWithTracing
import org.apache.htrace.Sampler; // import the required package/class
public void readWithTracing() throws Exception {
String fileName = "testReadTraceHooks.dat";
writeTestFile(fileName);
long startTime = System.currentTimeMillis();
TraceScope ts = Trace.startSpan("testReadTraceHooks", Sampler.ALWAYS);
readTestFile(fileName);
ts.close();
long endTime = System.currentTimeMillis();
String[] expectedSpanNames = {
"testReadTraceHooks",
"org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations",
"ClientNamenodeProtocol#getBlockLocations",
"OpReadBlockProto"
};
assertSpanNamesFound(expectedSpanNames);
// The trace should last about the same amount of time as the test
Map<String, List<Span>> map = SetSpanReceiver.SetHolder.getMap();
Span s = map.get("testReadTraceHooks").get(0);
Assert.assertNotNull(s);
long spanStart = s.getStartTimeMillis();
long spanEnd = s.getStopTimeMillis();
Assert.assertTrue(spanStart - startTime < 100);
Assert.assertTrue(spanEnd - endTime < 100);
// There should only be one trace id as it should all be homed in the
// top trace.
for (Span span : SetSpanReceiver.SetHolder.spans.values()) {
Assert.assertEquals(ts.getSpan().getTraceId(), span.getTraceId());
}
SetSpanReceiver.SetHolder.spans.clear();
}
Developer: naver, Project: hadoop, Lines: 35, Source: TestTracing.java
Example 12: createTable
import org.apache.htrace.Sampler; // import the required package/class
private void createTable() throws IOException {
TraceScope createScope = null;
try {
createScope = Trace.startSpan("createTable", Sampler.ALWAYS);
util.createTable(tableName, familyName);
} finally {
if (createScope != null) createScope.close();
}
}
Developer: fengchen8086, Project: ditb, Lines: 10, Source: IntegrationTestSendTraceRequests.java
Example 13: deleteTable
import org.apache.htrace.Sampler; // import the required package/class
private void deleteTable() throws IOException {
TraceScope deleteScope = null;
try {
if (admin.tableExists(tableName)) {
deleteScope = Trace.startSpan("deleteTable", Sampler.ALWAYS);
util.deleteTable(tableName);
}
} finally {
if (deleteScope != null) deleteScope.close();
}
}
Developer: fengchen8086, Project: ditb, Lines: 13, Source: IntegrationTestSendTraceRequests.java
Example 14: insertData
import org.apache.htrace.Sampler; // import the required package/class
private LinkedBlockingQueue<Long> insertData() throws IOException, InterruptedException {
LinkedBlockingQueue<Long> rowKeys = new LinkedBlockingQueue<Long>(25000);
BufferedMutator ht = util.getConnection().getBufferedMutator(this.tableName);
byte[] value = new byte[300];
for (int x = 0; x < 5000; x++) {
TraceScope traceScope = Trace.startSpan("insertData", Sampler.ALWAYS);
try {
for (int i = 0; i < 5; i++) {
long rk = random.nextLong();
rowKeys.add(rk);
Put p = new Put(Bytes.toBytes(rk));
for (int y = 0; y < 10; y++) {
random.nextBytes(value);
p.add(familyName, Bytes.toBytes(random.nextLong()), value);
}
ht.mutate(p);
}
if ((x % 1000) == 0) {
admin.flush(tableName);
}
} finally {
traceScope.close();
}
}
admin.flush(tableName);
return rowKeys;
}
Developer: fengchen8086, Project: ditb, Lines: 28, Source: IntegrationTestSendTraceRequests.java
Example 15: EncryptionZoneIterator
import org.apache.htrace.Sampler; // import the required package/class
public EncryptionZoneIterator(ClientProtocol namenode,
Sampler<?> traceSampler) {
super(Long.valueOf(0));
this.namenode = namenode;
this.traceSampler = traceSampler;
}
Developer: naver, Project: hadoop, Lines: 7, Source: EncryptionZoneIterator.java
Example 16: CachePoolIterator
import org.apache.htrace.Sampler; // import the required package/class
public CachePoolIterator(ClientProtocol namenode, Sampler traceSampler) {
super("");
this.namenode = namenode;
this.traceSampler = traceSampler;
}
Developer: naver, Project: hadoop, Lines: 6, Source: CachePoolIterator.java
Example 17: writeWithTracing
import org.apache.htrace.Sampler; // import the required package/class
public void writeWithTracing() throws Exception {
long startTime = System.currentTimeMillis();
TraceScope ts = Trace.startSpan("testWriteTraceHooks", Sampler.ALWAYS);
writeTestFile("testWriteTraceHooks.dat");
long endTime = System.currentTimeMillis();
ts.close();
String[] expectedSpanNames = {
"testWriteTraceHooks",
"org.apache.hadoop.hdfs.protocol.ClientProtocol.create",
"ClientNamenodeProtocol#create",
"org.apache.hadoop.hdfs.protocol.ClientProtocol.fsync",
"ClientNamenodeProtocol#fsync",
"org.apache.hadoop.hdfs.protocol.ClientProtocol.complete",
"ClientNamenodeProtocol#complete",
"newStreamForCreate",
"DFSOutputStream#writeChunk",
"DFSOutputStream#close",
"dataStreamer",
"OpWriteBlockProto",
"org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock",
"ClientNamenodeProtocol#addBlock"
};
assertSpanNamesFound(expectedSpanNames);
// The trace should last about the same amount of time as the test
Map<String, List<Span>> map = SetSpanReceiver.SetHolder.getMap();
Span s = map.get("testWriteTraceHooks").get(0);
Assert.assertNotNull(s);
long spanStart = s.getStartTimeMillis();
long spanEnd = s.getStopTimeMillis();
// Spans homed in the top trace should have the same trace id.
// Spans having multiple parents (e.g. "dataStreamer" added by HDFS-7054)
// and their children are the exception.
String[] spansInTopTrace = {
"testWriteTraceHooks",
"org.apache.hadoop.hdfs.protocol.ClientProtocol.create",
"ClientNamenodeProtocol#create",
"org.apache.hadoop.hdfs.protocol.ClientProtocol.fsync",
"ClientNamenodeProtocol#fsync",
"org.apache.hadoop.hdfs.protocol.ClientProtocol.complete",
"ClientNamenodeProtocol#complete",
"newStreamForCreate",
"DFSOutputStream#writeChunk",
"DFSOutputStream#close",
};
for (String desc : spansInTopTrace) {
for (Span span : map.get(desc)) {
Assert.assertEquals(ts.getSpan().getTraceId(), span.getTraceId());
}
}
SetSpanReceiver.SetHolder.spans.clear();
}
Developer: naver, Project: hadoop, Lines: 55, Source: TestTracing.java
Example 18: testWaitForCachedReplicas
import org.apache.htrace.Sampler; // import the required package/class
@Test(timeout=120000)
public void testWaitForCachedReplicas() throws Exception {
FileSystemTestHelper helper = new FileSystemTestHelper();
GenericTestUtils.waitFor(new Supplier<Boolean>() {
@Override
public Boolean get() {
return ((namenode.getNamesystem().getCacheCapacity() ==
(NUM_DATANODES * CACHE_CAPACITY)) &&
(namenode.getNamesystem().getCacheUsed() == 0));
}
}, 500, 60000);
// Send a cache report referring to a bogus block. It is important that
// the NameNode be robust against this.
NamenodeProtocols nnRpc = namenode.getRpcServer();
DataNode dn0 = cluster.getDataNodes().get(0);
String bpid = cluster.getNamesystem().getBlockPoolId();
LinkedList<Long> bogusBlockIds = new LinkedList<Long> ();
bogusBlockIds.add(999999L);
nnRpc.cacheReport(dn0.getDNRegistrationForBP(bpid), bpid, bogusBlockIds);
Path rootDir = helper.getDefaultWorkingDirectory(dfs);
// Create the pool
final String pool = "friendlyPool";
nnRpc.addCachePool(new CachePoolInfo("friendlyPool"));
// Create some test files
final int numFiles = 2;
final int numBlocksPerFile = 2;
final List<String> paths = new ArrayList<String>(numFiles);
for (int i=0; i<numFiles; i++) {
Path p = new Path(rootDir, "testCachePaths-" + i);
FileSystemTestHelper.createFile(dfs, p, numBlocksPerFile,
(int)BLOCK_SIZE);
paths.add(p.toUri().getPath());
}
// Check the initial statistics at the namenode
waitForCachedBlocks(namenode, 0, 0, "testWaitForCachedReplicas:0");
// Cache and check each path in sequence
int expected = 0;
for (int i=0; i<numFiles; i++) {
CacheDirectiveInfo directive =
new CacheDirectiveInfo.Builder().
setPath(new Path(paths.get(i))).
setPool(pool).
build();
nnRpc.addCacheDirective(directive, EnumSet.noneOf(CacheFlag.class));
expected += numBlocksPerFile;
waitForCachedBlocks(namenode, expected, expected,
"testWaitForCachedReplicas:1");
}
// Check that the datanodes have the right cache values
DatanodeInfo[] live = dfs.getDataNodeStats(DatanodeReportType.LIVE);
assertEquals("Unexpected number of live nodes", NUM_DATANODES, live.length);
long totalUsed = 0;
for (DatanodeInfo dn : live) {
final long cacheCapacity = dn.getCacheCapacity();
final long cacheUsed = dn.getCacheUsed();
final long cacheRemaining = dn.getCacheRemaining();
assertEquals("Unexpected cache capacity", CACHE_CAPACITY, cacheCapacity);
assertEquals("Capacity not equal to used + remaining",
cacheCapacity, cacheUsed + cacheRemaining);
assertEquals("Remaining not equal to capacity - used",
cacheCapacity - cacheUsed, cacheRemaining);
totalUsed += cacheUsed;
}
assertEquals(expected*BLOCK_SIZE, totalUsed);
// Uncache and check each path in sequence
RemoteIterator<CacheDirectiveEntry> entries =
new CacheDirectiveIterator(nnRpc, null, Sampler.NEVER);
for (int i=0; i<numFiles; i++) {
CacheDirectiveEntry entry = entries.next();
nnRpc.removeCacheDirective(entry.getInfo().getId());
expected -= numBlocksPerFile;
waitForCachedBlocks(namenode, expected, expected,
"testWaitForCachedReplicas:2");
}
}
Developer: naver, Project: hadoop, Lines: 80, Source: TestCacheDirectives.java
Example 19: testTraceCreateTable
import org.apache.htrace.Sampler; // import the required package/class
@Test
public void testTraceCreateTable() throws Exception {
TraceScope tableCreationSpan = Trace.startSpan("creating table", Sampler.ALWAYS);
Table table;
try {
table = TEST_UTIL.createTable(TableName.valueOf("table"),
FAMILY_BYTES);
} finally {
tableCreationSpan.close();
}
// Some table creation is async. Need to make sure that everything is fully
// in place before checking to see if the spans are there.
TEST_UTIL.waitFor(1000, new Waiter.Predicate<Exception>() {
@Override
public boolean evaluate() throws Exception {
return rcvr.getSpans().size() >= 5;
}
});
Collection<Span> spans = rcvr.getSpans();
TraceTree traceTree = new TraceTree(spans);
Collection<Span> roots = traceTree.getSpansByParent().find(ROOT_SPAN_ID);
assertEquals(1, roots.size());
Span createTableRoot = roots.iterator().next();
assertEquals("creating table", createTableRoot.getDescription());
int createTableCount = 0;
for (Span s : traceTree.getSpansByParent().find(createTableRoot.getSpanId())) {
if (s.getDescription().startsWith("MasterService.CreateTable")) {
createTableCount++;
}
}
assertTrue(createTableCount >= 1);
assertTrue(traceTree.getSpansByParent().find(createTableRoot.getSpanId()).size() > 3);
assertTrue(spans.size() > 5);
Put put = new Put("row".getBytes());
put.add(FAMILY_BYTES, "col".getBytes(), "value".getBytes());
TraceScope putSpan = Trace.startSpan("doing put", Sampler.ALWAYS);
try {
table.put(put);
} finally {
putSpan.close();
}
spans = rcvr.getSpans();
traceTree = new TraceTree(spans);
roots = traceTree.getSpansByParent().find(ROOT_SPAN_ID);
assertEquals(2, roots.size());
Span putRoot = null;
for (Span root : roots) {
if (root.getDescription().equals("doing put")) {
putRoot = root;
}
}
assertNotNull(putRoot);
}
Developer: fengchen8086, Project: ditb, Lines: 67, Source: TestHTraceHooks.java
Example 20: testTraceCreateTable
import org.apache.htrace.Sampler; // import the required package/class
@Test
public void testTraceCreateTable() throws Exception {
TraceScope tableCreationSpan = Trace.startSpan("creating table", Sampler.ALWAYS);
Table table;
try {
table = TEST_UTIL.createTable(TableName.valueOf("table"),
FAMILY_BYTES);
} finally {
tableCreationSpan.close();
}
// Some table creation is async. Need to make sure that everything is fully
// in place before checking to see if the spans are there.
TEST_UTIL.waitFor(1000, new Waiter.Predicate<Exception>() {
@Override
public boolean evaluate() throws Exception {
return rcvr.getSpans().size() >= 5;
}
});
Collection<Span> spans = rcvr.getSpans();
TraceTree traceTree = new TraceTree(spans);
Collection<Span> roots = traceTree.getSpansByParent().find(Span.ROOT_SPAN_ID);
assertEquals(1, roots.size());
Span createTableRoot = roots.iterator().next();
assertEquals("creating table", createTableRoot.getDescription());
int createTableCount = 0;
for (Span s : traceTree.getSpansByParent().find(createTableRoot.getSpanId())) {
if (s.getDescription().startsWith("MasterService.CreateTable")) {
createTableCount++;
}
}
assertTrue(createTableCount >= 1);
assertTrue(traceTree.getSpansByParent().find(createTableRoot.getSpanId()).size() > 3);
assertTrue(spans.size() > 5);
Put put = new Put("row".getBytes());
put.add(FAMILY_BYTES, "col".getBytes(), "value".getBytes());
TraceScope putSpan = Trace.startSpan("doing put", Sampler.ALWAYS);
try {
table.put(put);
} finally {
putSpan.close();
}
spans = rcvr.getSpans();
traceTree = new TraceTree(spans);
roots = traceTree.getSpansByParent().find(Span.ROOT_SPAN_ID);
assertEquals(2, roots.size());
Span putRoot = null;
for (Span root : roots) {
if (root.getDescription().equals("doing put")) {
putRoot = root;
}
}
assertNotNull(putRoot);
}
Developer: grokcoder, Project: pbase, Lines: 67, Source: TestHTraceHooks.java
Note: the org.apache.htrace.Sampler examples in this article are compiled from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by various developers, and copyright remains with the original authors. Please consult each project's license before distributing or using the code; do not reproduce without permission.