Java IPCLoggerChannel Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel. If you are wondering what IPCLoggerChannel does, or how to use it in practice, the curated class examples below may help.



The IPCLoggerChannel class belongs to the org.apache.hadoop.hdfs.qjournal.client package. 14 code examples of the class are shown below, sorted by popularity by default.

Example 1: setup

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Before
public void setup() throws Exception {
  File editsDir = new File(MiniDFSCluster.getBaseDirectory() +
      File.separator + "TestJournalNode");
  FileUtil.fullyDelete(editsDir);
  
  conf.set(DFSConfigKeys.DFS_JOURNALNODE_EDITS_DIR_KEY,
      editsDir.getAbsolutePath());
  conf.set(DFSConfigKeys.DFS_JOURNALNODE_RPC_ADDRESS_KEY,
      "0.0.0.0:0");
  jn = new JournalNode();
  jn.setConf(conf);
  jn.start();
  journalId = "test-journalid-" + GenericTestUtils.uniqueSequenceId();
  journal = jn.getOrCreateJournal(journalId);
  journal.format(FAKE_NSINFO);
  
  ch = new IPCLoggerChannel(conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: TestJournalNode.java


Example 2: setup

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Before
public void setup() throws Exception {
  File editsDir = new File(MiniDFSCluster.getBaseDirectory(null) +
      File.separator + "TestJournalNode");
  FileUtil.fullyDelete(editsDir);
  
  conf.set(JournalConfigKeys.DFS_JOURNALNODE_DIR_KEY,
      editsDir.getAbsolutePath());
  conf.set(JournalConfigKeys.DFS_JOURNALNODE_RPC_ADDRESS_KEY,
      "0.0.0.0:0");    
  MiniJournalCluster.getFreeHttpPortAndUpdateConf(conf, true);
  
  jn = new JournalNode();
  jn.setConf(conf);
  jn.start();
  journalId = "test-journalid-" + QJMTestUtil.uniqueSequenceId();
  journal = jn.getOrCreateJournal(QuorumJournalManager
      .journalIdStringToBytes(journalId));
  journal.transitionJournal(FAKE_NSINFO, Transition.FORMAT, null);
  
  ch = new IPCLoggerChannel(conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 23, Source: TestJournalNode.java


Example 3: setupMock

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Before
public void setupMock() {
  conf.setInt(DFSConfigKeys.DFS_QJOURNAL_QUEUE_SIZE_LIMIT_KEY,
      LIMIT_QUEUE_SIZE_MB);

  // Channel to the mock object instead of a real IPC proxy.
  ch = new IPCLoggerChannel(conf, FAKE_NSINFO, JID, FAKE_ADDR) {
    @Override
    protected QJournalProtocol getProxy() throws IOException {
      return mockProxy;
    }
  };
  
  ch.setEpoch(1);
}
 
Developer: naver, Project: hadoop, Lines: 16, Source: TestIPCLoggerChannel.java


Example 4: testJournal

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Test(timeout=100000)
public void testJournal() throws Exception {
  MetricsRecordBuilder metrics = MetricsAsserts.getMetrics(
      journal.getMetricsForTests().getName());
  MetricsAsserts.assertCounter("BatchesWritten", 0L, metrics);
  MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 0L, metrics);
  MetricsAsserts.assertGauge("CurrentLagTxns", 0L, metrics);

  IPCLoggerChannel ch = new IPCLoggerChannel(
      conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
  ch.newEpoch(1).get();
  ch.setEpoch(1);
  ch.startLogSegment(1, NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION).get();
  ch.sendEdits(1L, 1, 1, "hello".getBytes(Charsets.UTF_8)).get();
  
  metrics = MetricsAsserts.getMetrics(
      journal.getMetricsForTests().getName());
  MetricsAsserts.assertCounter("BatchesWritten", 1L, metrics);
  MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 0L, metrics);
  MetricsAsserts.assertGauge("CurrentLagTxns", 0L, metrics);

  ch.setCommittedTxId(100L);
  ch.sendEdits(1L, 2, 1, "goodbye".getBytes(Charsets.UTF_8)).get();

  metrics = MetricsAsserts.getMetrics(
      journal.getMetricsForTests().getName());
  MetricsAsserts.assertCounter("BatchesWritten", 2L, metrics);
  MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 1L, metrics);
  MetricsAsserts.assertGauge("CurrentLagTxns", 98L, metrics);

}
 
Developer: naver, Project: hadoop, Lines: 32, Source: TestJournalNode.java
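The CurrentLagTxns value asserted above follows from simple arithmetic: after setCommittedTxId(100L), the highest transaction this journal has written is txid 2, so it lags 100 - 2 = 98 transactions behind. A minimal sketch of that relationship (the class and helper name are hypothetical, not part of the Hadoop API):

```java
// Hypothetical sketch of the arithmetic behind the "CurrentLagTxns" gauge:
// lag = committed txid known to the writer minus the highest txid this
// journal has actually written, floored at zero.
public class LagSketch {
    static long currentLagTxns(long committedTxId, long highestWrittenTxId) {
        return Math.max(0L, committedTxId - highestWrittenTxId);
    }

    public static void main(String[] args) {
        // Example 4 commits txid 100 and has written through txid 2.
        System.out.println(currentLagTxns(100L, 2L)); // prints 98
    }
}
```

This also explains why the earlier assertions see a lag of 0: before setCommittedTxId is called, the committed txid does not exceed what has been written.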


Example 5: testJournal

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Test(timeout=100000)
public void testJournal() throws Exception {
  MetricsRecordBuilder metrics = MetricsAsserts.getMetrics(
      journal.getMetricsForTests().getName());
  MetricsAsserts.assertCounter("BatchesWritten", 0L, metrics);
  MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 0L, metrics);
  MetricsAsserts.assertGauge("CurrentLagTxns", 0L, metrics);
  MetricsAsserts.assertGauge("LastJournalTimestamp", 0L, metrics);

  long beginTimestamp = System.currentTimeMillis();
  IPCLoggerChannel ch = new IPCLoggerChannel(
      conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
  ch.newEpoch(1).get();
  ch.setEpoch(1);
  ch.startLogSegment(1, NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION).get();
  ch.sendEdits(1L, 1, 1, "hello".getBytes(Charsets.UTF_8)).get();
  
  metrics = MetricsAsserts.getMetrics(
      journal.getMetricsForTests().getName());
  MetricsAsserts.assertCounter("BatchesWritten", 1L, metrics);
  MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 0L, metrics);
  MetricsAsserts.assertGauge("CurrentLagTxns", 0L, metrics);
  long lastJournalTimestamp = MetricsAsserts.getLongGauge(
      "LastJournalTimestamp", metrics);
  assertTrue(lastJournalTimestamp > beginTimestamp);
  beginTimestamp = lastJournalTimestamp;

  ch.setCommittedTxId(100L);
  ch.sendEdits(1L, 2, 1, "goodbye".getBytes(Charsets.UTF_8)).get();

  metrics = MetricsAsserts.getMetrics(
      journal.getMetricsForTests().getName());
  MetricsAsserts.assertCounter("BatchesWritten", 2L, metrics);
  MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 1L, metrics);
  MetricsAsserts.assertGauge("CurrentLagTxns", 98L, metrics);
  lastJournalTimestamp = MetricsAsserts.getLongGauge(
      "LastJournalTimestamp", metrics);
  assertTrue(lastJournalTimestamp > beginTimestamp);

}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 41, Source: TestJournalNode.java


Example 6: setupMock

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Before
public void setupMock() {
  conf.setInt(JournalConfigKeys.DFS_QJOURNAL_QUEUE_SIZE_LIMIT_KEY,
      LIMIT_QUEUE_SIZE_MB);

  // Channel to the mock object instead of a real IPC proxy.
  ch = new IPCLoggerChannel(conf, FAKE_NSINFO, JID, FAKE_ADDR) {
    @Override
    protected QJournalProtocol getProxy() throws IOException {
      return mockProxy;
    }
  };
  
  ch.setEpoch(1);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 16, Source: TestIPCLoggerChannel.java


Example 7: setup

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Before
public void setup() throws Exception {
  File editsDir = new File(MiniDFSCluster.getBaseDirectory(null)
      + File.separator + "TestJournalNode");
  FileUtil.fullyDelete(editsDir);

  conf.set(JournalConfigKeys.DFS_JOURNALNODE_DIR_KEY,
      editsDir.getAbsolutePath());
  conf.set(JournalConfigKeys.DFS_JOURNALNODE_RPC_ADDRESS_KEY, "0.0.0.0:0");
  int port = MiniJournalCluster.getFreeHttpPortAndUpdateConf(conf, true);
  httpAddress = "http://localhost:" + port;

  jn = new JournalNode();
  jn.setConf(conf);
  jn.start();
  journalId = "test-journalid-" + QJMTestUtil.uniqueSequenceId();
  journal = jn.getOrCreateJournal(QuorumJournalManager
      .journalIdStringToBytes(journalId));
  journal.transitionJournal(FAKE_NSINFO, Transition.FORMAT, null);
  journal.transitionImage(FAKE_NSINFO, Transition.FORMAT, null);

  ch = new IPCLoggerChannel(conf, FAKE_NSINFO, journalId,
      jn.getBoundIpcAddress());

  // this will setup the http port
  ch.getJournalState();
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 28, Source: TestJournalNodeImageManifest.java


Example 8: testJournal

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Test
public void testJournal() throws Exception {
  //MetricsRecordBuilder metrics = MetricsAsserts.getMetrics(
  //    journal.getMetricsForTests().getName());
  //MetricsAsserts.assertCounter("BatchesWritten", 0L, metrics);
  //MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 0L, metrics);
  //MetricsAsserts.assertGauge("CurrentLagTxns", 0L, metrics);

  IPCLoggerChannel ch = new IPCLoggerChannel(
      conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
  ch.newEpoch(1).get();
  ch.setEpoch(1);
  ch.startLogSegment(1).get();
  ch.sendEdits(1L, 1, 1, "hello".getBytes(Charsets.UTF_8)).get();
  
  //metrics = MetricsAsserts.getMetrics(
  //    journal.getMetricsForTests().getName());
  //MetricsAsserts.assertCounter("BatchesWritten", 1L, metrics);
  //MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 0L, metrics);
  //MetricsAsserts.assertGauge("CurrentLagTxns", 0L, metrics);

  ch.setCommittedTxId(100L, false);
  ch.sendEdits(1L, 2, 1, "goodbye".getBytes(Charsets.UTF_8)).get();

  //metrics = MetricsAsserts.getMetrics(
  //    journal.getMetricsForTests().getName());
  //MetricsAsserts.assertCounter("BatchesWritten", 2L, metrics);
  //MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 1L, metrics);
  //MetricsAsserts.assertGauge("CurrentLagTxns", 98L, metrics);

}
 
Developer: rhli, Project: hadoop-EAR, Lines: 32, Source: TestJournalNode.java


Example 9: testJournal

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Test(timeout=100000)
public void testJournal() throws Exception {
  MetricsRecordBuilder metrics = MetricsAsserts.getMetrics(
      journal.getMetricsForTests().getName());
  MetricsAsserts.assertCounter("BatchesWritten", 0L, metrics);
  MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 0L, metrics);
  MetricsAsserts.assertGauge("CurrentLagTxns", 0L, metrics);

  IPCLoggerChannel ch = new IPCLoggerChannel(
      conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
  ch.newEpoch(1).get();
  ch.setEpoch(1);
  ch.startLogSegment(1).get();
  ch.sendEdits(1L, 1, 1, "hello".getBytes(Charsets.UTF_8)).get();
  
  metrics = MetricsAsserts.getMetrics(
      journal.getMetricsForTests().getName());
  MetricsAsserts.assertCounter("BatchesWritten", 1L, metrics);
  MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 0L, metrics);
  MetricsAsserts.assertGauge("CurrentLagTxns", 0L, metrics);

  ch.setCommittedTxId(100L);
  ch.sendEdits(1L, 2, 1, "goodbye".getBytes(Charsets.UTF_8)).get();

  metrics = MetricsAsserts.getMetrics(
      journal.getMetricsForTests().getName());
  MetricsAsserts.assertCounter("BatchesWritten", 2L, metrics);
  MetricsAsserts.assertCounter("BatchesWrittenWhileLagging", 1L, metrics);
  MetricsAsserts.assertGauge("CurrentLagTxns", 98L, metrics);

}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 32, Source: TestJournalNode.java


Example 10: testHttpServer

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Test(timeout=100000)
public void testHttpServer() throws Exception {
  String urlRoot = jn.getHttpServerURI();
  
  // Check default servlets.
  String pageContents = DFSTestUtil.urlGet(new URL(urlRoot + "/jmx"));
  assertTrue("Bad contents: " + pageContents,
      pageContents.contains(
          "Hadoop:service=JournalNode,name=JvmMetrics"));

  // Create some edits on server side
  byte[] EDITS_DATA = QJMTestUtil.createTxnData(1, 3);
  IPCLoggerChannel ch = new IPCLoggerChannel(
      conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
  ch.newEpoch(1).get();
  ch.setEpoch(1);
  ch.startLogSegment(1, NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION).get();
  ch.sendEdits(1L, 1, 3, EDITS_DATA).get();
  ch.finalizeLogSegment(1, 3).get();

  // Attempt to retrieve via HTTP, ensure we get the data back
  // including the header we expected
  byte[] retrievedViaHttp = DFSTestUtil.urlGetBytes(new URL(urlRoot +
      "/getJournal?segmentTxId=1&jid=" + journalId));
  byte[] expected = Bytes.concat(
          Ints.toByteArray(HdfsConstants.NAMENODE_LAYOUT_VERSION),
          (new byte[] { 0, 0, 0, 0 }), // layout flags section
          EDITS_DATA);

  assertArrayEquals(expected, retrievedViaHttp);
  
  // Attempt to fetch a non-existent file, check that we get an
  // error status code
  URL badUrl = new URL(urlRoot + "/getJournal?segmentTxId=12345&jid=" + journalId);
  HttpURLConnection connection = (HttpURLConnection)badUrl.openConnection();
  try {
    assertEquals(404, connection.getResponseCode());
  } finally {
    connection.disconnect();
  }
}
 
Developer: naver, Project: hadoop, Lines: 42, Source: TestJournalNode.java
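The expected byte stream in the test above is framed as: a 4-byte big-endian layout-version integer (what Guava's Ints.toByteArray produces), a 4-byte all-zero layout-flags section, then the raw edits data. A self-contained sketch of that framing (the class name and the -60 version value are placeholders, not the real NAMENODE_LAYOUT_VERSION):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Sketch of the getJournal response framing checked in the test above:
// [4-byte big-endian layout version][4 zero bytes: layout flags][edits data].
public class JournalFrameSketch {
    static byte[] frame(int layoutVersion, byte[] editsData) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + editsData.length);
        buf.putInt(layoutVersion);        // ByteBuffer is big-endian by default
        buf.put(new byte[] {0, 0, 0, 0}); // empty layout-flags section
        buf.put(editsData);               // raw edit-log bytes
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] edits = {1, 2, 3};
        byte[] framed = frame(-60, edits);      // -60 is a placeholder version
        System.out.println(framed.length);      // 4 + 4 + 3 = 11
        System.out.println(Arrays.toString(
            Arrays.copyOfRange(framed, 8, 11))); // the edits payload
    }
}
```

The assertArrayEquals in the test is comparing exactly this concatenation against what the /getJournal servlet returns.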


Example 11: testHttpServer

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Test(timeout=100000)
public void testHttpServer() throws Exception {
  String urlRoot = jn.getHttpServerURI();
  
  // Check default servlets.
  String pageContents = DFSTestUtil.urlGet(new URL(urlRoot + "/jmx"));
  assertTrue("Bad contents: " + pageContents,
      pageContents.contains(
          "Hadoop:service=JournalNode,name=JvmMetrics"));

  // Create some edits on server side
  byte[] EDITS_DATA = QJMTestUtil.createTxnData(1, 3);
  IPCLoggerChannel ch = new IPCLoggerChannel(
      conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
  ch.newEpoch(1).get();
  ch.setEpoch(1);
  ch.startLogSegment(1, NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION).get();
  ch.sendEdits(1L, 1, 3, EDITS_DATA).get();
  ch.finalizeLogSegment(1, 3).get();

  // Attempt to retrieve via HTTP, ensure we get the data back
  // including the header we expected
  byte[] retrievedViaHttp = DFSTestUtil.urlGetBytes(new URL(urlRoot +
      "/getJournal?segmentTxId=1&jid=" + journalId));
  byte[] expected = Bytes.concat(
          Ints.toByteArray(HdfsServerConstants.NAMENODE_LAYOUT_VERSION),
          (new byte[] { 0, 0, 0, 0 }), // layout flags section
          EDITS_DATA);

  assertArrayEquals(expected, retrievedViaHttp);
  
  // Attempt to fetch a non-existent file, check that we get an
  // error status code
  URL badUrl = new URL(urlRoot + "/getJournal?segmentTxId=12345&jid=" + journalId);
  HttpURLConnection connection = (HttpURLConnection)badUrl.openConnection();
  try {
    assertEquals(404, connection.getResponseCode());
  } finally {
    connection.disconnect();
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 42, Source: TestJournalNode.java


Example 12: testHttpServer

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Test(timeout=100000)
public void testHttpServer() throws Exception {
  String urlRoot = jn.getHttpServerURI();
  
  // Check default servlets.
  String pageContents = DFSTestUtil.urlGet(new URL(urlRoot + "/jmx"));
  assertTrue("Bad contents: " + pageContents,
      pageContents.contains(
          "Hadoop:service=JournalNode,name=JvmMetrics"));
  
  // Check JSP page.
  pageContents = DFSTestUtil.urlGet(
      new URL(urlRoot + "/journalstatus.jsp"));
  assertTrue(pageContents.contains("JournalNode"));

  // Create some edits on server side
  byte[] EDITS_DATA = QJMTestUtil.createTxnData(1, 3);
  IPCLoggerChannel ch = new IPCLoggerChannel(
      conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
  ch.newEpoch(1).get();
  ch.setEpoch(1);
  ch.startLogSegment(1, NameNodeLayoutVersion.CURRENT_LAYOUT_VERSION).get();
  ch.sendEdits(1L, 1, 3, EDITS_DATA).get();
  ch.finalizeLogSegment(1, 3).get();

  // Attempt to retrieve via HTTP, ensure we get the data back
  // including the header we expected
  byte[] retrievedViaHttp = DFSTestUtil.urlGetBytes(new URL(urlRoot +
      "/getJournal?segmentTxId=1&jid=" + journalId));
  byte[] expected = Bytes.concat(
          Ints.toByteArray(HdfsConstants.NAMENODE_LAYOUT_VERSION),
          (new byte[] { 0, 0, 0, 0 }), // layout flags section
          EDITS_DATA);

  assertArrayEquals(expected, retrievedViaHttp);
  
  // Attempt to fetch a non-existent file, check that we get an
  // error status code
  URL badUrl = new URL(urlRoot + "/getJournal?segmentTxId=12345&jid=" + journalId);
  HttpURLConnection connection = (HttpURLConnection)badUrl.openConnection();
  try {
    assertEquals(404, connection.getResponseCode());
  } finally {
    connection.disconnect();
  }
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 47, Source: TestJournalNode.java


Example 13: testHttpServer

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Test
public void testHttpServer() throws Exception {
  InetSocketAddress addr = jn.getBoundHttpAddress();
  assertTrue(addr.getPort() > 0);
  
  String urlRoot = "http://localhost:" + addr.getPort();
  
  // TODO other servlets
  
  // Create some edits on server side
  int numTxns = 10;
  byte[] EDITS_DATA = QJMTestUtil.createTxnData(1, numTxns);
  IPCLoggerChannel ch = new IPCLoggerChannel(
      conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
  ch.newEpoch(1).get();
  ch.setEpoch(1);
  ch.startLogSegment(1).get();
  ch.sendEdits(1L, 1, numTxns, EDITS_DATA).get();
  ch.finalizeLogSegment(1, numTxns).get();

  // Attempt to retrieve via HTTP, ensure we get the data back
  // including the header we expected
  byte[] retrievedViaHttp = QJMTestUtil.urlGetBytes(new URL(urlRoot +
      "/getJournal?segmentTxId=1&position=0&jid=" + journalId));
  byte[] expected = Bytes.concat(
          Ints.toByteArray(FSConstants.LAYOUT_VERSION),
          EDITS_DATA);
  
  // retrieve partial edits
  int pos = 100;
  byte[] expectedPart = new byte[expected.length - pos];
  System.arraycopy(expected, pos, expectedPart, 0, expectedPart.length);
  retrievedViaHttp = QJMTestUtil.urlGetBytes(new URL(urlRoot +
      "/getJournal?segmentTxId=1&position=" + pos + "&jid=" + journalId));
  assertArrayEquals(expectedPart, retrievedViaHttp);
  
  // Attempt to fetch a non-existent file, check that we get an
  // error status code
  URL badUrl = new URL(urlRoot + "/getJournal?segmentTxId=12345&position=0&jid=" + journalId);
  HttpURLConnection connection = (HttpURLConnection)badUrl.openConnection();
  try {
    assertEquals(404, connection.getResponseCode());
  } finally {
    connection.disconnect();
  }
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 47, Source: TestJournalNode.java
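Example 13 additionally fetches a partial segment by passing position=100, and checks the response against the tail of the full stream. The System.arraycopy slice it builds is equivalent to Arrays.copyOfRange; a minimal sketch of that check (the class and method names are hypothetical):

```java
import java.util.Arrays;

// Sketch of the partial-read check in Example 13: the bytes returned for
// position=pos should equal the full stream with its first pos bytes dropped.
public class PartialReadSketch {
    static byte[] tailFrom(byte[] full, int pos) {
        // Same result as the test's new byte[full.length - pos] buffer
        // filled via System.arraycopy(full, pos, part, 0, part.length).
        return Arrays.copyOfRange(full, pos, full.length);
    }

    public static void main(String[] args) {
        byte[] full = new byte[150];
        for (int i = 0; i < full.length; i++) full[i] = (byte) i;
        byte[] part = tailFrom(full, 100);
        System.out.println(part.length); // 50
        System.out.println(part[0]);     // 100
    }
}
```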


Example 14: testHttpServer

import org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel; // import the required package/class
@Test(timeout=100000)
public void testHttpServer() throws Exception {
  InetSocketAddress addr = jn.getBoundHttpAddress();
  assertTrue(addr.getPort() > 0);
  
  String urlRoot = "http://localhost:" + addr.getPort();
  
  // Check default servlets.
  String pageContents = DFSTestUtil.urlGet(new URL(urlRoot + "/jmx"));
  assertTrue("Bad contents: " + pageContents,
      pageContents.contains(
          "Hadoop:service=JournalNode,name=JvmMetrics"));
  
  // Check JSP page.
  pageContents = DFSTestUtil.urlGet(
      new URL(urlRoot + "/journalstatus.jsp"));
  assertTrue(pageContents.contains("JournalNode"));

  // Create some edits on server side
  byte[] EDITS_DATA = QJMTestUtil.createTxnData(1, 3);
  IPCLoggerChannel ch = new IPCLoggerChannel(
      conf, FAKE_NSINFO, journalId, jn.getBoundIpcAddress());
  ch.newEpoch(1).get();
  ch.setEpoch(1);
  ch.startLogSegment(1).get();
  ch.sendEdits(1L, 1, 3, EDITS_DATA).get();
  ch.finalizeLogSegment(1, 3).get();

  // Attempt to retrieve via HTTP, ensure we get the data back
  // including the header we expected
  byte[] retrievedViaHttp = DFSTestUtil.urlGetBytes(new URL(urlRoot +
      "/getJournal?segmentTxId=1&jid=" + journalId));
  byte[] expected = Bytes.concat(
          Ints.toByteArray(HdfsConstants.LAYOUT_VERSION),
          EDITS_DATA);

  assertArrayEquals(expected, retrievedViaHttp);
  
  // Attempt to fetch a non-existent file, check that we get an
  // error status code
  URL badUrl = new URL(urlRoot + "/getJournal?segmentTxId=12345&jid=" + journalId);
  HttpURLConnection connection = (HttpURLConnection)badUrl.openConnection();
  try {
    assertEquals(404, connection.getResponseCode());
  } finally {
    connection.disconnect();
  }
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 49, Source: TestJournalNode.java



Note: The org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel examples in this article were collected from source code and documentation platforms such as GitHub/MSDocs. The snippets were selected from open-source projects contributed by their respective developers, and copyright remains with the original authors. Consult each project's License before redistributing or reusing the code; do not reproduce without permission.

