
Java RpcResponseHeaderProto Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto. If you are wondering what RpcResponseHeaderProto is for or how to use it in your own code, the curated class examples below should help.



The RpcResponseHeaderProto class belongs to the org.apache.hadoop.ipc.protobuf.RpcHeaderProtos package. Fifteen code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; your ratings help the system recommend better Java code examples.

Example 1: checkResponse

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
/** Check the rpc response header. */
void checkResponse(RpcResponseHeaderProto header) throws IOException {
  if (header == null) {
    throw new EOFException("Response is null.");
  }
  if (header.hasClientId()) {
    // check client IDs
    final byte[] id = header.getClientId().toByteArray();
    if (!Arrays.equals(id, RpcConstants.DUMMY_CLIENT_ID)) {
      if (!Arrays.equals(id, clientId)) {
        throw new IOException("Client IDs not matched: local ID="
            + StringUtils.byteToHexString(clientId) + ", ID in response="
            + StringUtils.byteToHexString(header.getClientId().toByteArray()));
      }
    }
  }
}
 
Developer: nucypher | Project: hadoop-oss | Lines: 18 | Source: Client.java


Example 2: checkResponse

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
/** Check the rpc response header. */
void checkResponse(RpcResponseHeaderProto header) throws IOException {
  if (header == null) {
    throw new IOException("Response is null.");
  }
  if (header.hasClientId()) {
    // check client IDs
    final byte[] id = header.getClientId().toByteArray();
    if (!Arrays.equals(id, RpcConstants.DUMMY_CLIENT_ID)) {
      if (!Arrays.equals(id, clientId)) {
        throw new IOException("Client IDs not matched: local ID="
            + StringUtils.byteToHexString(clientId) + ", ID in response="
            + StringUtils.byteToHexString(header.getClientId().toByteArray()));
      }
    }
  }
}
 
Developer: Nextzero | Project: hadoop-2.6.0-cdh5.4.3 | Lines: 18 | Source: Client.java


Example 3: checkResponse

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
/** Check the rpc response header. */
void checkResponse(RpcResponseHeaderProto header) throws IOException {
  if (header == null) {
    throw new IOException("Response is null.");
  }
  if (header.hasClientId()) {
    // check client IDs
    final byte[] id = header.getClientId().toByteArray();
    if (!Arrays.equals(id, RpcConstants.DUMMY_CLIENT_ID)) {
      if (!Arrays.equals(id, clientId)) {
        throw new IOException("Client IDs not matched: local ID="
            + StringUtils.byteToHexString(clientId) + ", ID in response="
            + StringUtils.byteToHexString(header.getClientId().toByteArray()));
      }
    }
  }
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 18 | Source: Client.java
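The three checkResponse variants above all apply the same rule: a dummy client ID is always accepted, and any other ID in the response must match the locally generated one. Below is a minimal, dependency-free sketch of that comparison. DUMMY_CLIENT_ID and the toHex helper are stand-ins for RpcConstants.DUMMY_CLIENT_ID and StringUtils.byteToHexString; the dummy ID is assumed here to be an empty array.

```java
import java.io.IOException;
import java.util.Arrays;

public class ClientIdCheck {
    // Stand-in for RpcConstants.DUMMY_CLIENT_ID (assumed to be an empty array).
    static final byte[] DUMMY_CLIENT_ID = new byte[0];

    // Stand-in for StringUtils.byteToHexString.
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    /** Mirrors the checkResponse logic: dummy IDs pass, mismatches fail. */
    static void checkClientId(byte[] local, byte[] inResponse) throws IOException {
        if (Arrays.equals(inResponse, DUMMY_CLIENT_ID)) {
            return; // dummy ID: always accepted
        }
        if (!Arrays.equals(inResponse, local)) {
            throw new IOException("Client IDs not matched: local ID="
                + toHex(local) + ", ID in response=" + toHex(inResponse));
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] local = {1, 2, 3};
        checkClientId(local, local);           // matching ID: ok
        checkClientId(local, DUMMY_CLIENT_ID); // dummy ID: ok
        boolean threw = false;
        try {
            checkClientId(local, new byte[] {9, 9});
        } catch (IOException e) {
            threw = true;
        }
        System.out.println("mismatch rejected: " + threw);
    }
}
```

The only difference between the variants is the exception thrown on a null header (EOFException in example 1, IOException in examples 2 and 3).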


Example 4: setupResponseForWritable

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
private byte[] setupResponseForWritable(
    RpcResponseHeaderProto header, Writable rv) throws IOException {
  ResponseBuffer buf = responseBuffer.get().reset();
  try {
    RpcWritable.wrap(header).writeTo(buf);
    if (rv != null) {
      RpcWritable.wrap(rv).writeTo(buf);
    }
    return buf.toByteArray();
  } finally {
    // Discard a large buf and reset it back to smaller size
    // to free up heap.
    if (buf.capacity() > maxRespSize) {
      buf.setCapacity(INITIAL_RESP_BUF_SIZE);
    }
  }
}
 
Developer: hopshadoop | Project: hops | Lines: 18 | Source: Server.java


Example 5: setupResponseForProtobuf

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
private byte[] setupResponseForProtobuf(
    RpcResponseHeaderProto header, Writable rv) throws IOException {
  Message payload = (rv != null)
      ? ((RpcWritable.ProtobufWrapper)rv).getMessage() : null;
  int length = getDelimitedLength(header);
  if (payload != null) {
    length += getDelimitedLength(payload);
  }
  byte[] buf = new byte[length + 4];
  CodedOutputStream cos = CodedOutputStream.newInstance(buf);
  // CodedOutputStream only writes fixed-width ints little-endian,
  // so emit the big-endian length prefix byte by byte
  cos.writeRawByte((byte)((length >>> 24) & 0xFF));
  cos.writeRawByte((byte)((length >>> 16) & 0xFF));
  cos.writeRawByte((byte)((length >>>  8) & 0xFF));
  cos.writeRawByte((byte)((length >>>  0) & 0xFF));
  cos.writeRawVarint32(header.getSerializedSize());
  header.writeTo(cos);
  if (payload != null) {
    cos.writeRawVarint32(payload.getSerializedSize());
    payload.writeTo(cos);
  }
  return buf;
}
 
Developer: hopshadoop | Project: hops | Lines: 24 | Source: Server.java
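Example 5 frames the response as a 4-byte big-endian total length followed by the varint-delimited header and (optionally) the varint-delimited payload. The sketch below reproduces that length arithmetic; getDelimitedLength is not shown in the snippet, so it is reconstructed here under the assumption that it returns the serialized size plus the size of its varint prefix.

```java
public class DelimitedFraming {
    /** Number of bytes needed to encode n as an unsigned varint. */
    static int varintSize(int n) {
        int size = 1;
        while ((n & ~0x7F) != 0) {
            size++;
            n >>>= 7;
        }
        return size;
    }

    /** Assumed shape of getDelimitedLength: varint prefix + message body. */
    static int delimitedLength(int serializedSize) {
        return varintSize(serializedSize) + serializedSize;
    }

    /** The 4-byte big-endian length prefix, written byte by byte as in example 5. */
    static byte[] bigEndianPrefix(int length) {
        return new byte[] {
            (byte) ((length >>> 24) & 0xFF),
            (byte) ((length >>> 16) & 0xFF),
            (byte) ((length >>>  8) & 0xFF),
            (byte) (length & 0xFF),
        };
    }

    public static void main(String[] args) {
        int headerSize = 20, payloadSize = 300;
        // 20 needs a 1-byte varint, 300 needs 2 bytes: 21 + 302 = 323
        int total = delimitedLength(headerSize) + delimitedLength(payloadSize);
        System.out.println("total=" + total);
        System.out.println("prefix[3]=" + (bigEndianPrefix(total)[3] & 0xFF));
    }
}
```

Note that the buffer in example 5 is sized `length + 4` precisely because the 4-byte prefix is not counted in `length` itself.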


Example 6: wrapWithSasl

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
private void wrapWithSasl(RpcCall call) throws IOException {
  if (call.connection.saslServer != null) {
    byte[] token = call.rpcResponse.array();
    // synchronization may be needed since there can be multiple Handler
    // threads using saslServer to wrap responses.
    synchronized (call.connection.saslServer) {
      token = call.connection.saslServer.wrap(token, 0, token.length);
    }
    if (LOG.isDebugEnabled())
      LOG.debug("Adding saslServer wrapped token of size " + token.length
          + " as call response.");
    // rebuild with sasl header and payload
    RpcResponseHeaderProto saslHeader = RpcResponseHeaderProto.newBuilder()
        .setCallId(AuthProtocol.SASL.callId)
        .setStatus(RpcStatusProto.SUCCESS)
        .build();
    RpcSaslProto saslMessage = RpcSaslProto.newBuilder()
        .setState(SaslState.WRAP)
        .setToken(ByteString.copyFrom(token))
        .build();
    setupResponse(call, saslHeader, RpcWritable.wrap(saslMessage));
  }
}
 
Developer: hopshadoop | Project: hops | Lines: 24 | Source: Server.java


Example 7: readNextRpcPacket

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
private void readNextRpcPacket() throws IOException {
  LOG.debug("reading next wrapped RPC packet");
  DataInputStream dis = new DataInputStream(in);
  int rpcLen = dis.readInt();
  byte[] rpcBuf = new byte[rpcLen];
  dis.readFully(rpcBuf);

  // decode the RPC header
  ByteArrayInputStream bis = new ByteArrayInputStream(rpcBuf);
  RpcResponseHeaderProto.Builder headerBuilder =
      RpcResponseHeaderProto.newBuilder();
  headerBuilder.mergeDelimitedFrom(bis);

  boolean isWrapped = false;
  // Must be SASL wrapped, verify and decode.
  if (headerBuilder.getCallId() == AuthProtocol.SASL.callId) {
    RpcSaslProto.Builder saslMessage = RpcSaslProto.newBuilder();
    saslMessage.mergeDelimitedFrom(bis);
    if (saslMessage.getState() == SaslState.WRAP) {
      isWrapped = true;
      byte[] token = saslMessage.getToken().toByteArray();
      if (LOG.isDebugEnabled()) {
        LOG.debug("unwrapping token of length:" + token.length);
      }
      token = saslClient.unwrap(token, 0, token.length);
      unwrappedRpcBuffer = ByteBuffer.wrap(token);
    }
  }
  if (!isWrapped) {
    throw new SaslException("Server sent non-wrapped response");
  }
}
 
Developer: nucypher | Project: hadoop-oss | Lines: 33 | Source: SaslRpcClient.java
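readNextRpcPacket starts with the standard length-prefixed read: readInt for the packet length, then readFully into a buffer of exactly that size. The following self-contained round trip sketches that framing pattern in isolation (no SASL or protobuf involved):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;

public class FrameRoundTrip {
    /** Write one packet the way the server sends it: int length, then bytes. */
    static byte[] frame(byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(bos);
        dos.writeInt(payload.length);
        dos.write(payload);
        return bos.toByteArray();
    }

    /** Read one packet back, mirroring readNextRpcPacket's readInt/readFully. */
    static byte[] readFrame(DataInputStream dis) throws IOException {
        int rpcLen = dis.readInt();
        byte[] rpcBuf = new byte[rpcLen];
        dis.readFully(rpcBuf); // blocks until the whole packet has arrived
        return rpcBuf;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = {10, 20, 30};
        DataInputStream dis = new DataInputStream(
            new ByteArrayInputStream(frame(payload)));
        System.out.println(Arrays.equals(payload, readFrame(dis)));
    }
}
```

readFully (rather than read) matters here: a plain read may return fewer bytes than requested, which would leave the buffer partially filled.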


Example 8: wrapWithSasl

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
private static void wrapWithSasl(ByteArrayOutputStream response, Call call)
    throws IOException {
  if (call.connection.saslServer != null) {
    byte[] token = call.rpcResponse.array();
    // synchronization may be needed since there can be multiple Handler
    // threads using saslServer to wrap responses.
    synchronized (call.connection.saslServer) {
      token = call.connection.saslServer.wrap(token, 0, token.length);
    }
    if (LOG.isDebugEnabled())
      LOG.debug("Adding saslServer wrapped token of size " + token.length
          + " as call response.");
    response.reset();
    // rebuild with sasl header and payload
    RpcResponseHeaderProto saslHeader = RpcResponseHeaderProto.newBuilder()
        .setCallId(AuthProtocol.SASL.callId)
        .setStatus(RpcStatusProto.SUCCESS)
        .build();
    RpcSaslProto saslMessage = RpcSaslProto.newBuilder()
        .setState(SaslState.WRAP)
        .setToken(ByteString.copyFrom(token, 0, token.length))
        .build();
    RpcResponseMessageWrapper saslResponse =
        new RpcResponseMessageWrapper(saslHeader, saslMessage);

    DataOutputStream out = new DataOutputStream(response);
    out.writeInt(saslResponse.getLength());
    saslResponse.write(out);
  }
}
 
Developer: nucypher | Project: hadoop-oss | Lines: 31 | Source: Server.java


Example 9: readNextRpcPacket

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
private void readNextRpcPacket() throws IOException {
  LOG.debug("reading next wrapped RPC packet");
  DataInputStream dis = new DataInputStream(in);
  int rpcLen = dis.readInt();
  byte[] rpcBuf = new byte[rpcLen];
  dis.readFully(rpcBuf);
  
  // decode the RPC header
  ByteArrayInputStream bis = new ByteArrayInputStream(rpcBuf);
  RpcResponseHeaderProto.Builder headerBuilder =
      RpcResponseHeaderProto.newBuilder();
  headerBuilder.mergeDelimitedFrom(bis);
  
  boolean isWrapped = false;
  // Must be SASL wrapped, verify and decode.
  if (headerBuilder.getCallId() == AuthProtocol.SASL.callId) {
    RpcSaslProto.Builder saslMessage = RpcSaslProto.newBuilder();
    saslMessage.mergeDelimitedFrom(bis);
    if (saslMessage.getState() == SaslState.WRAP) {
      isWrapped = true;
      byte[] token = saslMessage.getToken().toByteArray();
      if (LOG.isDebugEnabled()) {
        LOG.debug("unwrapping token of length:" + token.length);
      }
      token = saslClient.unwrap(token, 0, token.length);
      unwrappedRpcBuffer = ByteBuffer.wrap(token);
    }
  }
  if (!isWrapped) {
    throw new SaslException("Server sent non-wrapped response");
  }
}
 
Developer: naver | Project: hadoop | Lines: 33 | Source: SaslRpcClient.java


Example 10: wrapWithSasl

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
private void wrapWithSasl(ByteArrayOutputStream response, Call call)
    throws IOException {
  if (call.connection.saslServer != null) {
    byte[] token = response.toByteArray();
    // synchronization may be needed since there can be multiple Handler
    // threads using saslServer to wrap responses.
    synchronized (call.connection.saslServer) {
      token = call.connection.saslServer.wrap(token, 0, token.length);
    }
    if (LOG.isDebugEnabled())
      LOG.debug("Adding saslServer wrapped token of size " + token.length
          + " as call response.");
    response.reset();
    // rebuild with sasl header and payload
    RpcResponseHeaderProto saslHeader = RpcResponseHeaderProto.newBuilder()
        .setCallId(AuthProtocol.SASL.callId)
        .setStatus(RpcStatusProto.SUCCESS)
        .build();
    RpcSaslProto saslMessage = RpcSaslProto.newBuilder()
        .setState(SaslState.WRAP)
        .setToken(ByteString.copyFrom(token, 0, token.length))
        .build();
    RpcResponseMessageWrapper saslResponse =
        new RpcResponseMessageWrapper(saslHeader, saslMessage);

    DataOutputStream out = new DataOutputStream(response);
    out.writeInt(saslResponse.getLength());
    saslResponse.write(out);
  }
}
 
Developer: naver | Project: hadoop | Lines: 31 | Source: Server.java
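Examples 6, 8, and 10 all guard saslServer.wrap with a synchronized block because several Handler threads may share one SaslServer, which is not thread-safe. A minimal sketch of that pattern follows; the Wrapper class is a stand-in for SaslServer (its XOR "wrap" and counter are placeholders, not real SASL behavior):

```java
public class SharedWrapper {
    /** Stand-in for a non-thread-safe SaslServer: wrap() mutates shared state. */
    static class Wrapper {
        private int counter = 0;
        byte[] wrap(byte[] token) {
            counter++;                 // shared mutable state -> needs a lock
            byte[] out = token.clone();
            for (int i = 0; i < out.length; i++) out[i] ^= 0x5A;
            return out;
        }
        int count() { return counter; }
    }

    public static void main(String[] args) throws InterruptedException {
        final Wrapper wrapper = new Wrapper();
        Runnable handler = () -> {
            for (int i = 0; i < 1000; i++) {
                synchronized (wrapper) {  // same locking pattern as the examples
                    wrapper.wrap(new byte[] {1, 2, 3});
                }
            }
        };
        Thread a = new Thread(handler), b = new Thread(handler);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("wrapped " + wrapper.count() + " tokens");
    }
}
```

Locking on the wrapper object itself, as the Hadoop code does with call.connection.saslServer, keeps the critical section scoped to one connection rather than serializing all connections behind a single global lock.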


Example 11: setupResponse

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
/**
 * Setup the response for an IPC call.
 *
 * @param call {@link Call} to which we are setting up the response
 * @param status status of the IPC call
 * @param erCode detailed error code, if the call failed
 * @param rv return value for the IPC call, if the call was successful
 * @param errorClass error class, if the call failed
 * @param error error message, if the call failed
 * @throws IOException
 */
private void setupResponse(
    RpcCall call, RpcStatusProto status, RpcErrorCodeProto erCode,
    Writable rv, String errorClass, String error)
        throws IOException {
  // fatal responses will cause the reader to close the connection.
  if (status == RpcStatusProto.FATAL) {
    call.connection.setShouldClose();
  }
  RpcResponseHeaderProto.Builder headerBuilder =
      RpcResponseHeaderProto.newBuilder();
  headerBuilder.setClientId(ByteString.copyFrom(call.clientId));
  headerBuilder.setCallId(call.callId);
  headerBuilder.setRetryCount(call.retryCount);
  headerBuilder.setStatus(status);
  headerBuilder.setServerIpcVersionNum(CURRENT_VERSION);

  if (status == RpcStatusProto.SUCCESS) {
    RpcResponseHeaderProto header = headerBuilder.build();
    try {
      setupResponse(call, header, rv);
    } catch (Throwable t) {
      LOG.warn("Error serializing call response for call " + call, t);
      // Call back to same function - this is OK since the
      // buffer is reset at the top, and since status is changed
      // to ERROR it won't infinite loop.
      setupResponse(call, RpcStatusProto.ERROR,
          RpcErrorCodeProto.ERROR_SERIALIZING_RESPONSE,
          null, t.getClass().getName(),
          StringUtils.stringifyException(t));
      return;
    }
  } else { // Rpc Failure
    headerBuilder.setExceptionClassName(errorClass);
    headerBuilder.setErrorMsg(error);
    headerBuilder.setErrorDetail(erCode);
    setupResponse(call, headerBuilder.build(), null);
  }
}
 
Developer: hopshadoop | Project: hops | Lines: 50 | Source: Server.java


Example 12: RpcResponseMessageWrapper

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
public RpcResponseMessageWrapper(
    RpcResponseHeaderProto responseHeader, Message theRequest) {
  super(responseHeader, theRequest);
}
 
Developer: nucypher | Project: hadoop-oss | Lines: 5 | Source: ProtobufRpcEngine.java


Example 13: parseHeaderFrom

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
@Override
RpcResponseHeaderProto parseHeaderFrom(byte[] bytes) throws IOException {
  return RpcResponseHeaderProto.parseFrom(bytes);
}
 
Developer: nucypher | Project: hadoop-oss | Lines: 5 | Source: ProtobufRpcEngine.java


Example 14: testCallIdAndRetry

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
/**
 * Test if
 * (1) the rpc server uses the call id/retry provided by the rpc client, and
 * (2) the rpc client receives the same call id/retry from the rpc server.
 */
@Test(timeout=60000)
public void testCallIdAndRetry() throws IOException {
  final CallInfo info = new CallInfo();

  // Override client to store the call info and check response
  final Client client = new Client(LongWritable.class, conf) {
    @Override
    Call createCall(RpcKind rpcKind, Writable rpcRequest) {
      final Call call = super.createCall(rpcKind, rpcRequest);
      info.id = call.id;
      info.retry = call.retry;
      return call;
    }
    
    @Override
    void checkResponse(RpcResponseHeaderProto header) throws IOException {
      super.checkResponse(header);
      Assert.assertEquals(info.id, header.getCallId());
      Assert.assertEquals(info.retry, header.getRetryCount());
    }
  };

  // Attach a listener that tracks every call received by the server.
  final TestServer server = new TestServer(1, false);
  server.callListener = new Runnable() {
    @Override
    public void run() {
      Assert.assertEquals(info.id, Server.getCallId());
      Assert.assertEquals(info.retry, Server.getCallRetryCount());
    }
  };

  try {
    InetSocketAddress addr = NetUtils.getConnectAddress(server);
    server.start();
    final SerialCaller caller = new SerialCaller(client, addr, 10);
    caller.run();
    assertFalse(caller.failed);
  } finally {
    client.stop();
    server.stop();
  }
}
 
Developer: nucypher | Project: hadoop-oss | Lines: 49 | Source: TestIPC.java


Example 15: receiveRpcResponse

import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto; // import the required package/class
private void receiveRpcResponse() {
  if (shouldCloseConnection.get()) {
    return;
  }
  touch();
  
  try {
    int totalLen = in.readInt();
    RpcResponseHeaderProto header = 
        RpcResponseHeaderProto.parseDelimitedFrom(in);
    checkResponse(header);

    int headerLen = header.getSerializedSize();
    headerLen += CodedOutputStream.computeRawVarint32Size(headerLen);

    int callId = header.getCallId();
    if (LOG.isDebugEnabled())
      LOG.debug(getName() + " got value #" + callId);

    Call call = calls.get(callId);
    RpcStatusProto status = header.getStatus();
    if (status == RpcStatusProto.SUCCESS) {
      Writable value = ReflectionUtils.newInstance(valueClass, conf);
      value.readFields(in);                 // read value
      calls.remove(callId);
      call.setRpcResponse(value);
      
      // verify that length was correct
      // only for ProtobufEngine where len can be verified easily
      if (call.getRpcResponse() instanceof ProtobufRpcEngine.RpcWrapper) {
        ProtobufRpcEngine.RpcWrapper resWrapper = 
            (ProtobufRpcEngine.RpcWrapper) call.getRpcResponse();
        if (totalLen != headerLen + resWrapper.getLength()) { 
          throw new RpcClientException(
              "RPC response length mismatch on rpc success");
        }
      }
    } else { // Rpc Request failed
      // Verify that length was correct
      if (totalLen != headerLen) {
        throw new RpcClientException(
            "RPC response length mismatch on rpc error");
      }
      
      final String exceptionClassName = header.hasExceptionClassName() ?
            header.getExceptionClassName() : 
              "ServerDidNotSetExceptionClassName";
      final String errorMsg = header.hasErrorMsg() ? 
            header.getErrorMsg() : "ServerDidNotSetErrorMsg" ;
      final RpcErrorCodeProto erCode = 
                (header.hasErrorDetail() ? header.getErrorDetail() : null);
      if (erCode == null) {
         LOG.warn("Detailed error code not set by server on rpc error");
      }
      RemoteException re = 
          ( (erCode == null) ? 
              new RemoteException(exceptionClassName, errorMsg) :
          new RemoteException(exceptionClassName, errorMsg, erCode));
      if (status == RpcStatusProto.ERROR) {
        calls.remove(callId);
        call.setException(re);
      } else if (status == RpcStatusProto.FATAL) {
        // Close the connection
        markClosed(re);
      }
    }
  } catch (IOException e) {
    markClosed(e);
  }
}
 
Developer: naver | Project: hadoop | Lines: 71 | Source: Client.java
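Example 15 cross-checks the declared total length against the header's on-wire size (serialized size plus its varint prefix) and, on success, the payload length as well. The sketch below isolates that arithmetic; computeRawVarint32Size is reimplemented locally so the snippet has no protobuf dependency.

```java
public class ResponseLengthCheck {
    /** Local reimplementation of CodedOutputStream.computeRawVarint32Size. */
    static int varint32Size(int n) {
        int size = 1;
        while ((n & ~0x7F) != 0) {
            size++;
            n >>>= 7;
        }
        return size;
    }

    /** On success the body is header + value; on failure it is the header alone. */
    static boolean lengthMatches(int totalLen, int headerSerializedSize,
                                 int valueLen, boolean success) {
        int headerLen = headerSerializedSize + varint32Size(headerSerializedSize);
        return success ? totalLen == headerLen + valueLen : totalLen == headerLen;
    }

    public static void main(String[] args) {
        // a 20-byte header carries a 1-byte varint prefix -> 21 bytes on the wire
        System.out.println(lengthMatches(21 + 100, 20, 100, true));  // consistent
        System.out.println(lengthMatches(21, 20, 0, false));         // error case
        System.out.println(lengthMatches(25, 20, 100, true));        // mismatch
    }
}
```

As in the original, a mismatch on success or on error would be surfaced as an RpcClientException rather than silently ignored.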



Note: the org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto class examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright of the source code remains with the original authors; consult each project's license before redistributing or reusing it. Please do not repost without permission.

