
Java HttpFSFileSystem Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.fs.http.client.HttpFSFileSystem. If you are wondering what the HttpFSFileSystem class does, how to use it, or where to find examples of it, the curated class code examples below may help.



The HttpFSFileSystem class belongs to the org.apache.hadoop.fs.http.client package. The sections below present 18 code examples of the class, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Java code examples.

Example 1: aclStatusToJSON

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
/** Converts an <code>AclStatus</code> object into a JSON object.
 *
 * @param aclStatus AclStatus object
 *
 * @return The JSON representation of the ACLs for the file
 */
@SuppressWarnings({"unchecked"})
private static Map<String,Object> aclStatusToJSON(AclStatus aclStatus) {
  Map<String,Object> json = new LinkedHashMap<String,Object>();
  Map<String,Object> inner = new LinkedHashMap<String,Object>();
  JSONArray entriesArray = new JSONArray();
  inner.put(HttpFSFileSystem.OWNER_JSON, aclStatus.getOwner());
  inner.put(HttpFSFileSystem.GROUP_JSON, aclStatus.getGroup());
  inner.put(HttpFSFileSystem.ACL_STICKY_BIT_JSON, aclStatus.isStickyBit());
  for ( AclEntry e : aclStatus.getEntries() ) {
    entriesArray.add(e.toString());
  }
  inner.put(HttpFSFileSystem.ACL_ENTRIES_JSON, entriesArray);
  json.put(HttpFSFileSystem.ACL_STATUS_JSON, inner);
  return json;
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: FSOperations.java
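To see the shape this method produces, here is a minimal, Hadoop-free sketch that builds the same nested map with literal key strings. The key names ("owner", "group", "stickyBit", "entries", "AclStatus") are assumptions based on the WebHDFS JSON schema, since the page only shows the constant names, not their values:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AclJsonShape {
  // Mirrors aclStatusToJSON above using plain collections; the literal key
  // strings are assumptions based on the WebHDFS JSON schema.
  static Map<String, Object> aclStatusToJson(String owner, String group,
      boolean stickyBit, List<String> entries) {
    Map<String, Object> inner = new LinkedHashMap<>();
    inner.put("owner", owner);
    inner.put("group", group);
    inner.put("stickyBit", stickyBit);
    inner.put("entries", entries);  // each entry is an AclEntry.toString()
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("AclStatus", inner);
    return json;
  }

  public static void main(String[] args) {
    System.out.println(aclStatusToJson("hdfs", "supergroup", false,
        Arrays.asList("user:alice:rwx", "group::r-x")));
  }
}
```

The double wrapping (an inner map under an "AclStatus" key) is what makes the serialized response a single top-level JSON object, matching the WebHDFS GETACLSTATUS response layout.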


Example 2: toJsonInner

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
/**
 * Return the inner part of the JSON for the status, used by both the
 * GETFILESTATUS and LISTSTATUS calls.
 * @param emptyPathSuffix Whether or not to include PATH_SUFFIX_JSON
 * @return The JSONish Map
 */
public Map<String,Object> toJsonInner(boolean emptyPathSuffix) {
  Map<String,Object> json = new LinkedHashMap<String,Object>();
  json.put(HttpFSFileSystem.PATH_SUFFIX_JSON,
          (emptyPathSuffix) ? "" : fileStatus.getPath().getName());
  json.put(HttpFSFileSystem.TYPE_JSON,
          HttpFSFileSystem.FILE_TYPE.getType(fileStatus).toString());
  json.put(HttpFSFileSystem.LENGTH_JSON, fileStatus.getLen());
  json.put(HttpFSFileSystem.OWNER_JSON, fileStatus.getOwner());
  json.put(HttpFSFileSystem.GROUP_JSON, fileStatus.getGroup());
  json.put(HttpFSFileSystem.PERMISSION_JSON,
          HttpFSFileSystem.permissionToString(fileStatus.getPermission()));
  json.put(HttpFSFileSystem.ACCESS_TIME_JSON, fileStatus.getAccessTime());
  json.put(HttpFSFileSystem.MODIFICATION_TIME_JSON,
          fileStatus.getModificationTime());
  json.put(HttpFSFileSystem.BLOCK_SIZE_JSON, fileStatus.getBlockSize());
  json.put(HttpFSFileSystem.REPLICATION_JSON, fileStatus.getReplication());
  if ( (aclStatus != null) && !(aclStatus.getEntries().isEmpty()) ) {
    json.put(HttpFSFileSystem.ACL_BIT_JSON,true);
  }
  return json;
}
 
Developer: naver, Project: hadoop, Lines: 28, Source: FSOperations.java


Example 3: xAttrsToJSON

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
/**
 * Converts xAttrs to a JSON object.
 *
 * @param xAttrs file xAttrs.
 * @param encoding format of xattr values.
 *
 * @return The JSON representation of the xAttrs.
 * @throws IOException 
 */
@SuppressWarnings({"unchecked", "rawtypes"})
private static Map xAttrsToJSON(Map<String, byte[]> xAttrs, 
    XAttrCodec encoding) throws IOException {
  Map jsonMap = new LinkedHashMap();
  JSONArray jsonArray = new JSONArray();
  if (xAttrs != null) {
    for (Entry<String, byte[]> e : xAttrs.entrySet()) {
      Map json = new LinkedHashMap();
      json.put(HttpFSFileSystem.XATTR_NAME_JSON, e.getKey());
      if (e.getValue() != null) {
        json.put(HttpFSFileSystem.XATTR_VALUE_JSON, 
            XAttrCodec.encodeValue(e.getValue(), encoding));
      }
      jsonArray.add(json);
    }
  }
  jsonMap.put(HttpFSFileSystem.XATTRS_JSON, jsonArray);
  return jsonMap;
}
 
Developer: naver, Project: hadoop, Lines: 29, Source: FSOperations.java


Example 4: test

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
private void test(String method, String operation, String contentType,
                  boolean upload, boolean error) throws Exception {
  HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
  HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
  Mockito.reset(request);
  Mockito.when(request.getMethod()).thenReturn(method);
  Mockito.when(request.getParameter(HttpFSFileSystem.OP_PARAM)).thenReturn(operation);
  Mockito.when(request.getParameter(HttpFSParametersProvider.DataParam.NAME)).
    thenReturn(Boolean.toString(upload));
  Mockito.when(request.getContentType()).thenReturn(contentType);

  FilterChain chain = Mockito.mock(FilterChain.class);

  Filter filter = new CheckUploadContentTypeFilter();

  filter.doFilter(request, response, chain);

  if (error) {
    Mockito.verify(response).sendError(Mockito.eq(HttpServletResponse.SC_BAD_REQUEST),
                                       Mockito.contains("Data upload"));
  }
  else {
    Mockito.verify(chain).doFilter(request, response);
  }
}
 
Developer: naver, Project: hadoop, Lines: 26, Source: TestCheckUploadContentTypeFilter.java


Example 5: testManagementOperationErrors

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
private void testManagementOperationErrors(AuthenticationHandler handler)
  throws Exception {
  HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
  HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
  Mockito.when(request.getParameter(HttpFSFileSystem.OP_PARAM)).
    thenReturn(DelegationTokenOperation.GETDELEGATIONTOKEN.toString());
  Mockito.when(request.getMethod()).thenReturn("FOO");
  Assert.assertFalse(handler.managementOperation(null, request, response));
  Mockito.verify(response).sendError(
    Mockito.eq(HttpServletResponse.SC_BAD_REQUEST),
    Mockito.startsWith("Wrong HTTP method"));

  Mockito.reset(response);
  Mockito.when(request.getMethod()).
    thenReturn(DelegationTokenOperation.GETDELEGATIONTOKEN.getHttpMethod());
  Assert.assertFalse(handler.managementOperation(null, request, response));
  Mockito.verify(response).sendError(
    Mockito.eq(HttpServletResponse.SC_UNAUTHORIZED),
    Mockito.contains("requires SPNEGO"));
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 21, Source: TestHttpFSKerberosAuthenticationHandler.java


Example 6: fileStatusToJSONRaw

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
@SuppressWarnings({"unchecked", "deprecation"})
private static Map fileStatusToJSONRaw(FileStatus status,
    boolean emptyPathSuffix) {
  Map json = new LinkedHashMap();
  json.put(HttpFSFileSystem.PATH_SUFFIX_JSON,
      (emptyPathSuffix) ? "" : status.getPath().getName());
  json.put(HttpFSFileSystem.TYPE_JSON,
      HttpFSFileSystem.FILE_TYPE.getType(status).toString());
  json.put(HttpFSFileSystem.LENGTH_JSON, status.getLen());
  json.put(HttpFSFileSystem.OWNER_JSON, status.getOwner());
  json.put(HttpFSFileSystem.GROUP_JSON, status.getGroup());
  json.put(HttpFSFileSystem.PERMISSION_JSON,
      HttpFSFileSystem.permissionToString(status.getPermission()));
  json.put(HttpFSFileSystem.ACCESS_TIME_JSON, status.getAccessTime());
  json.put(HttpFSFileSystem.MODIFICATION_TIME_JSON,
      status.getModificationTime());
  json.put(HttpFSFileSystem.BLOCK_SIZE_JSON, status.getBlockSize());
  json.put(HttpFSFileSystem.REPLICATION_JSON, status.getReplication());
  return json;
}
 
Developer: hopshadoop, Project: hops, Lines: 21, Source: FSOperations.java


Example 7: contentSummaryToJSON

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
/**
 * Converts a <code>ContentSummary</code> object into a JSON
 * object.
 *
 * @param contentSummary
 *     the content summary
 * @return The JSON representation of the content summary.
 */
@SuppressWarnings({"unchecked"})
private static Map contentSummaryToJSON(ContentSummary contentSummary) {
  Map json = new LinkedHashMap();
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_DIRECTORY_COUNT_JSON,
      contentSummary.getDirectoryCount());
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_FILE_COUNT_JSON,
      contentSummary.getFileCount());
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_LENGTH_JSON,
      contentSummary.getLength());
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_QUOTA_JSON,
      contentSummary.getQuota());
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_SPACE_CONSUMED_JSON,
      contentSummary.getSpaceConsumed());
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_SPACE_QUOTA_JSON,
      contentSummary.getSpaceQuota());
  Map response = new LinkedHashMap();
  response.put(HttpFSFileSystem.CONTENT_SUMMARY_JSON, json);
  return response;
}
 
Developer: hopshadoop, Project: hops, Lines: 28, Source: FSOperations.java
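A Hadoop-free sketch of the structure contentSummaryToJSON builds. The literal key strings ("directoryCount", "fileCount", "length", "quota", "spaceConsumed", "spaceQuota", wrapped under "ContentSummary") are assumptions based on the WebHDFS JSON schema, since only the constant names appear above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ContentSummaryJsonShape {
  // Same two-level layout as contentSummaryToJSON: a flat map of counters
  // wrapped under a single "ContentSummary" key. Key strings are assumed
  // from the WebHDFS JSON schema, not taken from this page.
  static Map<String, Object> contentSummaryToJson(long dirs, long files,
      long length, long quota, long spaceConsumed, long spaceQuota) {
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("directoryCount", dirs);
    json.put("fileCount", files);
    json.put("length", length);
    json.put("quota", quota);            // -1 conventionally means "no quota"
    json.put("spaceConsumed", spaceConsumed);
    json.put("spaceQuota", spaceQuota);
    Map<String, Object> response = new LinkedHashMap<>();
    response.put("ContentSummary", json);
    return response;
  }

  public static void main(String[] args) {
    System.out.println(contentSummaryToJson(2, 1, 24930, -1, 24930, -1));
  }
}
```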


Example 8: test

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
private void test(String method, String operation, String contentType,
    boolean upload, boolean error) throws Exception {
  HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
  HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
  Mockito.reset(request);
  Mockito.when(request.getMethod()).thenReturn(method);
  Mockito.when(request.getParameter(HttpFSFileSystem.OP_PARAM))
      .thenReturn(operation);
  Mockito.when(request.getParameter(HttpFSParametersProvider.DataParam.NAME)).
      thenReturn(Boolean.toString(upload));
  Mockito.when(request.getContentType()).thenReturn(contentType);

  FilterChain chain = Mockito.mock(FilterChain.class);

  Filter filter = new CheckUploadContentTypeFilter();

  filter.doFilter(request, response, chain);

  if (error) {
    Mockito.verify(response)
        .sendError(Mockito.eq(HttpServletResponse.SC_BAD_REQUEST),
            Mockito.contains("Data upload"));
  } else {
    Mockito.verify(chain).doFilter(request, response);
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 27, Source: TestCheckUploadContentTypeFilter.java


Example 9: testManagementOperationErrors

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
private void testManagementOperationErrors(AuthenticationHandler handler)
    throws Exception {
  HttpServletRequest request = Mockito.mock(HttpServletRequest.class);
  HttpServletResponse response = Mockito.mock(HttpServletResponse.class);
  Mockito.when(request.getParameter(HttpFSFileSystem.OP_PARAM)).
      thenReturn(DelegationTokenOperation.GETDELEGATIONTOKEN.toString());
  Mockito.when(request.getMethod()).thenReturn("FOO");
  Assert.assertFalse(handler.managementOperation(null, request, response));
  Mockito.verify(response)
      .sendError(Mockito.eq(HttpServletResponse.SC_BAD_REQUEST),
          Mockito.startsWith("Wrong HTTP method"));

  Mockito.reset(response);
  Mockito.when(request.getMethod()).
      thenReturn(DelegationTokenOperation.GETDELEGATIONTOKEN.getHttpMethod());
  Assert.assertFalse(handler.managementOperation(null, request, response));
  Mockito.verify(response)
      .sendError(Mockito.eq(HttpServletResponse.SC_UNAUTHORIZED),
          Mockito.contains("requires SPNEGO"));
}
 
Developer: hopshadoop, Project: hops, Lines: 21, Source: TestHttpFSKerberosAuthenticationHandler.java


Example 10: enforceRootPath

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
private void enforceRootPath(HttpFSFileSystem.Operation op, String path) {
  if (!path.equals("/")) {
    throw new UnsupportedOperationException(
      MessageFormat.format("Operation [{0}], invalid path [{1}], must be '/'",
                           op, path));
  }
}
 
Developer: naver, Project: hadoop, Lines: 8, Source: HttpFSServer.java


Example 11: delete

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
/**
 * Binding to handle DELETE requests.
 *
 * @param path the path for operation.
 * @param op the HttpFS operation of the request.
 * @param params the HttpFS parameters of the request.
 *
 * @return the request response.
 *
 * @throws IOException thrown if an IO error occurred. Thrown exceptions are
 * handled by {@link HttpFSExceptionProvider}.
 * @throws FileSystemAccessException thrown if a FileSystemAccess related
 * error occurred. Thrown exceptions are handled by
 * {@link HttpFSExceptionProvider}.
 */
@DELETE
@Path("{path:.*}")
@Produces(MediaType.APPLICATION_JSON)
public Response delete(@PathParam("path") String path,
                       @QueryParam(OperationParam.NAME) OperationParam op,
                       @Context Parameters params,
                       @Context HttpServletRequest request)
  throws IOException, FileSystemAccessException {
  UserGroupInformation user = HttpUserGroupInformation.get();
  Response response;
  path = makeAbsolute(path);
  MDC.put(HttpFSFileSystem.OP_PARAM, op.value().name());
  MDC.put("hostname", request.getRemoteAddr());
  switch (op.value()) {
    case DELETE: {
      Boolean recursive =
        params.get(RecursiveParam.NAME,  RecursiveParam.class);
      AUDIT_LOG.info("[{}] recursive [{}]", path, recursive);
      FSOperations.FSDelete command =
        new FSOperations.FSDelete(path, recursive);
      JSONObject json = fsExecute(user, command);
      response = Response.ok(json).type(MediaType.APPLICATION_JSON).build();
      break;
    }
    default: {
      throw new IOException(
        MessageFormat.format("Invalid HTTP DELETE operation [{0}]",
                             op.value()));
    }
  }
  return response;
}
 
Developer: naver, Project: hadoop, Lines: 48, Source: HttpFSServer.java


Example 12: doFilter

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
/**
 * Enforces the content-type to be application/octet-stream for
 * POST and PUT requests.
 *
 * @param request servlet request.
 * @param response servlet response.
 * @param chain filter chain.
 *
 * @throws IOException thrown if an IO error occurs.
 * @throws ServletException thrown if a servlet error occurs.
 */
@Override
public void doFilter(ServletRequest request, ServletResponse response,
                     FilterChain chain)
  throws IOException, ServletException {
  boolean contentTypeOK = true;
  HttpServletRequest httpReq = (HttpServletRequest) request;
  HttpServletResponse httpRes = (HttpServletResponse) response;
  String method = httpReq.getMethod();
  if (method.equals("PUT") || method.equals("POST")) {
    String op = httpReq.getParameter(HttpFSFileSystem.OP_PARAM);
    if (op != null && UPLOAD_OPERATIONS.contains(
        StringUtils.toUpperCase(op))) {
      if ("true".equalsIgnoreCase(httpReq.getParameter(HttpFSParametersProvider.DataParam.NAME))) {
        String contentType = httpReq.getContentType();
        contentTypeOK =
          HttpFSFileSystem.UPLOAD_CONTENT_TYPE.equalsIgnoreCase(contentType);
      }
    }
  }
  if (contentTypeOK) {
    chain.doFilter(httpReq, httpRes);
  }
  else {
    httpRes.sendError(HttpServletResponse.SC_BAD_REQUEST,
                      "Data upload requests must have content-type set to '" +
                      HttpFSFileSystem.UPLOAD_CONTENT_TYPE + "'");

  }
}
 
Developer: naver, Project: hadoop, Lines: 41, Source: CheckUploadContentTypeFilter.java
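The filter's decision can be distilled into a pure predicate, which makes the rule easy to see and test in isolation. This is a sketch: the contents of UPLOAD_OPERATIONS and the value of UPLOAD_CONTENT_TYPE are not shown on this page, so CREATE/APPEND and "application/octet-stream" below are assumptions based on typical WebHDFS upload operations:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class UploadContentTypeCheck {
  // Assumed contents of UPLOAD_OPERATIONS (CREATE via PUT, APPEND via POST
  // in WebHDFS); the actual constant is not shown above.
  static final Set<String> UPLOAD_OPERATIONS =
      new HashSet<>(Arrays.asList("CREATE", "APPEND"));

  // Distills the doFilter logic above: only PUT/POST upload operations that
  // actually carry data (data=true) must declare application/octet-stream;
  // every other request passes through unchanged.
  static boolean contentTypeOk(String method, String op, boolean dataParam,
      String contentType) {
    if (("PUT".equals(method) || "POST".equals(method))
        && op != null
        && UPLOAD_OPERATIONS.contains(op.toUpperCase(Locale.ROOT))
        && dataParam) {
      return "application/octet-stream".equalsIgnoreCase(contentType);
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(contentTypeOk("PUT", "CREATE", true, "text/plain"));
    System.out.println(contentTypeOk("PUT", "CREATE", true, "application/octet-stream"));
    System.out.println(contentTypeOk("GET", "OPEN", false, null));
  }
}
```

Note how the default is permissive: the original filter initializes contentTypeOK to true and only tightens it when all upload conditions hold, which the early-return structure above preserves.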


Example 13: toJson

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
/**
 * Return a Map suitable for conversion into JSON.
 * @return A JSONish Map
 */
@SuppressWarnings({"unchecked"})
public Map<String,Object> toJson() {
  Map<String,Object> json = new LinkedHashMap<String,Object>();
  Map<String,Object> inner = new LinkedHashMap<String,Object>();
  JSONArray statuses = new JSONArray();
  for (StatusPair s : statusPairs) {
    statuses.add(s.toJsonInner(false));
  }
  inner.put(HttpFSFileSystem.FILE_STATUS_JSON, statuses);
  json.put(HttpFSFileSystem.FILE_STATUSES_JSON, inner);
  return json;
}
 
Developer: naver, Project: hadoop, Lines: 17, Source: FSOperations.java
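A Hadoop-free sketch of the nesting this toJson builds for LISTSTATUS: each element of the array is what toJsonInner (Example 2) returns for one file. The literal key strings ("FileStatus", "FileStatuses", and the per-file keys in the demo) are assumptions based on the WebHDFS JSON schema, since only the constant names appear above:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ListStatusJsonShape {
  // Same nesting as toJson above: a list of per-file maps under "FileStatus",
  // wrapped under a top-level "FileStatuses" key. Key strings are assumed
  // from the WebHDFS JSON schema, not taken from this page.
  static Map<String, Object> toJson(List<Map<String, Object>> statuses) {
    Map<String, Object> inner = new LinkedHashMap<>();
    inner.put("FileStatus", statuses);
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("FileStatuses", inner);
    return json;
  }

  public static void main(String[] args) {
    // Hypothetical per-file entry, as toJsonInner would produce it.
    Map<String, Object> one = new LinkedHashMap<>();
    one.put("pathSuffix", "a.txt");
    one.put("type", "FILE");
    one.put("length", 24930L);
    System.out.println(toJson(Collections.singletonList(one)));
  }
}
```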


Example 14: fileChecksumToJSON

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
/**
 * Converts a <code>FileChecksum</code> object into a JSON
 * object.
 *
 * @param checksum file checksum.
 *
 * @return The JSON representation of the file checksum.
 */
@SuppressWarnings({"unchecked"})
private static Map fileChecksumToJSON(FileChecksum checksum) {
  Map json = new LinkedHashMap();
  json.put(HttpFSFileSystem.CHECKSUM_ALGORITHM_JSON, checksum.getAlgorithmName());
  json.put(HttpFSFileSystem.CHECKSUM_BYTES_JSON,
           org.apache.hadoop.util.StringUtils.byteToHexString(checksum.getBytes()));
  json.put(HttpFSFileSystem.CHECKSUM_LENGTH_JSON, checksum.getLength());
  Map response = new LinkedHashMap();
  response.put(HttpFSFileSystem.FILE_CHECKSUM_JSON, json);
  return response;
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: FSOperations.java


Example 15: contentSummaryToJSON

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
/**
 * Converts a <code>ContentSummary</code> object into a JSON
 * object.
 *
 * @param contentSummary the content summary
 *
 * @return The JSON representation of the content summary.
 */
@SuppressWarnings({"unchecked"})
private static Map contentSummaryToJSON(ContentSummary contentSummary) {
  Map json = new LinkedHashMap();
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_DIRECTORY_COUNT_JSON, contentSummary.getDirectoryCount());
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_FILE_COUNT_JSON, contentSummary.getFileCount());
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_LENGTH_JSON, contentSummary.getLength());
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_QUOTA_JSON, contentSummary.getQuota());
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_SPACE_CONSUMED_JSON, contentSummary.getSpaceConsumed());
  json.put(HttpFSFileSystem.CONTENT_SUMMARY_SPACE_QUOTA_JSON, contentSummary.getSpaceQuota());
  Map response = new LinkedHashMap();
  response.put(HttpFSFileSystem.CONTENT_SUMMARY_JSON, json);
  return response;
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: FSOperations.java


Example 16: testDelegationTokenWithHttpFSFileSystem

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
@Test
@TestDir
@TestJetty
@TestHdfs
public void testDelegationTokenWithHttpFSFileSystem() throws Exception {
  testDelegationTokenWithinDoAs(HttpFSFileSystem.class, false);
}
 
Developer: naver, Project: hadoop, Lines: 8, Source: TestHttpFSWithKerberos.java


Example 17: testDelegationTokenWithHttpFSFileSystemProxyUser

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
@Test
@TestDir
@TestJetty
@TestHdfs
public void testDelegationTokenWithHttpFSFileSystemProxyUser()
  throws Exception {
  testDelegationTokenWithinDoAs(HttpFSFileSystem.class, true);
}
 
Developer: naver, Project: hadoop, Lines: 9, Source: TestHttpFSWithKerberos.java


Example 18: doFilter

import org.apache.hadoop.fs.http.client.HttpFSFileSystem; // import the required package/class
/**
 * Enforces the content-type to be application/octet-stream for
 * POST and PUT requests.
 *
 * @param request servlet request.
 * @param response servlet response.
 * @param chain filter chain.
 *
 * @throws IOException thrown if an IO error occurs.
 * @throws ServletException thrown if a servlet error occurs.
 */
@Override
public void doFilter(ServletRequest request, ServletResponse response,
                     FilterChain chain)
  throws IOException, ServletException {
  boolean contentTypeOK = true;
  HttpServletRequest httpReq = (HttpServletRequest) request;
  HttpServletResponse httpRes = (HttpServletResponse) response;
  String method = httpReq.getMethod();
  if (method.equals("PUT") || method.equals("POST")) {
    String op = httpReq.getParameter(HttpFSFileSystem.OP_PARAM);
    if (op != null && UPLOAD_OPERATIONS.contains(
        StringUtils.toUpperCase(op))) {
      if ("true".equalsIgnoreCase(httpReq.getParameter(HttpFSParametersProvider.DataParam.NAME))) {
        String contentType = httpReq.getContentType();
        contentTypeOK =
          HttpFSFileSystem.UPLOAD_CONTENT_TYPE.equalsIgnoreCase(contentType);
      }
    }
  }
  if (contentTypeOK) {
    chain.doFilter(httpReq, httpRes);
  }
  else {
    httpRes.sendError(HttpServletResponse.SC_BAD_REQUEST,
                      "Data upload requests must have content-type set to '" +
                      HttpFSFileSystem.UPLOAD_CONTENT_TYPE + "'");

  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 41, Source: CheckUploadContentTypeFilter.java



Note: The org.apache.hadoop.fs.http.client.HttpFSFileSystem class examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The code snippets come from open-source projects contributed by various developers; copyright of the source code belongs to the original authors, and distribution and use should follow the corresponding project's license. Do not reproduce without permission.

