Java FSHDFSUtils Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.util.FSHDFSUtils. If you are wondering what FSHDFSUtils does, how to use it, or where to find working examples, the curated class examples below may help.



The FSHDFSUtils class belongs to the org.apache.hadoop.hbase.util package. Nine code examples of the class are shown below, sorted by popularity by default.
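All nine examples revolve around the static helper FSHDFSUtils.isSameHdfs(conf, srcFs, desFs), which HBase's bulk-load code uses to decide whether a source HFile can simply be renamed into place (same HDFS instance) or must be copied across file systems. The following minimal sketch shows that decision in isolation; the class name and the paths are hypothetical and are used for illustration only.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.util.FSHDFSUtils;

public class IsSameHdfsSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Hypothetical source and destination paths, possibly on different clusters.
    Path src = new Path("hdfs://cluster-a/user/demo/f1.hfile");
    Path dst = new Path("hdfs://cluster-b/hbase/staging/f1.hfile");

    FileSystem srcFs = src.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);

    // isSameHdfs compares the underlying HDFS instances rather than relying on
    // FileSystem.equals(), which also compares the UGI and therefore fails
    // under SecureBulkLoad (see the comments in the examples below).
    if (FSHDFSUtils.isSameHdfs(conf, srcFs, dstFs)) {
      // Same HDFS: a cheap rename is enough.
      if (!dstFs.rename(src, dst)) {
        throw new IOException("Failed to move " + src + " to " + dst);
      }
    } else {
      // Different file systems: the file has to be copied over.
      FileUtil.copy(srcFs, src, dstFs, dst, false, conf);
    }
  }
}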

Example 1: failedBulkLoad

import org.apache.hadoop.hbase.util.FSHDFSUtils; // import the required package/class
@Override
public void failedBulkLoad(final byte[] family, final String srcPath) throws IOException {
  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
    // files are copied so no need to move them back
    return;
  }
  Path p = new Path(srcPath);
  Path stageP = new Path(stagingDir,
      new Path(Bytes.toString(family), p.getName()));
  LOG.debug("Moving " + stageP + " back to " + p);
  if (!fs.rename(stageP, p)) {
    throw new IOException("Failed to move HFile: " + stageP + " to " + p);
  }

  // restore original permission
  if (origPermissions.containsKey(srcPath)) {
    fs.setPermission(p, origPermissions.get(srcPath));
  } else {
    LOG.warn("Can't find previous permission for path=" + srcPath);
  }
}
 
Developer: fengchen8086 | Project: ditb | Lines: 21 | Source: SecureBulkLoadEndpoint.java


Example 2: bulkLoadStoreFile

import org.apache.hadoop.hbase.util.FSHDFSUtils; // import the required package/class
/**
 * Bulk load: Add a specified store file to the specified family. If the source file is on the
 * same file-system as the destination it is moved from the source location to the destination
 * location; otherwise it is copied over.
 *
 * @param familyName Family that will gain the file
 * @param srcPath    {@link Path} to the file to import
 * @param seqNum     Bulk Load sequence number
 * @return The destination {@link Path} of the bulk loaded file
 * @throws IOException
 */
Path bulkLoadStoreFile(final String familyName, Path srcPath, long seqNum) throws IOException {
  // Copy the file if it's on another filesystem
  FileSystem srcFs = srcPath.getFileSystem(conf);
  FileSystem desFs = fs instanceof HFileSystem ? ((HFileSystem) fs).getBackingFs() : fs;

  // We can't compare FileSystem instances as equals() includes UGI instance
  // as part of the comparison and won't work when doing SecureBulkLoad
  // TODO deal with viewFS
  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, desFs)) {
    LOG.info("Bulk-load file " + srcPath + " is on different filesystem than "
        + "the destination store. Copying file over to destination filesystem.");
    Path tmpPath = createTempName();
    FileUtil.copy(srcFs, srcPath, fs, tmpPath, false, conf);
    LOG.info("Copied " + srcPath + " to temporary path on destination filesystem: " + tmpPath);
    srcPath = tmpPath;
  }

  return commitStoreFile(familyName, srcPath, seqNum, true);
}
 
Developer: fengchen8086 | Project: ditb | Lines: 31 | Source: HRegionFileSystem.java


Example 3: bulkLoadStoreFile

import org.apache.hadoop.hbase.util.FSHDFSUtils; // import the required package/class
/**
 * Bulk load: Add a specified store file to the specified family.
 * If the source file is on the same file-system as the destination it is moved from the
 * source location to the destination location; otherwise it is copied over.
 *
 * @param familyName Family that will gain the file
 * @param srcPath {@link Path} to the file to import
 * @param seqNum Bulk Load sequence number
 * @return The destination {@link Path} of the bulk loaded file
 * @throws IOException
 */
Path bulkLoadStoreFile(final String familyName, Path srcPath, long seqNum)
    throws IOException {
  // Copy the file if it's on another filesystem
  FileSystem srcFs = srcPath.getFileSystem(conf);
  FileSystem desFs = fs instanceof HFileSystem ? ((HFileSystem)fs).getBackingFs() : fs;

  // We can't compare FileSystem instances as equals() includes UGI instance
  // as part of the comparison and won't work when doing SecureBulkLoad
  // TODO deal with viewFS
  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, desFs)) {
    LOG.info("Bulk-load file " + srcPath + " is on different filesystem than " +
        "the destination store. Copying file over to destination filesystem.");
    Path tmpPath = createTempName();
    FileUtil.copy(srcFs, srcPath, fs, tmpPath, false, conf);
    LOG.info("Copied " + srcPath + " to temporary path on destination filesystem: " + tmpPath);
    srcPath = tmpPath;
  }

  return commitStoreFile(familyName, srcPath, seqNum, true);
}
 
Developer: grokcoder | Project: pbase | Lines: 32 | Source: HRegionFileSystem.java


Example 4: prepareBulkLoad

import org.apache.hadoop.hbase.util.FSHDFSUtils; // import the required package/class
@Override
public String prepareBulkLoad(final byte[] family, final String srcPath) throws IOException {
  Path p = new Path(srcPath);
  Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));
  if (srcFs == null) {
    srcFs = FileSystem.get(p.toUri(), conf);
  }

  if(!isFile(p)) {
    throw new IOException("Path does not reference a file: " + p);
  }

  // Check to see if the source and target filesystems are the same
  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
    LOG.debug("Bulk-load file " + srcPath + " is on different filesystem than " +
        "the destination filesystem. Copying file over to destination staging dir.");
    FileUtil.copy(srcFs, p, fs, stageP, false, conf);
  }
  else {
    LOG.debug("Moving " + p + " to " + stageP);
    if(!fs.rename(p, stageP)) {
      throw new IOException("Failed to move HFile: " + p + " to " + stageP);
    }
  }
  return stageP.toString();
}
 
Developer: tenggyut | Project: HIndex | Lines: 27 | Source: SecureBulkLoadEndpoint.java


Example 5: bulkLoadStoreFile

import org.apache.hadoop.hbase.util.FSHDFSUtils; // import the required package/class
/**
 * Bulk load: Add a specified store file to the specified family.
 * If the source file is on the same file-system as the destination it is moved from the
 * source location to the destination location; otherwise it is copied over.
 *
 * @param familyName Family that will gain the file
 * @param srcPath {@link Path} to the file to import
 * @param seqNum Bulk Load sequence number
 * @return The destination {@link Path} of the bulk loaded file
 * @throws IOException
 */
Pair<Path, Path> bulkLoadStoreFile(final String familyName, Path srcPath, long seqNum)
    throws IOException {
  // Copy the file if it's on another filesystem
  FileSystem srcFs = srcPath.getFileSystem(conf);
  srcPath = srcFs.resolvePath(srcPath);
  FileSystem realSrcFs = srcPath.getFileSystem(conf);
  FileSystem desFs = fs instanceof HFileSystem ? ((HFileSystem)fs).getBackingFs() : fs;

  // We can't compare FileSystem instances as equals() includes UGI instance
  // as part of the comparison and won't work when doing SecureBulkLoad
  // TODO deal with viewFS
  if (!FSHDFSUtils.isSameHdfs(conf, realSrcFs, desFs)) {
    LOG.info("Bulk-load file " + srcPath + " is on different filesystem than " +
        "the destination store. Copying file over to destination filesystem.");
    Path tmpPath = createTempName();
    FileUtil.copy(realSrcFs, srcPath, fs, tmpPath, false, conf);
    LOG.info("Copied " + srcPath + " to temporary path on destination filesystem: " + tmpPath);
    srcPath = tmpPath;
  }

  return new Pair<>(srcPath, preCommitStoreFile(familyName, srcPath, seqNum, true));
}
 
Developer: apache | Project: hbase | Lines: 34 | Source: HRegionFileSystem.java


Example 6: prepareBulkLoad

import org.apache.hadoop.hbase.util.FSHDFSUtils; // import the required package/class
@Override
public String prepareBulkLoad(final byte[] family, final String srcPath) throws IOException {
  Path p = new Path(srcPath);
  Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));
  if (srcFs == null) {
    srcFs = FileSystem.get(p.toUri(), conf);
  }

  if(!isFile(p)) {
    throw new IOException("Path does not reference a file: " + p);
  }

  // Check to see if the source and target filesystems are the same
  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
    LOG.debug("Bulk-load file " + srcPath + " is on different filesystem than " +
        "the destination filesystem. Copying file over to destination staging dir.");
    FileUtil.copy(srcFs, p, fs, stageP, false, conf);
  } else {
    LOG.debug("Moving " + p + " to " + stageP);
    FileStatus origFileStatus = fs.getFileStatus(p);
    origPermissions.put(srcPath, origFileStatus.getPermission());
    if(!fs.rename(p, stageP)) {
      throw new IOException("Failed to move HFile: " + p + " to " + stageP);
    }
  }
  fs.setPermission(stageP, PERM_ALL_ACCESS);
  return stageP.toString();
}
 
Developer: fengchen8086 | Project: ditb | Lines: 29 | Source: SecureBulkLoadEndpoint.java


Example 7: prepareBulkLoad

import org.apache.hadoop.hbase.util.FSHDFSUtils; // import the required package/class
@Override
public String prepareBulkLoad(final byte[] family, final String srcPath, boolean copyFile)
    throws IOException {
  Path p = new Path(srcPath);
  Path stageP = new Path(stagingDir, new Path(Bytes.toString(family), p.getName()));

  // In case of Replication for bulk load files, hfiles are already copied in staging directory
  if (p.equals(stageP)) {
    LOG.debug(p.getName()
        + " is already available in staging directory. Skipping copy or rename.");
    return stageP.toString();
  }

  if (srcFs == null) {
    srcFs = FileSystem.get(p.toUri(), conf);
  }

  if(!isFile(p)) {
    throw new IOException("Path does not reference a file: " + p);
  }

  // Check to see if the source and target filesystems are the same
  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
    LOG.debug("Bulk-load file " + srcPath + " is on different filesystem than " +
        "the destination filesystem. Copying file over to destination staging dir.");
    FileUtil.copy(srcFs, p, fs, stageP, false, conf);
  } else if (copyFile) {
    LOG.debug("Bulk-load file " + srcPath + " is copied to destination staging dir.");
    FileUtil.copy(srcFs, p, fs, stageP, false, conf);
  } else {
    LOG.debug("Moving " + p + " to " + stageP);
    FileStatus origFileStatus = fs.getFileStatus(p);
    origPermissions.put(srcPath, origFileStatus.getPermission());
    if(!fs.rename(p, stageP)) {
      throw new IOException("Failed to move HFile: " + p + " to " + stageP);
    }
  }
  fs.setPermission(stageP, PERM_ALL_ACCESS);
  return stageP.toString();
}
 
Developer: apache | Project: hbase | Lines: 41 | Source: SecureBulkLoadManager.java


Example 8: failedBulkLoad

import org.apache.hadoop.hbase.util.FSHDFSUtils; // import the required package/class
@Override
public void failedBulkLoad(final byte[] family, final String srcPath) throws IOException {
  if (!FSHDFSUtils.isSameHdfs(conf, srcFs, fs)) {
    // files are copied so no need to move them back
    return;
  }
  Path p = new Path(srcPath);
  Path stageP = new Path(stagingDir,
      new Path(Bytes.toString(family), p.getName()));

  // In case of Replication for bulk load files, hfiles are not renamed by end point during
  // prepare stage, so no need of rename here again
  if (p.equals(stageP)) {
    LOG.debug(p.getName() + " is already available in source directory. Skipping rename.");
    return;
  }

  LOG.debug("Moving " + stageP + " back to " + p);
  if (!fs.rename(stageP, p)) {
    throw new IOException("Failed to move HFile: " + stageP + " to " + p);
  }

  // restore original permission
  if (origPermissions.containsKey(srcPath)) {
    fs.setPermission(p, origPermissions.get(srcPath));
  } else {
    LOG.warn("Can't find previous permission for path=" + srcPath);
  }
}
 
Developer: apache | Project: hbase | Lines: 29 | Source: SecureBulkLoadManager.java


Example 9: buildClientServiceCallable

import org.apache.hadoop.hbase.util.FSHDFSUtils; // import the required package/class
@VisibleForTesting
protected ClientServiceCallable<byte[]> buildClientServiceCallable(Connection conn,
    TableName tableName, byte[] first, Collection<LoadQueueItem> lqis, boolean copyFile) {
  List<Pair<byte[], String>> famPaths =
      lqis.stream().map(lqi -> Pair.newPair(lqi.getFamily(), lqi.getFilePath().toString()))
          .collect(Collectors.toList());
  return new ClientServiceCallable<byte[]>(conn, tableName, first,
      rpcControllerFactory.newController(), HConstants.PRIORITY_UNSET) {
    @Override
    protected byte[] rpcCall() throws Exception {
      SecureBulkLoadClient secureClient = null;
      boolean success = false;
      try {
        if (LOG.isDebugEnabled()) {
          LOG.debug("Going to connect to server " + getLocation() + " for row " +
              Bytes.toStringBinary(getRow()) + " with hfile group " +
              LoadIncrementalHFiles.this.toString(famPaths));
        }
        byte[] regionName = getLocation().getRegionInfo().getRegionName();
        try (Table table = conn.getTable(getTableName())) {
          secureClient = new SecureBulkLoadClient(getConf(), table);
          success = secureClient.secureBulkLoadHFiles(getStub(), famPaths, regionName,
            assignSeqIds, fsDelegationToken.getUserToken(), bulkToken, copyFile);
        }
        return success ? regionName : null;
      } finally {
        // Best effort copying of files that might not have been imported
        // from the staging directory back to original location
        // in user directory
        if (secureClient != null && !success) {
          FileSystem targetFs = FileSystem.get(getConf());
          FileSystem sourceFs = lqis.iterator().next().getFilePath().getFileSystem(getConf());
          // Check to see if the source and target filesystems are the same
          // If they are the same filesystem, we will try move the files back
          // because previously we moved them to the staging directory.
          if (FSHDFSUtils.isSameHdfs(getConf(), sourceFs, targetFs)) {
            for (Pair<byte[], String> el : famPaths) {
              Path hfileStagingPath = null;
              Path hfileOrigPath = new Path(el.getSecond());
              try {
                hfileStagingPath = new Path(new Path(bulkToken, Bytes.toString(el.getFirst())),
                    hfileOrigPath.getName());
                if (targetFs.rename(hfileStagingPath, hfileOrigPath)) {
                  LOG.debug("Moved back file " + hfileOrigPath + " from " + hfileStagingPath);
                } else if (targetFs.exists(hfileStagingPath)) {
                  LOG.debug(
                    "Unable to move back file " + hfileOrigPath + " from " + hfileStagingPath);
                }
              } catch (Exception ex) {
                LOG.debug(
                  "Unable to move back file " + hfileOrigPath + " from " + hfileStagingPath, ex);
              }
            }
          }
        }
      }
    }
  };
}
 
Developer: apache | Project: hbase | Lines: 60 | Source: LoadIncrementalHFiles.java



Note: The org.apache.hadoop.hbase.util.FSHDFSUtils class examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar source-code and documentation platforms. The snippets were selected from community-contributed projects, and copyright in the source code remains with the original authors. Consult the corresponding project's License before using or redistributing the code; do not reproduce without permission.

