Java ReplicationException Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.replication.ReplicationException. If you are wondering what ReplicationException is for, how to use it, or where to find usage examples, the selected class code examples below may help.



The ReplicationException class belongs to the org.apache.hadoop.hbase.replication package. Twenty code examples of the class are shown below, sorted by popularity by default.
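Before the examples, a minimal sketch of the pattern most of them share may help: ReplicationException is a checked exception thrown by HBase's replication peer and queue APIs, and callers typically either declare it or wrap it in an IOException. The class and method names below (ReplicationExceptionSketch, doReplicationCall) are illustrative placeholders, not taken from any example.

import java.io.IOException;
import org.apache.hadoop.hbase.replication.ReplicationException;

public class ReplicationExceptionSketch {

  // Hypothetical stand-in for any replication API call that declares
  // `throws ReplicationException`, e.g. queueStorage.getListOfReplicators().
  void doReplicationCall() throws ReplicationException {
  }

  // Typical caller-side handling, as seen in most of the examples below:
  // wrap the checked ReplicationException in an IOException.
  public void callAndWrap() throws IOException {
    try {
      doReplicationCall();
    } catch (ReplicationException e) {
      throw new IOException("Replication operation failed", e);
    }
  }
}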

Example 1: init

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
/**
 * Adds a normal source per registered peer cluster and tries to process all
 * old region server wal queues
 */
protected void init() throws IOException, ReplicationException {
  for (String id : this.replicationPeers.getPeerIds()) {
    addSource(id);
  }
  List<String> currentReplicators = this.replicationQueues.getListOfReplicators();
  if (currentReplicators == null || currentReplicators.size() == 0) {
    return;
  }
  List<String> otherRegionServers = replicationTracker.getListOfRegionServers();
  LOG.info("Current list of replicators: " + currentReplicators + " other RSs: "
      + otherRegionServers);

  // Look if there's anything to process after a restart
  for (String rs : currentReplicators) {
    if (!otherRegionServers.contains(rs)) {
      transferQueues(rs);
    }
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 24, Source: ReplicationSourceManager.java


Example 2: setupRegionReplicaReplication

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
/**
 * Create replication peer for replicating to region replicas if needed.
 * @param conf configuration to use
 * @throws IOException
 */
public static void setupRegionReplicaReplication(Configuration conf) throws IOException {
  if (!isRegionReplicaReplicationEnabled(conf)) {
    return;
  }
  ReplicationAdmin repAdmin = new ReplicationAdmin(conf);
  try {
    if (repAdmin.getPeerConfig(REGION_REPLICA_REPLICATION_PEER) == null) {
      ReplicationPeerConfig peerConfig = new ReplicationPeerConfig();
      peerConfig.setClusterKey(ZKConfig.getZooKeeperClusterKey(conf));
      peerConfig.setReplicationEndpointImpl(RegionReplicaReplicationEndpoint.class.getName());
      repAdmin.addPeer(REGION_REPLICA_REPLICATION_PEER, peerConfig, null);
    }
  } catch (ReplicationException ex) {
    throw new IOException(ex);
  } finally {
    repAdmin.close();
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 24, Source: ServerRegionReplicaUtil.java


Example 3: listReplicationPeers

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
@VisibleForTesting
List<ReplicationPeer> listReplicationPeers() {
  Map<String, ReplicationPeerConfig> peers = listPeerConfigs();
  if (peers == null || peers.size() <= 0) {
    return null;
  }
  List<ReplicationPeer> listOfPeers = new ArrayList<ReplicationPeer>(peers.size());
  for (Entry<String, ReplicationPeerConfig> peerEntry : peers.entrySet()) {
    String peerId = peerEntry.getKey();
    try {
      Pair<ReplicationPeerConfig, Configuration> pair = this.replicationPeers.getPeerConf(peerId);
      Configuration peerConf = pair.getSecond();
      ReplicationPeer peer = new ReplicationPeerZKImpl(peerConf, peerId, pair.getFirst(),
          parseTableCFsFromConfig(this.getPeerTableCFs(peerId)));
      listOfPeers.add(peer);
    } catch (ReplicationException e) {
      LOG.warn("Failed to get valid replication peers. "
          + "Error connecting to peer cluster with peerId=" + peerId + ". Error message="
          + e.getMessage());
      LOG.debug("Failure details to get valid replication peers.", e);
      continue;
    }
  }
  return listOfPeers;
}
 
Developer: fengchen8086, Project: ditb, Lines: 26, Source: ReplicationAdmin.java


Example 4: preLogRoll

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
void preLogRoll(Path newLog) throws IOException {
  synchronized (this.walsById) {
    String name = newLog.getName();
    for (ReplicationSourceInterface source : this.sources) {
      try {
        this.replicationQueues.addLog(source.getPeerClusterZnode(), name);
      } catch (ReplicationException e) {
        throw new IOException("Cannot add log to replication queue with id="
            + source.getPeerClusterZnode() + ", filename=" + name, e);
      }
    }
    for (SortedSet<String> wals : this.walsById.values()) {
      if (this.sources.isEmpty()) {
        // If there's no slaves, don't need to keep the old wals since
        // we only consider the last one when a new slave comes in
        wals.clear();
      }
      wals.add(name);
    }
  }

  this.latestPath = newLog;
}
 
Developer: grokcoder, Project: pbase, Lines: 24, Source: ReplicationSourceManager.java


Example 5: ReplicationAdmin

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
/**
 * Constructor that creates a connection to the local ZooKeeper ensemble.
 * @param conf Configuration to use
 * @throws IOException if an internal replication error occurs
 * @throws RuntimeException if replication isn't enabled.
 */
public ReplicationAdmin(Configuration conf) throws IOException {
  if (!conf.getBoolean(HConstants.REPLICATION_ENABLE_KEY,
      HConstants.REPLICATION_ENABLE_DEFAULT)) {
    throw new RuntimeException("hbase.replication isn't true, please " +
        "enable it in order to use replication");
  }
  this.connection = ConnectionFactory.createConnection(conf);
  zkw = createZooKeeperWatcher();
  try {
    this.replicationPeers = ReplicationFactory.getReplicationPeers(zkw, conf, this.connection);
    this.replicationPeers.init();
    this.replicationQueuesClient =
        ReplicationFactory.getReplicationQueuesClient(zkw, conf, this.connection);
    this.replicationQueuesClient.init();

  } catch (ReplicationException e) {
    throw new IOException("Error initializing the replication admin client.", e);
  }
}
 
Developer: grokcoder, Project: pbase, Lines: 26, Source: ReplicationAdmin.java
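As a usage note for Examples 2 and 5, the sketch below shows how client code might create a ReplicationAdmin and add a replication peer while handling ReplicationException. It assumes an HBase 1.x-style client, matching the examples above; the peer id "1" and the cluster key are illustrative placeholders.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
import org.apache.hadoop.hbase.replication.ReplicationException;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class AddPeerSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    ReplicationAdmin admin = new ReplicationAdmin(conf);
    try {
      ReplicationPeerConfig peerConfig = new ReplicationPeerConfig();
      // Placeholder cluster key of the peer cluster (zkQuorum:port:znodeParent).
      peerConfig.setClusterKey("zk1,zk2,zk3:2181:/hbase");
      // "1" is an illustrative peer id; a null table-CFs map replicates all tables.
      admin.addPeer("1", peerConfig, null);
    } catch (ReplicationException e) {
      // Same wrapping pattern as Example 2.
      throw new IOException(e);
    } finally {
      admin.close();
    }
  }
}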


Example 6: init

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
/**
 * Adds a normal source per registered peer cluster and tries to process all
 * old region server hlog queues
 */
protected void init() throws IOException, ReplicationException {
  for (String id : this.replicationPeers.getConnectedPeers()) {
    addSource(id);
  }
  List<String> currentReplicators = this.replicationQueues.getListOfReplicators();
  if (currentReplicators == null || currentReplicators.size() == 0) {
    return;
  }
  List<String> otherRegionServers = replicationTracker.getListOfRegionServers();
  LOG.info("Current list of replicators: " + currentReplicators + " other RSs: "
      + otherRegionServers);

  // Look if there's anything to process after a restart
  for (String rs : currentReplicators) {
    if (!otherRegionServers.contains(rs)) {
      transferQueues(rs);
    }
  }
}
 
Developer: tenggyut, Project: HIndex, Lines: 24, Source: ReplicationSourceManager.java


Example 7: preLogRoll

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
void preLogRoll(Path newLog) throws IOException {

  synchronized (this.hlogsById) {
    String name = newLog.getName();
    for (ReplicationSourceInterface source : this.sources) {
      try {
        this.replicationQueues.addLog(source.getPeerClusterZnode(), name);
      } catch (ReplicationException e) {
        throw new IOException("Cannot add log to replication queue with id="
            + source.getPeerClusterZnode() + ", filename=" + name, e);
      }
    }
    for (SortedSet<String> hlogs : this.hlogsById.values()) {
      if (this.sources.isEmpty()) {
        // If there's no slaves, don't need to keep the old hlogs since
        // we only consider the last one when a new slave comes in
        hlogs.clear();
      }
      hlogs.add(name);
    }
  }

  this.latestPath = newLog;
}
 
Developer: tenggyut, Project: HIndex, Lines: 25, Source: ReplicationSourceManager.java


Example 8: ReplicationAdmin

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
/**
 * Constructor that creates a connection to the local ZooKeeper ensemble.
 * @param conf Configuration to use
 * @throws IOException if an internal replication error occurs
 * @throws RuntimeException if replication isn't enabled.
 */
public ReplicationAdmin(Configuration conf) throws IOException {
  if (!conf.getBoolean(HConstants.REPLICATION_ENABLE_KEY,
      HConstants.REPLICATION_ENABLE_DEFAULT)) {
    throw new RuntimeException("hbase.replication isn't true, please " +
        "enable it in order to use replication");
  }
  this.connection = HConnectionManager.getConnection(conf);
  ZooKeeperWatcher zkw = createZooKeeperWatcher();
  try {
    this.replicationPeers = ReplicationFactory.getReplicationPeers(zkw, conf, this.connection);
    this.replicationPeers.init();
    this.replicationQueuesClient =
        ReplicationFactory.getReplicationQueuesClient(zkw, conf, this.connection);
    this.replicationQueuesClient.init();

  } catch (ReplicationException e) {
    throw new IOException("Error initializing the replication admin client.", e);
  }
}
 
Developer: tenggyut, Project: HIndex, Lines: 26, Source: ReplicationAdmin.java


Example 9: isFileDeletable

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
@Override
public boolean isFileDeletable(FileStatus fStat) {
  Set<String> hfileRefsFromQueue;
  // all members of this class are null if replication is disabled,
  // so do not stop from deleting the file
  if (getConf() == null) {
    return true;
  }

  try {
    hfileRefsFromQueue = rqs.getAllHFileRefs();
  } catch (ReplicationException e) {
    LOG.warn("Failed to read hfile references from zookeeper, skipping checking deletable "
        + "file for " + fStat.getPath());
    return false;
  }
  return !hfileRefsFromQueue.contains(fStat.getPath().getName());
}
 
Developer: apache, Project: hbase, Lines: 19, Source: ReplicationHFileCleaner.java


Example 10: adoptAbandonedQueues

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
private void adoptAbandonedQueues() {
  List<ServerName> currentReplicators = null;
  try {
    currentReplicators = queueStorage.getListOfReplicators();
  } catch (ReplicationException e) {
    server.abort("Failed to get all replicators", e);
    return;
  }
  if (currentReplicators == null || currentReplicators.isEmpty()) {
    return;
  }
  List<ServerName> otherRegionServers = replicationTracker.getListOfRegionServers().stream()
      .map(ServerName::valueOf).collect(Collectors.toList());
  LOG.info(
    "Current list of replicators: " + currentReplicators + " other RSs: " + otherRegionServers);

  // Look if there's anything to process after a restart
  for (ServerName rs : currentReplicators) {
    if (!otherRegionServers.contains(rs)) {
      transferQueues(rs);
    }
  }
}
 
Developer: apache, Project: hbase, Lines: 24, Source: ReplicationSourceManager.java


Example 11: addHFileRefs

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
@Override
public void addHFileRefs(TableName tableName, byte[] family, List<Pair<Path, Path>> pairs)
    throws ReplicationException {
  Map<TableName, List<String>> tableCFMap = replicationPeer.getTableCFs();
  if (tableCFMap != null) {
    List<String> tableCfs = tableCFMap.get(tableName);
    if (tableCFMap.containsKey(tableName)
        && (tableCfs == null || tableCfs.contains(Bytes.toString(family)))) {
      this.queueStorage.addHFileRefs(peerId, pairs);
      metrics.incrSizeOfHFileRefsQueue(pairs.size());
    } else {
      LOG.debug("HFiles will not be replicated belonging to the table " + tableName + " family "
          + Bytes.toString(family) + " to peer id " + peerId);
    }
  } else {
    // user has explicitly not defined any table cfs for replication, means replicate all the
    // data
    this.queueStorage.addHFileRefs(peerId, pairs);
    metrics.incrSizeOfHFileRefsQueue(pairs.size());
  }
}
 
Developer: apache, Project: hbase, Lines: 22, Source: ReplicationSource.java


Example 12: refreshPeerState

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
private void refreshPeerState(String peerId) throws ReplicationException, IOException {
  PeerState newState;
  Lock peerLock = peersLock.acquireLock(peerId);
  try {
    ReplicationPeerImpl peer = replicationSourceManager.getReplicationPeers().getPeer(peerId);
    if (peer == null) {
      throw new ReplicationException("Peer with id=" + peerId + " is not cached.");
    }
    PeerState oldState = peer.getPeerState();
    newState = replicationSourceManager.getReplicationPeers().refreshPeerState(peerId);
    // RS need to start work with the new replication state change
    if (oldState.equals(PeerState.ENABLED) && newState.equals(PeerState.DISABLED)) {
      replicationSourceManager.refreshSources(peerId);
    }
  } finally {
    peerLock.unlock();
  }
}
 
Developer: apache, Project: hbase, Lines: 19, Source: PeerProcedureHandlerImpl.java


Example 13: updatePeerConfig

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
@Override
public void updatePeerConfig(String peerId) throws ReplicationException, IOException {
  Lock peerLock = peersLock.acquireLock(peerId);
  try {
    ReplicationPeerImpl peer = replicationSourceManager.getReplicationPeers().getPeer(peerId);
    if (peer == null) {
      throw new ReplicationException("Peer with id=" + peerId + " is not cached.");
    }
    ReplicationPeerConfig oldConfig = peer.getPeerConfig();
    ReplicationPeerConfig newConfig =
        replicationSourceManager.getReplicationPeers().refreshPeerConfig(peerId);
    // RS need to start work with the new replication config change
    if (!ReplicationUtils.isKeyConfigEqual(oldConfig, newConfig)) {
      replicationSourceManager.refreshSources(peerId);
    }
  } finally {
    peerLock.unlock();
  }
}
 
Developer: apache, Project: hbase, Lines: 20, Source: PeerProcedureHandlerImpl.java


Example 14: checkQueuesDeleted

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
private void checkQueuesDeleted(String peerId)
    throws ReplicationException, DoNotRetryIOException {
  for (ServerName replicator : queueStorage.getListOfReplicators()) {
    List<String> queueIds = queueStorage.getAllQueues(replicator);
    for (String queueId : queueIds) {
      ReplicationQueueInfo queueInfo = new ReplicationQueueInfo(queueId);
      if (queueInfo.getPeerId().equals(peerId)) {
        throw new DoNotRetryIOException("undeleted queue for peerId: " + peerId +
          ", replicator: " + replicator + ", queueId: " + queueId);
      }
    }
  }
  if (queueStorage.getAllPeersFromHFileRefsQueue().contains(peerId)) {
    throw new DoNotRetryIOException("Undeleted queue for peer " + peerId + " in hfile-refs");
  }
}
 
Developer: apache, Project: hbase, Lines: 17, Source: ReplicationPeerManager.java


Example 15: checkUnDeletedQueues

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
public void checkUnDeletedQueues() throws ReplicationException {
  undeletedQueueIds = getUnDeletedQueues();
  undeletedQueueIds.forEach((replicator, queueIds) -> {
    queueIds.forEach(queueId -> {
      ReplicationQueueInfo queueInfo = new ReplicationQueueInfo(queueId);
      String msg = "Undeleted replication queue for removed peer found: " +
        String.format("[removedPeerId=%s, replicator=%s, queueId=%s]", queueInfo.getPeerId(),
          replicator, queueId);
      errorReporter.reportError(HBaseFsck.ErrorReporter.ERROR_CODE.UNDELETED_REPLICATION_QUEUE,
        msg);
    });
  });
  undeletedHFileRefsPeerIds = getUndeletedHFileRefsPeers();
  undeletedHFileRefsPeerIds.stream()
      .map(
        peerId -> "Undeleted replication hfile-refs queue for removed peer " + peerId + " found")
      .forEach(msg -> errorReporter
          .reportError(HBaseFsck.ErrorReporter.ERROR_CODE.UNDELETED_REPLICATION_QUEUE, msg));
}
 
Developer: apache, Project: hbase, Lines: 20, Source: ReplicationChecker.java


Example 16: testIsFileDeletable

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
@Test
public void testIsFileDeletable() throws IOException, ReplicationException {
  // 1. Create a file
  Path file = new Path(root, "testIsFileDeletableWithNoHFileRefs");
  fs.createNewFile(file);
  // 2. Assert file is successfully created
  assertTrue("Test file not created!", fs.exists(file));
  ReplicationHFileCleaner cleaner = new ReplicationHFileCleaner();
  cleaner.setConf(conf);
  // 3. Assert that file as is should be deletable
  assertTrue("Cleaner should allow to delete this file as there is no hfile reference node "
      + "for it in the queue.",
    cleaner.isFileDeletable(fs.getFileStatus(file)));

  List<Pair<Path, Path>> files = new ArrayList<>(1);
  files.add(new Pair<>(null, file));
  // 4. Add the file to hfile-refs queue
  rq.addHFileRefs(peerId, files);
  // 5. Assert file should not be deletable
  assertFalse("Cleaner should not allow to delete this file as there is a hfile reference node "
      + "for it in the queue.",
    cleaner.isFileDeletable(fs.getFileStatus(file)));
}
 
Developer: apache, Project: hbase, Lines: 24, Source: TestReplicationHFileCleaner.java


Example 17: appendReplicationPeerTableCFs

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
@Override
public CompletableFuture<Void> appendReplicationPeerTableCFs(String id,
    Map<TableName, List<String>> tableCfs) {
  if (tableCfs == null) {
    return failedFuture(new ReplicationException("tableCfs is null"));
  }

  CompletableFuture<Void> future = new CompletableFuture<Void>();
  getReplicationPeerConfig(id).whenComplete((peerConfig, error) -> {
    if (!completeExceptionally(future, error)) {
      ReplicationPeerConfig newPeerConfig =
          ReplicationPeerConfigUtil.appendTableCFsToReplicationPeerConfig(tableCfs, peerConfig);
      updateReplicationPeerConfig(id, newPeerConfig).whenComplete((result, err) -> {
        if (!completeExceptionally(future, err)) {
          future.complete(result);
        }
      });
    }
  });
  return future;
}
 
Developer: apache, Project: hbase, Lines: 22, Source: RawAsyncHBaseAdmin.java
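For the asynchronous admin API in Example 17, callers never catch ReplicationException directly: it surfaces as the failure cause of the returned CompletableFuture. The sketch below assumes the HBase 2.x AsyncAdmin client; the peer id "1", table "t1", and column family "cf1" are illustrative placeholders.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class AppendTableCFsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (AsyncConnection conn = ConnectionFactory.createAsyncConnection(conf).get()) {
      Map<TableName, List<String>> tableCfs = new HashMap<>();
      // Append table "t1", column family "cf1" to the table-CFs of peer "1".
      tableCfs.put(TableName.valueOf("t1"), Arrays.asList("cf1"));
      // A ReplicationException on the server side completes the future
      // exceptionally; get() rethrows it wrapped in an ExecutionException.
      conn.getAdmin().appendReplicationPeerTableCFs("1", tableCfs).get();
    }
  }
}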


Example 18: addSource

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
/**
 * Add sources for the given peer cluster on this region server. For the newly added peer, we only
 * need to enqueue the latest log of each wal group and do replication
 * @param id the id of the peer cluster
 * @return the source that was created
 * @throws IOException
 */
protected ReplicationSourceInterface addSource(String id) throws IOException,
    ReplicationException {
  ReplicationPeerConfig peerConfig = replicationPeers.getReplicationPeerConfig(id);
  ReplicationPeer peer = replicationPeers.getPeer(id);
  ReplicationSourceInterface src =
      getReplicationSource(this.conf, this.fs, this, this.replicationQueues,
        this.replicationPeers, server, id, this.clusterId, peerConfig, peer);
  synchronized (this.walsById) {
    this.sources.add(src);
    Map<String, SortedSet<String>> walsByGroup = new HashMap<String, SortedSet<String>>();
    this.walsById.put(id, walsByGroup);
    // Add the latest wal to that source's queue
    synchronized (latestPaths) {
      if (this.latestPaths.size() > 0) {
        for (Path logPath : latestPaths) {
          String name = logPath.getName();
          String walPrefix = DefaultWALProvider.getWALPrefixFromWALName(name);
          SortedSet<String> logs = new TreeSet<String>();
          logs.add(name);
          walsByGroup.put(walPrefix, logs);
          try {
            this.replicationQueues.addLog(id, name);
          } catch (ReplicationException e) {
            String message =
                "Cannot add log to queue when creating a new source, queueId=" + id
                    + ", filename=" + name;
            server.stop(message);
            throw e;
          }
          src.enqueueLog(logPath);
        }
      }
    }
  }
  src.startup();
  return src;
}
 
Developer: fengchen8086, Project: ditb, Lines: 45, Source: ReplicationSourceManager.java


Example 19: startReplicationService

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
/**
 * If replication is enabled and this cluster is a master,
 * it starts the replication manager and the replication sink.
 * @throws IOException
 */
public void startReplicationService() throws IOException {
  if (this.replication) {
    try {
      this.replicationManager.init();
    } catch (ReplicationException e) {
      throw new IOException(e);
    }
    this.replicationSink = new ReplicationSink(this.conf, this.server);
    this.scheduleThreadPool.scheduleAtFixedRate(
      new ReplicationStatisticsThread(this.replicationSink, this.replicationManager),
      statsThreadPeriod, statsThreadPeriod, TimeUnit.SECONDS);
  }
}
 
Developer: fengchen8086, Project: ditb, Lines: 19, Source: Replication.java


Example 20: testRegionReplicaReplicationPeerIsCreated

import org.apache.hadoop.hbase.replication.ReplicationException; // import the required package/class
@Test
public void testRegionReplicaReplicationPeerIsCreated() throws IOException, ReplicationException {
  // create a table with region replicas. Check whether the replication peer is created
  // and replication started.
  ReplicationAdmin admin = new ReplicationAdmin(HTU.getConfiguration());
  String peerId = "region_replica_replication";

  if (admin.getPeerConfig(peerId) != null) {
    admin.removePeer(peerId);
  }

  HTableDescriptor htd = HTU.createTableDescriptor(
    "testReplicationPeerIsCreated_no_region_replicas");
  HTU.getHBaseAdmin().createTable(htd);
  ReplicationPeerConfig peerConfig = admin.getPeerConfig(peerId);
  assertNull(peerConfig);

  htd = HTU.createTableDescriptor("testReplicationPeerIsCreated");
  htd.setRegionReplication(2);
  HTU.getHBaseAdmin().createTable(htd);

  // assert peer configuration is correct
  peerConfig = admin.getPeerConfig(peerId);
  assertNotNull(peerConfig);
  assertEquals(peerConfig.getClusterKey(), ZKConfig.getZooKeeperClusterKey(
      HTU.getConfiguration()));
  assertEquals(peerConfig.getReplicationEndpointImpl(),
    RegionReplicaReplicationEndpoint.class.getName());
  admin.close();
}
 
Developer: fengchen8086, Project: ditb, Lines: 31, Source: TestRegionReplicaReplicationEndpoint.java



Note: The org.apache.hadoop.hbase.replication.ReplicationException class examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by their respective authors; copyright in the source code remains with the original authors, and any distribution or use should follow the corresponding project's license. Please do not reproduce without permission.

