Java TaskTracker Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker. If you are wondering what the TaskTracker class does, how to use it, or where to find working examples, the curated code examples below may help.



The TaskTracker class belongs to the org.apache.hadoop.mapreduce.server.jobtracker package. Twenty code examples of the class are shown below, sorted by popularity by default.
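
For quick orientation before the examples, here is a minimal sketch of how a TaskTracker instance is created, given a status, and queried. It is modeled on the test fixtures in Examples 7, 10 and 18 below and is illustrative only: the TaskTrackerStatus constructor signature differs between Hadoop branches (compare Examples 7 and 18), and the class name TaskTrackerUsageSketch is made up for this sketch.

// Illustrative sketch only. It is declared in org.apache.hadoop.mapred because
// TaskTrackerStatus and TaskStatus may be package-private in some MR1 branches,
// just like the test fixtures quoted below.
package org.apache.hadoop.mapred;

import java.util.ArrayList;

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker;

public class TaskTrackerUsageSketch {
  public static void main(String[] args) {
    // A TaskTracker handle is identified by its tracker name.
    TaskTracker tt = new TaskTracker("tt1");

    // Attach a status report: tracker name, host, HTTP port, task reports,
    // failure count, and map/reduce slot capacity. This follows the
    // 7-argument constructor used in Example 18; other branches take extra
    // arguments (see Examples 7 and 10), so check your Hadoop version.
    tt.setStatus(new TaskTrackerStatus("tt1", "tt1.host", 1,
        new ArrayList<TaskStatus>(), 0, 2, 2));

    // JobTracker-side code (Examples 1-6) reads the status back to decide
    // whether a tracker is active, blacklisted, or due for decommissioning.
    TaskTrackerStatus status = tt.getStatus();
    System.out.println(status.getTrackerName() + " runs on " + status.getHost());
  }
}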

Example 1: activeTaskTrackers

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Get the active task tracker statuses in the cluster
 *  
 * @return {@link Collection} of active {@link TaskTrackerStatus} 
 */
// This method is synchronized to make sure that the locking order 
// "taskTrackers lock followed by faultyTrackers.potentiallyFaultyTrackers 
// lock" is under JobTracker lock to avoid deadlocks.
synchronized public Collection<TaskTrackerStatus> activeTaskTrackers() {
  Collection<TaskTrackerStatus> activeTrackers = 
    new ArrayList<TaskTrackerStatus>();
  synchronized (taskTrackers) {
    for ( TaskTracker tt : taskTrackers.values()) {
      TaskTrackerStatus status = tt.getStatus();
      if (!faultyTrackers.isBlacklisted(status.getHost())) {
        activeTrackers.add(status);
      }
    }
  }
  return activeTrackers;
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 22, Source: JobTracker.java


Example 2: taskTrackerNames

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Get the active and blacklisted task tracker names in the cluster. The first
 * element in the returned list contains the list of active tracker names.
 * The second element in the returned list contains the list of blacklisted
 * tracker names. 
 */
// This method is synchronized to make sure that the locking order 
// "taskTrackers lock followed by faultyTrackers.potentiallyFaultyTrackers 
// lock" is under JobTracker lock to avoid deadlocks.
synchronized public List<List<String>> taskTrackerNames() {
  List<String> activeTrackers = 
    new ArrayList<String>();
  List<String> blacklistedTrackers = 
    new ArrayList<String>();
  synchronized (taskTrackers) {
    for (TaskTracker tt : taskTrackers.values()) {
      TaskTrackerStatus status = tt.getStatus();
      if (!faultyTrackers.isBlacklisted(status.getHost())) {
        activeTrackers.add(status.getTrackerName());
      } else {
        blacklistedTrackers.add(status.getTrackerName());
      }
    }
  }
  List<List<String>> result = new ArrayList<List<String>>(2);
  result.add(activeTrackers);
  result.add(blacklistedTrackers);
  return result;
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 30, Source: JobTracker.java


Example 3: blacklistedTaskTrackers

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Get the blacklisted task tracker statuses in the cluster
 *  
 * @return {@link Collection} of blacklisted {@link TaskTrackerStatus} 
 */
// This method is synchronized to make sure that the locking order 
// "taskTrackers lock followed by faultyTrackers.potentiallyFaultyTrackers 
// lock" is under JobTracker lock to avoid deadlocks.
synchronized public Collection<TaskTrackerStatus> blacklistedTaskTrackers() {
  Collection<TaskTrackerStatus> blacklistedTrackers = 
    new ArrayList<TaskTrackerStatus>();
  synchronized (taskTrackers) {
    for (TaskTracker tt : taskTrackers.values()) {
      TaskTrackerStatus status = tt.getStatus(); 
      if (faultyTrackers.isBlacklisted(status.getHost())) {
        blacklistedTrackers.add(status);
      }
    }
  }    
  return blacklistedTrackers;
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 22, Source: JobTracker.java


Example 4: addNewTracker

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Adds a new node to the jobtracker. It involves adding it to the expiry
 * thread and adding it for resolution
 * 
 * Assumes JobTracker, taskTrackers and trackerExpiryQueue are locked on entry
 * 
 * @param taskTracker Task Tracker
 */
private void addNewTracker(TaskTracker taskTracker) {
  TaskTrackerStatus status = taskTracker.getStatus();
  trackerExpiryQueue.add(status);

  //  Register the tracker if it's not registered
  String hostname = status.getHost();
  if (getNode(status.getTrackerName()) == null) {
    // Making the network location resolution inline .. 
    resolveAndAddToTopology(hostname);
  }

  // add it to the set of tracker per host
  Set<TaskTracker> trackers = hostnameToTaskTracker.get(hostname);
  if (trackers == null) {
    trackers = Collections.synchronizedSet(new HashSet<TaskTracker>());
    hostnameToTaskTracker.put(hostname, trackers);
  }
  statistics.taskTrackerAdded(status.getTrackerName());
  getInstrumentation().addTrackers(1);
  LOG.info("Adding tracker " + status.getTrackerName() + " to host " 
           + hostname);
  trackers.add(taskTracker);
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 32, Source: JobTracker.java


Example 5: refreshHosts

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
private synchronized void refreshHosts() throws IOException {
  // Reread the config to get mapred.hosts and mapred.hosts.exclude filenames.
  // Update the file names and refresh internal includes and excludes list
  LOG.info("Refreshing hosts information");
  Configuration conf = new Configuration();

  hostsReader.updateFileNames(conf.get("mapred.hosts",""), 
                              conf.get("mapred.hosts.exclude", ""));
  hostsReader.refresh();
  
  Set<String> excludeSet = new HashSet<String>();
  for(Map.Entry<String, TaskTracker> eSet : taskTrackers.entrySet()) {
    String trackerName = eSet.getKey();
    TaskTrackerStatus status = eSet.getValue().getStatus();
    // Exclude the tracker if it is not in the hosts list, or it is in the hosts list but also excluded
    if (!inHostsList(status) || inExcludedHostsList(status)) {
        excludeSet.add(status.getHost()); // add to rejected trackers
    }
  }
  decommissionNodes(excludeSet);
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 22, Source: JobTracker.java


Example 6: decommissionNodes

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
synchronized void decommissionNodes(Set<String> hosts) 
throws IOException {  
  LOG.info("Decommissioning " + hosts.size() + " nodes");
  // create a list of tracker hostnames
  synchronized (taskTrackers) {
    synchronized (trackerExpiryQueue) {
      int trackersDecommissioned = 0;
      for (String host : hosts) {
        LOG.info("Decommissioning host " + host);
        Set<TaskTracker> trackers = hostnameToTaskTracker.remove(host);
        if (trackers != null) {
          for (TaskTracker tracker : trackers) {
            LOG.info("Decommission: Losing tracker " + tracker.getTrackerName() + 
                     " on host " + host);
            removeTracker(tracker);
          }
          trackersDecommissioned += trackers.size();
        }
        LOG.info("Host " + host + " is ready for decommissioning");
      }
      getInstrumentation().setDecommissionedTrackers(trackersDecommissioned);
    }
  }
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 25, Source: JobTracker.java


Example 7: FakeTaskTrackerManager

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
public FakeTaskTrackerManager() {
  JobConf conf = new JobConf();
  queueManager = new QueueManager(conf);
  
  TaskTracker tt1 = new TaskTracker("tt1");
  tt1.setStatus(new TaskTrackerStatus("tt1", "http", "tt1.host", 1,
                new ArrayList<TaskStatus>(), 0, 0,
                maxMapTasksPerTracker, maxReduceTasksPerTracker));
  trackers.put("tt1", tt1);
  
  TaskTracker tt2 = new TaskTracker("tt2");
  tt2.setStatus(new TaskTrackerStatus("tt2", "http", "tt2.host", 2,
                new ArrayList<TaskStatus>(), 0, 0,
                maxMapTasksPerTracker, maxReduceTasksPerTracker));
  trackers.put("tt2", tt2);
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 17, Source: TestJobQueueTaskScheduler.java


Example 8: assignTasks

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
@Override
public List<Task> assignTasks(TaskTracker tt) {
  if (unreserveSlots) {
    tt.unreserveSlots(TaskType.MAP, fakeJob);
    tt.unreserveSlots(TaskType.REDUCE, fakeJob);
  } else {
    int currCount = 1;
    if (reservedCounts.containsKey(tt)) {
      currCount = reservedCounts.get(tt) + 1;
    }
    reservedCounts.put(tt, currCount);
    tt.reserveSlots(TaskType.MAP, fakeJob, currCount);
    tt.reserveSlots(TaskType.REDUCE, fakeJob, currCount);
  }
  return new ArrayList<Task>();  
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 17, Source: TestClusterStatus.java


Example 9: testDefaultResourceValues

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Test that verifies default values are configured and reported correctly.
 * 
 * @throws Exception
 */
public void testDefaultResourceValues()
    throws Exception {
  JobConf conf = new JobConf();
  try {
    // Memory values are disabled by default.
    conf.setClass(
        org.apache.hadoop.mapred.TaskTracker.TT_RESOURCE_CALCULATOR_PLUGIN,       
        DummyResourceCalculatorPlugin.class, ResourceCalculatorPlugin.class);
    setUpCluster(conf);
    JobConf jobConf = miniMRCluster.createJobConf();
    jobConf.setClass(
        org.apache.hadoop.mapred.TaskTracker.TT_RESOURCE_CALCULATOR_PLUGIN,
        DummyResourceCalculatorPlugin.class, ResourceCalculatorPlugin.class);
    runSleepJob(jobConf);
    verifyTestResults();
  } finally {
    tearDownCluster();
  }
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 25, Source: TestTTResourceReporting.java


Example 10: FakeTaskTrackerManager

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
public FakeTaskTrackerManager() {
  TaskTracker tt1 = new TaskTracker("tt1");
  tt1.setStatus(new TaskTrackerStatus("tt1", "http", "tt1.host", 1,
                                      new ArrayList<TaskStatus>(), 0, 0,
                                      maxMapTasksPerTracker, 
                                      maxReduceTasksPerTracker));
  trackers.put("tt1", tt1);
  
  TaskTracker tt2 = new TaskTracker("tt2");
  tt2.setStatus(new TaskTrackerStatus("tt2", "http", "tt2.host", 2,
                                      new ArrayList<TaskStatus>(), 0, 0,
                                      maxMapTasksPerTracker, 
                                      maxReduceTasksPerTracker));
  trackers.put("tt2", tt2);

}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines of code: 17, Source: TestFairScheduler.java


Example 11: activeTaskTrackers

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Get the active task tracker statuses in the cluster
 *
 * @return {@link Collection} of active {@link TaskTrackerStatus}
 */
// This method is synchronized to make sure that the locking order
// "taskTrackers lock followed by faultyTrackers.potentiallyFaultyTrackers
// lock" is under JobTracker lock to avoid deadlocks.
synchronized public Collection<TaskTrackerStatus> activeTaskTrackers() {
  Collection<TaskTrackerStatus> activeTrackers =
    new ArrayList<TaskTrackerStatus>();
  synchronized (taskTrackers) {
    for ( TaskTracker tt : taskTrackers.values()) {
      TaskTrackerStatus status = tt.getStatus();
      if (!faultyTrackers.isBlacklisted(status.getHost())) {
        activeTrackers.add(status);
      }
    }
  }
  return activeTrackers;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 22, Source: JobTracker.java


Example 12: taskTrackerNames

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Get the active and blacklisted task tracker names in the cluster. The first
 * element in the returned list contains the list of active tracker names.
 * The second element in the returned list contains the list of blacklisted
 * tracker names.
 */
// This method is synchronized to make sure that the locking order
// "taskTrackers lock followed by faultyTrackers.potentiallyFaultyTrackers
// lock" is under JobTracker lock to avoid deadlocks.
synchronized public List<List<String>> taskTrackerNames() {
  List<String> activeTrackers =
    new ArrayList<String>();
  List<String> blacklistedTrackers =
    new ArrayList<String>();
  synchronized (taskTrackers) {
    for (TaskTracker tt : taskTrackers.values()) {
      TaskTrackerStatus status = tt.getStatus();
      if (!faultyTrackers.isBlacklisted(status.getHost())) {
        activeTrackers.add(status.getTrackerName());
      } else {
        blacklistedTrackers.add(status.getTrackerName());
      }
    }
  }
  List<List<String>> result = new ArrayList<List<String>>(2);
  result.add(activeTrackers);
  result.add(blacklistedTrackers);
  return result;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 30, Source: JobTracker.java


Example 13: blacklistedTaskTrackers

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Get the blacklisted task tracker statuses in the cluster
 *
 * @return {@link Collection} of blacklisted {@link TaskTrackerStatus}
 */
// This method is synchronized to make sure that the locking order
// "taskTrackers lock followed by faultyTrackers.potentiallyFaultyTrackers
// lock" is under JobTracker lock to avoid deadlocks.
synchronized public Collection<TaskTrackerStatus> blacklistedTaskTrackers() {
  Collection<TaskTrackerStatus> blacklistedTrackers =
    new ArrayList<TaskTrackerStatus>();
  synchronized (taskTrackers) {
    for (TaskTracker tt : taskTrackers.values()) {
      TaskTrackerStatus status = tt.getStatus();
      if (faultyTrackers.isBlacklisted(status.getHost())) {
        blacklistedTrackers.add(status);
      }
    }
  }
  return blacklistedTrackers;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 22, Source: JobTracker.java


Example 14: addNewTracker

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Adds a new node to the jobtracker. It involves adding it to the expiry
 * thread and adding it for resolution
 *
 * Assumes JobTracker, taskTrackers and trackerExpiryQueue are locked on entry
 *
 * @param taskTracker Task Tracker
 */
void addNewTracker(TaskTracker taskTracker) {
  TaskTrackerStatus status = taskTracker.getStatus();
  trackerExpiryQueue.add(status);

  //  Register the tracker if it's not registered
  String hostname = status.getHost();
  if (getNode(status.getTrackerName()) == null) {
    // Making the network location resolution inline ..
    resolveAndAddToTopology(hostname);
  }

  // add it to the set of tracker per host
  Set<TaskTracker> trackers = hostnameToTaskTracker.get(hostname);
  if (trackers == null) {
    trackers = Collections.synchronizedSet(new HashSet<TaskTracker>());
    hostnameToTaskTracker.put(hostname, trackers);
  }
  statistics.taskTrackerAdded(status.getTrackerName());
  getInstrumentation().addTrackers(1);
  LOG.info("Adding tracker " + status.getTrackerName() + " to host "
           + hostname);
  trackers.add(taskTracker);
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 32, Source: JobTracker.java


Example 15: refreshHosts

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
private synchronized void refreshHosts() throws IOException {
  // Reread the config to get mapred.hosts and mapred.hosts.exclude filenames.
  // Update the file names and refresh internal includes and excludes list
  LOG.info("Refreshing hosts information");
  Configuration conf = new Configuration();

  hostsReader.updateFileNames(conf.get("mapred.hosts",""),
                              conf.get("mapred.hosts.exclude", ""));
  hostsReader.refresh();

  Set<String> excludeSet = new HashSet<String>();
  for(Map.Entry<String, TaskTracker> eSet : taskTrackers.entrySet()) {
    String trackerName = eSet.getKey();
    TaskTrackerStatus status = eSet.getValue().getStatus();
    // Exclude the tracker if it is not in the hosts list, or it is in the hosts list but also excluded
    if (!inHostsList(status) || inExcludedHostsList(status)) {
        excludeSet.add(status.getHost()); // add to rejected trackers
    }
  }
  decommissionNodes(excludeSet);
  int totalExcluded = hostsReader.getExcludedHosts().size();
  getInstrumentation().setDecommissionedTrackers(totalExcluded);
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 24, Source: JobTracker.java


Example 16: decommissionNodes

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
synchronized void decommissionNodes(Set<String> hosts)
throws IOException {
  LOG.info("Decommissioning " + hosts.size() + " nodes");
  // create a list of tracker hostnames
  synchronized (taskTrackers) {
    synchronized (trackerExpiryQueue) {
      int trackersDecommissioned = 0;
      for (String host : hosts) {
        LOG.info("Decommissioning host " + host);
        Set<TaskTracker> trackers = hostnameToTaskTracker.remove(host);
        if (trackers != null) {
          for (TaskTracker tracker : trackers) {
            LOG.info("Decommission: Losing tracker " + tracker.getTrackerName() +
                     " on host " + host);
            removeTracker(tracker);
          }
          trackersDecommissioned += trackers.size();
        }
        LOG.info("Host " + host + " is ready for decommissioning");
      }
    }
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 24, Source: JobTracker.java


Example 17: getDeadNodes

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Returns a set of dead nodes. (nodes that are expected to be alive)
 */
public Collection<String> getDeadNodes() {
  List<String> activeHosts = new ArrayList<String>();
  synchronized(taskTrackers) {
    for (TaskTracker tt : taskTrackers.values()) {
      activeHosts.add(tt.getStatus().getHost());
    }
  }
  // dead hosts are the difference between active and known hosts
  // We don't consider a blacklisted host to be dead.
  Set<String> knownHosts = new HashSet<String>(hostsReader.getHosts());
  knownHosts.removeAll(activeHosts);
  // Also remove the excluded nodes as getHosts() returns them as well
  knownHosts.removeAll(hostsReader.getExcludedHosts());
  Set<String> deadHosts = knownHosts;
  return deadHosts;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 20, Source: JobTracker.java


Example 18: FakeTaskTrackerManager

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
public FakeTaskTrackerManager() {
  JobConf conf = new JobConf();
  queueManager = new QueueManager(conf);
  
  TaskTracker tt1 = new TaskTracker("tt1");
  tt1.setStatus(new TaskTrackerStatus("tt1", "tt1.host", 1,
                new ArrayList<TaskStatus>(), 0,
                maxMapTasksPerTracker, maxReduceTasksPerTracker));
  trackers.put("tt1", tt1);
  
  TaskTracker tt2 = new TaskTracker("tt2");
  tt2.setStatus(new TaskTrackerStatus("tt2", "tt2.host", 2,
                new ArrayList<TaskStatus>(), 0,
                maxMapTasksPerTracker, maxReduceTasksPerTracker));
  trackers.put("tt2", tt2);
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 17, Source: TestJobQueueTaskScheduler.java


Example 19: testDefaultResourceValues

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
/**
 * Test that verifies default values are configured and reported correctly.
 * 
 * @throws Exception
 */
@Test(timeout=60000)
public void testDefaultResourceValues()
    throws Exception {
  JobConf conf = new JobConf();
  try {
    // Memory values are disabled by default.
    conf.setClass(
        org.apache.hadoop.mapred.TaskTracker.MAPRED_TASKTRACKER_MEMORY_CALCULATOR_PLUGIN_PROPERTY,
        DummyResourceCalculatorPlugin.class, ResourceCalculatorPlugin.class);
    setUpCluster(conf);
    JobConf jobConf = miniMRCluster.createJobConf();
    jobConf.setClass(
        org.apache.hadoop.mapred.TaskTracker.MAPRED_TASKTRACKER_MEMORY_CALCULATOR_PLUGIN_PROPERTY,
        DummyResourceCalculatorPlugin.class, ResourceCalculatorPlugin.class);
    runSleepJob(jobConf);
    verifyTestResults();
  } finally {
    tearDownCluster();
  }
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 26, Source: TestTTResourceReporting.java


Example 20: getClosestLocality

import org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker; // import the required package/class
private int getClosestLocality(TaskTracker taskTracker, RawSplit split) {
  int locality = 2;

  Node taskTrackerNode = jobtracker
      .getNode(taskTracker.getStatus().getHost());
  if (taskTrackerNode == null) {
    throw new IllegalArgumentException(
        "Cannot determine network topology node for TaskTracker "
            + taskTracker.getTrackerName());
  }
  for (String location : split.getLocations()) {
    Node dataNode = jobtracker.getNode(location);
    if (dataNode == null) {
      throw new IllegalArgumentException(
          "Cannot determine network topology node for split location "
              + location);
    }
    locality = Math.min(locality, jobtracker.clusterMap.getDistance(
        taskTrackerNode, dataNode));
  }
  return locality;
}
 
Developer ID: rhli, Project: hadoop-EAR, Lines of code: 23, Source: SimulatorJobInProgress.java



Note: The org.apache.hadoop.mapreduce.server.jobtracker.TaskTracker examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The snippets are taken from open-source projects contributed by their respective developers; copyright remains with the original authors, and any distribution or use should follow the license of the corresponding project. Do not reproduce without permission.

