
Java Partition Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hive.metastore.api.Partition. If you are wondering what the Partition class is for, how it is used, or what real-world code using it looks like, the curated examples below should help.



The Partition class belongs to the org.apache.hadoop.hive.metastore.api package. Twenty code examples of the class are shown below, ordered by popularity by default.
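
Before diving into the examples, here is a minimal, self-contained sketch of how a Partition is typically constructed and registered through the metastore client. The database name, table name, partition column, and location are hypothetical placeholders; it assumes a table partitioned by a single column named partcol already exists in your metastore.

import java.util.Arrays;
import java.util.HashMap;

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;
import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
import org.apache.hadoop.hive.metastore.api.Table;

public class PartitionExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical names; the table is assumed to exist and to be partitioned by "partcol".
    String dbName = "my_database";
    String tableName = "my_table";

    HiveMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());
    try {
      Table table = client.getTable(dbName, tableName);

      // Build the partition; the values list must line up with the table's partition keys.
      Partition partition = new Partition();
      partition.setDbName(dbName);
      partition.setTableName(tableName);
      partition.setValues(Arrays.asList("1"));
      partition.setParameters(new HashMap<String, String>());

      // Copy the table's storage descriptor and point it at the partition directory.
      StorageDescriptor sd = new StorageDescriptor(table.getSd());
      sd.setLocation(table.getSd().getLocation() + "/partcol=1");
      partition.setSd(sd);

      client.add_partition(partition);
    } finally {
      client.close();
    }
  }
}

Examples 15 and 20 below use the same setter-based construction; example 16 shows the full Thrift constructor that takes the values, database, table, timestamps, storage descriptor, and parameters in one call.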

Example 1: PartitionedTablePathResolver

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
PartitionedTablePathResolver(IMetaStoreClient metastore, Table table)
    throws NoSuchObjectException, MetaException, TException {
  this.metastore = metastore;
  this.table = table;
  LOG.debug("Table '{}' is partitioned", Warehouse.getQualifiedName(table));
  tableBaseLocation = locationAsPath(table);
  List<Partition> onePartition = metastore.listPartitions(table.getDbName(), table.getTableName(), (short) 1);
  if (onePartition.isEmpty()) {
    LOG.warn("Table '{}' has no partitions, perhaps you can simply delete: {}.", Warehouse.getQualifiedName(table),
        tableBaseLocation);
    throw new ConfigurationException();
  }
  Path partitionLocation = locationAsPath(onePartition.get(0));
  int branches = partitionLocation.depth() - tableBaseLocation.depth();
  String globSuffix = StringUtils.repeat("*", "/", branches);
  globPath = new Path(tableBaseLocation, globSuffix);
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 18; Source file: PartitionedTablePathResolver.java


Example 2: addPartition

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Test
public void addPartition() throws Exception {
  String tableName = "my_table";
  createPartitionedTable(DATABASE, tableName);

  try (Connection connection = DriverManager.getConnection(server.connectionURL());
      Statement statement = connection.createStatement()) {
    String addPartitionHql = String.format("ALTER TABLE %s.%s ADD PARTITION (partcol=1)", DATABASE, tableName);
    statement.execute(addPartitionHql);
  }

  HiveMetaStoreClient client = server.newClient();
  try {
    List<Partition> partitions = client.listPartitions(DATABASE, tableName, (short) -1);
    assertThat(partitions.size(), is(1));
    assertThat(partitions.get(0).getDbName(), is(DATABASE));
    assertThat(partitions.get(0).getTableName(), is(tableName));
    assertThat(partitions.get(0).getValues(), is(Arrays.asList("1")));
    assertThat(partitions.get(0).getSd().getLocation(),
        is(String.format("file:%s/%s/%s/partcol=1", server.temporaryFolder.getRoot(), DATABASE, tableName)));
  } finally {
    client.close();
  }
}
 
Developer: HotelsDotCom; Project: beeju; Lines of code: 25; Source file: HiveServer2JUnitRuleTest.java


Example 3: getMetastorePaths

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Override
public Set<Path> getMetastorePaths(short batchSize, int expectedPathCount)
  throws NoSuchObjectException, MetaException, TException {
  Set<Path> metastorePaths = new HashSet<>(expectedPathCount);
  PartitionIterator partitionIterator = new PartitionIterator(metastore, table, batchSize);
  while (partitionIterator.hasNext()) {
    Partition partition = partitionIterator.next();
    Path location = PathUtils.normalise(locationAsPath(partition));
    if (!location.toString().toLowerCase().startsWith(tableBaseLocation.toString().toLowerCase())) {
      LOG.error("Check your configuration: '{}' does not appear to be part of '{}'.", location, tableBaseLocation);
      throw new ConfigurationException();
    }
    metastorePaths.add(location);
  }
  return metastorePaths;
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 17; Source file: PartitionedTablePathResolver.java


Example 4: HivePartition

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@JsonCreator
public HivePartition(@JsonProperty("values") List<String> values, @JsonProperty("tableName") String tableName, @JsonProperty("dbName") String dbName, @JsonProperty("createTime") int createTime,
                     @JsonProperty("lastAccessTime") int lastAccessTime,  @JsonProperty("sd") StorageDescriptorWrapper sd,
                     @JsonProperty("parameters") Map<String, String> parameters
) {
  this.values = values;
  this.tableName = tableName;
  this.dbName = dbName;
  this.createTime = createTime;
  this.lastAccessTime = lastAccessTime;
  this.sd = sd;
  this.parameters = parameters;

  StorageDescriptor sdUnwrapped = sd.getSd();
  this.partition = new org.apache.hadoop.hive.metastore.api.Partition(values, tableName, dbName, createTime, lastAccessTime, sdUnwrapped, parameters);
}
 
Developer: skhalifa; Project: QDrill; Lines of code: 17; Source file: HiveTable.java


Example 5: hasNext

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Override
public boolean hasNext() {
  if (batch.hasNext()) {
    return true;
  }
  if (partitionNames.hasNext()) {
    List<String> names = partitionNames.next();
    try {
      List<Partition> partitions = metastore.getPartitionsByNames(table.getDbName(), table.getTableName(), names);
      count += partitions.size();
      LOG.debug("Retrieved {} partitions, total: {}.", partitions.size(), count);
      batch = partitions.iterator();
    } catch (TException e) {
      throw new RuntimeException(e);
    }
  }
  return batch.hasNext();
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 19; Source file: PartitionIterator.java


Example 6: tablesAreDifferent

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Test
public void tablesAreDifferent() throws Exception {
  Table sourceTable = catalog.client().getTable(DATABASE, SOURCE_TABLE);
  sourceTable.getParameters().put("com.company.team", "value");
  catalog.client().alter_table(DATABASE, SOURCE_TABLE, sourceTable);

  // Reload table object
  sourceTable = catalog.client().getTable(DATABASE, SOURCE_TABLE);
  Table replicaTable = catalog.client().getTable(DATABASE, REPLICA_TABLE);

  HiveDifferences
      .builder(diffListener)
      .comparatorRegistry(comparatorRegistry)
      .source(configuration, sourceTable, new PartitionIterator(catalog.client(), sourceTable, PARTITION_BATCH_SIZE))
      .replica(Optional.of(replicaTable),
          Optional.of(new BufferedPartitionFetcher(catalog.client(), replicaTable, PARTITION_BATCH_SIZE)))
      .checksumFunction(checksumFunction)
      .build()
      .run();
  verify(diffListener, times(1)).onChangedTable(anyList());
  verify(diffListener, never()).onNewPartition(anyString(), any(Partition.class));
  verify(diffListener, never()).onChangedPartition(anyString(), any(Partition.class), anyList());
  verify(diffListener, never()).onDataChanged(anyString(), any(Partition.class));
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 25; Source file: HiveDifferencesIntegrationTest.java


Example 7: HdfsSnapshotLocationManager

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
HdfsSnapshotLocationManager(
    HiveConf sourceHiveConf,
    String eventId,
    Table sourceTable,
    boolean snapshotsDisabled,
    String tableBasePath,
    SourceCatalogListener sourceCatalogListener) throws IOException {
  this(sourceHiveConf, eventId, sourceTable, Collections.<Partition> emptyList(), snapshotsDisabled, tableBasePath,
      sourceCatalogListener);
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 11; Source file: HdfsSnapshotLocationManager.java


Example 8: getLocationManager

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
public SourceLocationManager getLocationManager(
    Table table,
    List<Partition> partitions,
    String eventId,
    Map<String, Object> copierOptions)
  throws IOException {
  if (MetaStoreUtils.isView(table)) {
    return new ViewLocationManager();
  }
  HdfsSnapshotLocationManager hdfsSnapshotLocationManager = new HdfsSnapshotLocationManager(getHiveConf(), eventId,
      table, partitions, snapshotsDisabled, sourceTableLocation, sourceCatalogListener);
  boolean ignoreMissingFolder = MapUtils.getBooleanValue(copierOptions,
      CopierOptions.IGNORE_MISSING_PARTITION_FOLDER_ERRORS, false);
  if (ignoreMissingFolder) {
    return new FilterMissingPartitionsLocationManager(hdfsSnapshotLocationManager, getHiveConf());
  }
  return hdfsSnapshotLocationManager;
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 19; Source file: Source.java


Example 9: setupHiveTables

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
private void setupHiveTables() throws TException, IOException {
  List<FieldSchema> partitionKeys = Lists.newArrayList(newFieldSchema("p1"), newFieldSchema("p2"));

  File tableLocation = new File("db1", "table1");
  StorageDescriptor sd = newStorageDescriptor(tableLocation, "col0");
  table1 = newTable("table1", "db1", partitionKeys, sd);
  Partition partition1 = newPartition(table1, "value1", "value2");
  Partition partition2 = newPartition(table1, "value11", "value22");
  table1Partitions = Arrays.asList(partition1, partition2);
  table1PartitionNames = Arrays.asList(Warehouse.makePartName(partitionKeys, partition1.getValues()),
      Warehouse.makePartName(partitionKeys, partition2.getValues()));

  File tableLocation2 = new File("db2", "table2");
  StorageDescriptor sd2 = newStorageDescriptor(tableLocation2, "col0");
  table2 = newTable("table2", "db2", partitionKeys, sd2);
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 17; Source file: DiffGeneratedPartitionPredicateTest.java


Example 10: noMatchingPartitions

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Test
public void noMatchingPartitions() throws Exception {
  PartitionsAndStatistics emptyPartitionsAndStats = new PartitionsAndStatistics(sourceTable.getPartitionKeys(),
      Collections.<Partition> emptyList(), Collections.<String, List<ColumnStatisticsObj>> emptyMap());
  when(source.getPartitions(sourceTable, PARTITION_PREDICATE, MAX_PARTITIONS)).thenReturn(emptyPartitionsAndStats);
  when(source.getLocationManager(sourceTable, Collections.<Partition> emptyList(), EVENT_ID, copierOptions))
      .thenReturn(sourceLocationManager);

  PartitionedTableReplication replication = new PartitionedTableReplication(DATABASE, TABLE, partitionPredicate,
      source, replica, copierFactoryManager, eventIdFactory, targetTableLocation, DATABASE, TABLE, copierOptions,
      listener);
  replication.replicate();

  verifyZeroInteractions(copier);
  InOrder replicationOrder = inOrder(sourceLocationManager, replica, replicaLocationManager, listener);
  replicationOrder.verify(replica).validateReplicaTable(DATABASE, TABLE);
  replicationOrder.verify(replica).updateMetadata(EVENT_ID, sourceTableAndStatistics, DATABASE, TABLE,
      replicaLocationManager);
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 20; Source file: PartitionedTableReplicationTest.java


Example 11: noMatchingPartitions

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Test
public void noMatchingPartitions() throws Exception {
  PartitionsAndStatistics emptyPartitionsAndStats = new PartitionsAndStatistics(sourceTable.getPartitionKeys(),
      Collections.<Partition> emptyList(), Collections.<String, List<ColumnStatisticsObj>> emptyMap());
  when(source.getPartitions(sourceTable, PARTITION_PREDICATE, MAX_PARTITIONS)).thenReturn(emptyPartitionsAndStats);
  when(source.getLocationManager(sourceTable, Collections.<Partition> emptyList(), EVENT_ID, copierOptions))
      .thenReturn(sourceLocationManager);

  PartitionedTableMetadataUpdateReplication replication = new PartitionedTableMetadataUpdateReplication(DATABASE,
      TABLE, partitionPredicate, source, replica, eventIdFactory, replicaLocation, DATABASE, TABLE);
  replication.replicate();

  verify(replica).validateReplicaTable(DATABASE, TABLE);
  verify(replica).updateMetadata(eq(EVENT_ID), eq(sourceTableAndStatistics), eq(DATABASE), eq(TABLE),
      any(MetadataUpdateReplicaLocationManager.class));
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 17; Source file: PartitionedTableMetadataUpdateReplicationTest.java


Example 12: typical

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Test
public void typical() throws Exception {
  List<FieldSchema> partitionKeys = Lists.newArrayList(newFieldSchema("a"), newFieldSchema("c"));
  Table table = newTable("t1", "db1", partitionKeys, newStorageDescriptor(new File("bla"), "col1"));
  List<Partition> partitions = Lists.newArrayList(newPartition(table, "b", "d"));
  statisticsPerPartitionName.put("a=b/c=d", columnStats);

  PartitionsAndStatistics partitionsAndStatistics = new PartitionsAndStatistics(partitionKeys, partitions,
      statisticsPerPartitionName);
  List<String> expectedName = Lists.newArrayList("a=b/c=d");

  assertThat(partitionsAndStatistics.getPartitionNames(), is(expectedName));
  assertThat(partitionsAndStatistics.getPartitions(), is(partitions));
  ColumnStatisticsDesc statsDesc = new ColumnStatisticsDesc(false, "db1", "t1");
  statsDesc.setPartName("a=b/c=d");
  ColumnStatistics expectedStats = new ColumnStatistics(statsDesc, columnStats);
  assertThat(partitionsAndStatistics.getStatisticsForPartition(partitions.get(0)), is(expectedStats));
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 19; Source file: PartitionsAndStatisticsTest.java


Example 13: getPartitions

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
private static List<PartitionValue> getPartitions(Table table, Partition partition) {
  if(partition == null){
    return Collections.emptyList();
  }

  final List<String> partitionValues = partition.getValues();
  final List<PartitionValue> output = Lists.newArrayList();
  final List<FieldSchema> partitionKeys = table.getPartitionKeys();
  for(int i =0; i < partitionKeys.size(); i++){
    PartitionValue value = getPartitionValue(partitionKeys.get(i), partitionValues.get(i));
    if(value != null){
      output.add(value);
    }
  }
  return output;
}
 
Developer: dremio; Project: dremio-oss; Lines of code: 17; Source file: DatasetBuilder.java


Example 14: replicaTableDoesNotExist

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Test
public void replicaTableDoesNotExist() {
  hiveDifferences = HiveDifferences
      .builder(diffListener)
      .comparatorRegistry(comparatorRegistry)
      .source(sourceConfiguration, sourceTable, sourcePartitionIterable)
      .replica(Optional.<Table> absent(), Optional.<PartitionFetcher> absent())
      .checksumFunction(checksumFunction)
      .build();
  hiveDifferences.run();

  InOrder inOrder = inOrder(diffListener);
  inOrder.verify(diffListener).onDiffStart(any(TableAndMetadata.class), any(Optional.class));
  verify(diffListener, never()).onChangedTable(anyList());
  inOrder.verify(diffListener, times(1)).onNewPartition(anyString(), any(Partition.class));
  verify(diffListener, never()).onChangedPartition(anyString(), any(Partition.class), anyList());
  verify(diffListener, never()).onDataChanged(anyString(), any(Partition.class));
  inOrder.verify(diffListener).onDiffEnd();
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 20; Source file: HiveDifferencesTest.java


Example 15: dropPartition

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Test
public void dropPartition() throws Exception {
  String tableName = "my_table";
  HiveMetaStoreClient client = server.newClient();

  try {
    Table table = createPartitionedTable(DATABASE, tableName);

    Partition partition = new Partition();
    partition.setDbName(DATABASE);
    partition.setTableName(tableName);
    partition.setValues(Arrays.asList("1"));
    partition.setSd(new StorageDescriptor(table.getSd()));
    partition.getSd().setLocation(
        String.format("file:%s/%s/%s/partcol=1", server.temporaryFolder.getRoot(), DATABASE, tableName));
    client.add_partition(partition);

    try (Connection connection = DriverManager.getConnection(server.connectionURL());
        Statement statement = connection.createStatement()) {
      String addPartitionHql = String.format("ALTER TABLE %s.%s DROP PARTITION (partcol=1)", DATABASE, tableName);
      statement.execute(addPartitionHql);
    }

    List<Partition> partitions = client.listPartitions(DATABASE, tableName, (short) -1);
    assertThat(partitions.size(), is(0));
  } finally {
    client.close();
  }
}
 
Developer: HotelsDotCom; Project: beeju; Lines of code: 30; Source file: HiveServer2JUnitRuleTest.java


Example 16: onChangedPartition

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Test
public void onChangedPartition() throws Exception {
  Partition partition1 = new Partition(Lists.newArrayList("val1", "val2"), DB, TABLE, 1, 1, null, null);
  Partition partition2 = new Partition(Lists.newArrayList("val11", "val22"), DB, TABLE, 1, 1, null, null);
  listener.onDiffStart(source, replica);
  listener.onChangedPartition("p1", partition1, differences);
  listener.onChangedPartition("p2", partition2, differences);
  assertThat(listener.getPartitionSpecFilter(), is("(p1='val1' AND p2=val2) OR (p1='val11' AND p2=val22)"));
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 10; Source file: PartitionSpecCreatingDiffListenerTest.java


Example 17: replicaPartitionHasChanged

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Test
public void replicaPartitionHasChanged() throws Exception {
  Partition replicaPartition1 = catalog.client().getPartition(DATABASE, REPLICA_TABLE, "part=1");
  replicaPartition1.getSd().getCols().add(BAZ_COL);
  catalog.client().alter_partition(DATABASE, REPLICA_TABLE, replicaPartition1);

  Table sourceTable = catalog.client().getTable(DATABASE, SOURCE_TABLE);
  Table replicaTable = catalog.client().getTable(DATABASE, REPLICA_TABLE);

  HiveDifferences
      .builder(diffListener)
      .comparatorRegistry(comparatorRegistry)
      .source(configuration, sourceTable, new PartitionIterator(catalog.client(), sourceTable, PARTITION_BATCH_SIZE))
      .replica(Optional.of(replicaTable),
          Optional.of(new BufferedPartitionFetcher(catalog.client(), replicaTable, PARTITION_BATCH_SIZE)))
      .checksumFunction(checksumFunction)
      .build()
      .run();
  verify(diffListener, never()).onChangedTable(anyList());
  verify(diffListener, never()).onNewPartition(anyString(), any(Partition.class));
  verify(diffListener, times(1)).onChangedPartition("part=1",
      catalog.client().getPartition(DATABASE, SOURCE_TABLE, "part=1"),
      Arrays.<Diff<Object, Object>> asList(new BaseDiff<Object, Object>(
          "Collection partition.sd.cols of class java.util.ArrayList has different size: left.size()=2 and right.size()=3",
          Arrays.asList(FOO_COL, BAR_COL), Arrays.asList(FOO_COL, BAR_COL, BAZ_COL))));
  verify(diffListener, never()).onDataChanged(anyString(), any(Partition.class));
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 28; Source file: HiveDifferencesIntegrationTest.java


Example 18: run

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Override
public void run() throws CircusTrainException {
  out.println(String.format("Source catalog:        %s", source.getName()));
  out.println(String.format("Source MetaStore URIs: %s", source.getMetaStoreUris()));
  out.println(String.format("Source table:          %s", Warehouse.getQualifiedName(sourceTable)));
  out.println(String.format("Partition expression:  %s", partitionFilter));

  String parsedPartitionFilter = partitionPredicate.getPartitionPredicate();
  if (!Objects.equals(partitionFilter, parsedPartitionFilter)) {
    LOG.info("Evaluated expression to: {}", parsedPartitionFilter);
  }
  try {
    LOG.info("Executing filter with limit {} on: {}:{} ({})", partitionLimit, source.getName(),
        Warehouse.getQualifiedName(sourceTable), source.getMetaStoreUris());
    PartitionsAndStatistics partitions = source.getPartitions(sourceTable, parsedPartitionFilter, partitionLimit);
    LOG.info("Retrieved {} partition(s):", partitions.getPartitions().size());
    SortedSet<Partition> sorted = new TreeSet<>(PARTITION_COMPARATOR);
    sorted.addAll(partitions.getPartitions());
    List<List<String>> vals = new ArrayList<>();
    for (Partition partition : sorted) {
      vals.add(partition.getValues());
      LOG.info("{}", partition.getValues());
    }
    out.println(String.format("Partition filter:      %s", parsedPartitionFilter));
    out.println(String.format("Partition limit:       %s", partitionLimit));
    out.println(String.format("Partition(s) fetched:  %s", vals));
  } catch (TException e) {
    throw new CircusTrainException("Could not fetch partitions for filter: '" + parsedPartitionFilter + "'.", e);
  }
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 31; Source file: FilterGeneratorImpl.java


Example 19: fetch

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
@Override
public Partition fetch(String partitionName) {
  int partitionPosition = partitionNames.indexOf(partitionName);
  if (partitionPosition < 0) {
    throw new PartitionNotFoundException("Unknown partition " + partitionName);
  }

  if (!buffer.containsKey(partitionName)) {
    bufferPartitions(partitionPosition);
  }

  return buffer.get(partitionName);
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 14; Source file: BufferedPartitionFetcher.java


Example 20: newPartition

import org.apache.hadoop.hive.metastore.api.Partition; // import the required package/class
public static Partition newPartition() {
  Partition partition = new Partition();
  StorageDescriptor sd = new StorageDescriptor();
  SerDeInfo info = new SerDeInfo();
  info.setParameters(new HashMap<String, String>());
  sd.setSerdeInfo(info);
  partition.setSd(sd);
  partition.setParameters(new HashMap<String, String>());
  return partition;
}
 
Developer: HotelsDotCom; Project: circus-train; Lines of code: 11; Source file: TestUtils.java



Note: The org.apache.hadoop.hive.metastore.api.Partition examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers, and the copyright of the source code remains with the original authors; consult each project's license before distributing or reusing the code. Do not reproduce this article without permission.

