
Java INodeDirectory Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.namenode.INodeDirectory. If you are wondering what INodeDirectory is for, or how to use it in practice, the curated examples below may help.



The INodeDirectory class belongs to the org.apache.hadoop.hdfs.server.namenode package. Twenty code examples of the class are shown below, sorted by popularity by default.

Example 1: checkNestedSnapshottable

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
private void checkNestedSnapshottable(INodeDirectory dir, String path)
    throws SnapshotException {
  if (allowNestedSnapshots) {
    return;
  }

  for(INodeDirectory s : snapshottables.values()) {
    if (s.isAncestorDirectory(dir)) {
      throw new SnapshotException(
          "Nested snapshottable directories not allowed: path=" + path
          + ", the subdirectory " + s.getFullPathName()
          + " is already a snapshottable directory.");
    }
    if (dir.isAncestorDirectory(s)) {
      throw new SnapshotException(
          "Nested snapshottable directories not allowed: path=" + path
          + ", the ancestor " + s.getFullPathName()
          + " is already a snapshottable directory.");
    }
  }
}
 
Developer: naver | Project: hadoop | Lines: 22 | Source: SnapshotManager.java
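The loop above rejects nesting in both directions by calling isAncestorDirectory against every known snapshottable root. A minimal, self-contained sketch of that parent-pointer walk (the Dir class below is purely illustrative, not Hadoop's API):

```java
// Illustrative model of the ancestor check used above; not Hadoop's INodeDirectory.
final class Dir {
    final Dir parent;
    final String name;

    Dir(Dir parent, String name) {
        this.parent = parent;
        this.name = name;
    }

    /** Walk parent pointers; true if this directory is a proper ancestor of other. */
    boolean isAncestorDirectory(Dir other) {
        for (Dir p = other.parent; p != null; p = p.parent) {
            if (p == this) {
                return true;
            }
        }
        return false;
    }
}
```

Because the check only follows parent links, it is O(depth) per snapshottable root; SnapshotManager runs it once per candidate directory, not on every namespace operation.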


Example 2: setSnapshottable

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Set the given directory as a snapshottable directory.
 * If the path is already a snapshottable directory, update the quota.
 */
public void setSnapshottable(final String path, boolean checkNestedSnapshottable)
    throws IOException {
  final INodesInPath iip = fsdir.getINodesInPath4Write(path);
  final INodeDirectory d = INodeDirectory.valueOf(iip.getLastINode(), path);
  if (checkNestedSnapshottable) {
    checkNestedSnapshottable(d, path);
  }

  if (d.isSnapshottable()) {
    //The directory is already a snapshottable directory.
    d.setSnapshotQuota(DirectorySnapshottableFeature.SNAPSHOT_LIMIT);
  } else {
    d.addSnapshottableFeature();
  }
  addSnapshottable(d);
}
 
Developer: naver | Project: hadoop | Lines: 21 | Source: SnapshotManager.java


Example 3: resetSnapshottable

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Set the given snapshottable directory to non-snapshottable.
 * 
 * @throws SnapshotException if there are snapshots in the directory.
 */
public void resetSnapshottable(final String path) throws IOException {
  final INodesInPath iip = fsdir.getINodesInPath4Write(path);
  final INodeDirectory d = INodeDirectory.valueOf(iip.getLastINode(), path);
  DirectorySnapshottableFeature sf = d.getDirectorySnapshottableFeature();
  if (sf == null) {
    // the directory is already non-snapshottable
    return;
  }
  if (sf.getNumSnapshots() > 0) {
    throw new SnapshotException("The directory " + path + " has snapshot(s). "
        + "Please redo the operation after removing all the snapshots.");
  }

  if (d == fsdir.getRoot()) {
    d.setSnapshotQuota(0);
  } else {
    d.removeSnapshottableFeature();
  }
  removeSnapshottable(d);
}
 
Developer: naver | Project: hadoop | Lines: 26 | Source: SnapshotManager.java


Example 4: createSnapshot

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Create a snapshot of the given path.
 * It is assumed that the caller will perform synchronization.
 *
 * @param iip the INodes resolved from the snapshottable directory's path
 * @param snapshotName
 *          The name of the snapshot.
 * @throws IOException
 *           Throw IOException when 1) the given path does not lead to an
 *           existing snapshottable directory, and/or 2) there exists a
 *           snapshot with the given name for the directory, and/or 3)
 *           snapshot number exceeds quota
 */
public String createSnapshot(final INodesInPath iip, String snapshotRoot,
    String snapshotName) throws IOException {
  INodeDirectory srcRoot = getSnapshottableRoot(iip);

  if (snapshotCounter == getMaxSnapshotID()) {
    // We have reached the maximum allowable snapshot ID and since we don't
    // handle rollover we will fail all subsequent snapshot creation
    // requests.
    //
    throw new SnapshotException(
        "Failed to create the snapshot. The FileSystem has run out of " +
        "snapshot IDs and ID rollover is not supported.");
  }

  srcRoot.addSnapshot(snapshotCounter, snapshotName);
    
  //create success, update id
  snapshotCounter++;
  numSnapshots.getAndIncrement();
  return Snapshot.getSnapshotPath(snapshotRoot, snapshotName);
}
 
Developer: naver | Project: hadoop | Lines: 35 | Source: SnapshotManager.java
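The guard on snapshotCounter above fails snapshot creation once the maximum ID is reached, because rollover is not handled. The pattern reduces to a monotonic counter with a hard ceiling; a hedged sketch (class and method names here are ours, not Hadoop's):

```java
// Simplified counter-with-ceiling pattern from createSnapshot above; names are illustrative.
final class SnapshotIdAllocator {
    private final int maxId;
    private int counter = 0;

    SnapshotIdAllocator(int maxId) {
        this.maxId = maxId;
    }

    /** Returns the next id, or throws once the ceiling is reached (no rollover). */
    int allocate() {
        if (counter == maxId) {
            throw new IllegalStateException(
                "Out of snapshot IDs; rollover is not supported.");
        }
        return counter++;
    }
}
```

Checking before incrementing means the allocator fails cleanly and permanently, which matches the "fail all subsequent requests" behavior described in the comment.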


Example 5: diff

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Compute the difference between two snapshots of a directory, or between a
 * snapshot of the directory and its current tree.
 */
public SnapshotDiffReport diff(final INodesInPath iip,
    final String snapshotRootPath, final String from,
    final String to) throws IOException {
  // Find the source root directory path where the snapshots were taken.
  // All the check for path has been included in the valueOf method.
  final INodeDirectory snapshotRoot = getSnapshottableRoot(iip);

  if ((from == null || from.isEmpty())
      && (to == null || to.isEmpty())) {
    // both fromSnapshot and toSnapshot indicate the current tree
    return new SnapshotDiffReport(snapshotRootPath, from, to,
        Collections.<DiffReportEntry> emptyList());
  }
  final SnapshotDiffInfo diffs = snapshotRoot
      .getDirectorySnapshottableFeature().computeDiff(snapshotRoot, from, to);
  return diffs != null ? diffs.generateReport() : new SnapshotDiffReport(
      snapshotRootPath, from, to, Collections.<DiffReportEntry> emptyList());
}
 
Developer: naver | Project: hadoop | Lines: 23 | Source: SnapshotManager.java


Example 6: Root

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
Root(INodeDirectory other) {
  // Always preserve ACL, XAttr.
  super(other, false, Lists.newArrayList(
    Iterables.filter(Arrays.asList(other.getFeatures()), new Predicate<Feature>() {

      @Override
      public boolean apply(Feature input) {
        if (AclFeature.class.isInstance(input) 
            || XAttrFeature.class.isInstance(input)) {
          return true;
        }
        return false;
      }
      
    }))
    .toArray(new Feature[0]));
}
 
Developer: naver | Project: hadoop | Lines: 18 | Source: Snapshot.java


Example 7: loadCreated

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Load a node stored in the created list from fsimage.
 * @param createdNodeName The name of the created node.
 * @param parent The directory that the created list belongs to.
 * @return The created node.
 */
public static INode loadCreated(byte[] createdNodeName,
    INodeDirectory parent) throws IOException {
  // the INode in the created list should be a reference to another INode
  // in posterior SnapshotDiffs or one of the current children
  for (DirectoryDiff postDiff : parent.getDiffs()) {
    final INode d = postDiff.getChildrenDiff().search(ListType.DELETED,
        createdNodeName);
    if (d != null) {
      return d;
    } // else go to the next SnapshotDiff
  } 
  // use the current child
  INode currentChild = parent.getChild(createdNodeName,
      Snapshot.CURRENT_STATE_ID);
  if (currentChild == null) {
    throw new IOException("Cannot find an INode associated with the INode "
        + DFSUtil.bytes2String(createdNodeName)
        + " in created list while loading FSImage.");
  }
  return currentChild;
}
 
Developer: naver | Project: hadoop | Lines: 28 | Source: SnapshotFSImageFormat.java


Example 8: loadDeletedList

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Load the deleted list from the fsimage.
 * 
 * @param parent The directory that the deleted list belongs to.
 * @param createdList The created list associated with the deleted list in 
 *                    the same Diff.
 * @param in The {@link DataInput} to read.
 * @param loader The {@link Loader} instance.
 * @return The deleted list.
 */
private static List<INode> loadDeletedList(INodeDirectory parent,
    List<INode> createdList, DataInput in, FSImageFormat.Loader loader)
    throws IOException {
  int deletedSize = in.readInt();
  List<INode> deletedList = new ArrayList<INode>(deletedSize);
  for (int i = 0; i < deletedSize; i++) {
    final INode deleted = loader.loadINodeWithLocalName(true, in, true);
    deletedList.add(deleted);
    // set parent: the parent field of an INode in the deleted list is not 
    // useful, but set the parent here to be consistent with the original 
    // fsdir tree.
    deleted.setParent(parent);
    if (deleted.isFile()) {
      loader.updateBlocksMap(deleted.asFile());
    }
  }
  return deletedList;
}
 
Developer: naver | Project: hadoop | Lines: 29 | Source: SnapshotFSImageFormat.java
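The deleted list is stored in the fsimage as a length-prefixed sequence: an int count followed by that many serialized inodes. Stripped of Hadoop's types, the read loop looks like this (reading strings instead of inodes, purely for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Illustrative length-prefixed list I/O, mirroring loadDeletedList's read loop.
final class PrefixedList {
    static void write(DataOutputStream out, List<String> items) throws IOException {
        out.writeInt(items.size());          // length prefix, as in the fsimage layout
        for (String s : items) {
            out.writeUTF(s);                 // stand-in for a serialized INode
        }
    }

    static List<String> read(DataInput in) throws IOException {
        int size = in.readInt();
        List<String> list = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            list.add(in.readUTF());
        }
        return list;
    }
}
```

Pre-sizing the ArrayList with the prefix, as loadDeletedList does, avoids resizing during the loop; the per-element fixups (setParent, updateBlocksMap) are Hadoop-specific and omitted here.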


Example 9: loadSnapshotList

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Load snapshots and snapshotQuota for a Snapshottable directory.
 *
 * @param snapshottableParent
 *          The snapshottable directory for loading.
 * @param numSnapshots
 *          The number of snapshots that the directory has.
 * @param loader
 *          The loader
 */
public static void loadSnapshotList(INodeDirectory snapshottableParent,
    int numSnapshots, DataInput in, FSImageFormat.Loader loader)
    throws IOException {
  DirectorySnapshottableFeature sf = snapshottableParent
      .getDirectorySnapshottableFeature();
  Preconditions.checkArgument(sf != null);
  for (int i = 0; i < numSnapshots; i++) {
    // read snapshots
    final Snapshot s = loader.getSnapshot(in);
    s.getRoot().setParent(snapshottableParent);
    sf.addSnapshot(s);
  }
  int snapshotQuota = in.readInt();
  snapshottableParent.setSnapshotQuota(snapshotQuota);
}
 
Developer: naver | Project: hadoop | Lines: 26 | Source: SnapshotFSImageFormat.java


Example 10: loadDirectoryDiff

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Load {@link DirectoryDiff} from fsimage.
 * @param parent The directory that the SnapshotDiff belongs to.
 * @param in The {@link DataInput} instance to read.
 * @param loader The {@link Loader} instance that this loading procedure is 
 *               using.
 * @return A {@link DirectoryDiff}.
 */
private static DirectoryDiff loadDirectoryDiff(INodeDirectory parent,
    DataInput in, FSImageFormat.Loader loader) throws IOException {
  // 1. Read the full path of the Snapshot root to identify the Snapshot
  final Snapshot snapshot = loader.getSnapshot(in);

  // 2. Load DirectoryDiff#childrenSize
  int childrenSize = in.readInt();
  
  // 3. Load DirectoryDiff#snapshotINode 
  INodeDirectoryAttributes snapshotINode = loadSnapshotINodeInDirectoryDiff(
      snapshot, in, loader);
  
  // 4. Load the created list in SnapshotDiff#Diff
  List<INode> createdList = loadCreatedList(parent, in);
  
  // 5. Load the deleted list in SnapshotDiff#Diff
  List<INode> deletedList = loadDeletedList(parent, createdList, in, loader);
  
  // 6. Compose the SnapshotDiff
  List<DirectoryDiff> diffs = parent.getDiffs().asList();
  DirectoryDiff sdiff = new DirectoryDiff(snapshot.getId(), snapshotINode,
      diffs.isEmpty() ? null : diffs.get(0), childrenSize, createdList,
      deletedList, snapshotINode == snapshot.getRoot());
  return sdiff;
}
 
Developer: naver | Project: hadoop | Lines: 34 | Source: SnapshotFSImageFormat.java


Example 11: destroyCreatedList

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/** clear the created list */
private QuotaCounts destroyCreatedList(
    final BlockStoragePolicySuite bsps,
    final INodeDirectory currentINode,
    final BlocksMapUpdateInfo collectedBlocks,
    final List<INode> removedINodes) {
  QuotaCounts counts = new QuotaCounts.Builder().build();
  final List<INode> createdList = getList(ListType.CREATED);
  for (INode c : createdList) {
    c.computeQuotaUsage(bsps, counts, true);
    c.destroyAndCollectBlocks(bsps, collectedBlocks, removedINodes);
    // c should be contained in the children list, remove it
    currentINode.removeChild(c);
  }
  createdList.clear();
  return counts;
}
 
Developer: naver | Project: hadoop | Lines: 18 | Source: DirectoryWithSnapshotFeature.java


Example 12: combinePosteriorAndCollectBlocks

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
@Override
QuotaCounts combinePosteriorAndCollectBlocks(
    final BlockStoragePolicySuite bsps,
    final INodeDirectory currentDir, final DirectoryDiff posterior,
    final BlocksMapUpdateInfo collectedBlocks,
    final List<INode> removedINodes) {
  final QuotaCounts counts = new QuotaCounts.Builder().build();
  diff.combinePosterior(posterior.diff, new Diff.Processor<INode>() {
    /** Collect blocks for deleted files. */
    @Override
    public void process(INode inode) {
      if (inode != null) {
        inode.computeQuotaUsage(bsps, counts, false);
        inode.destroyAndCollectBlocks(bsps, collectedBlocks, removedINodes);
      }
    }
  });
  return counts;
}
 
Developer: naver | Project: hadoop | Lines: 20 | Source: DirectoryWithSnapshotFeature.java


Example 13: getChild

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/** @return the child with the given name. */
INode getChild(byte[] name, boolean checkPosterior,
    INodeDirectory currentDir) {
  for(DirectoryDiff d = this; ; d = d.getPosterior()) {
    final Container<INode> returned = d.diff.accessPrevious(name);
    if (returned != null) {
      // the diff is able to determine the inode
      return returned.getElement();
    } else if (!checkPosterior) {
      // Since checkPosterior is false, return null, i.e. not found.
      return null;
    } else if (d.getPosterior() == null) {
      // no more posterior diff, get from current inode.
      return currentDir.getChild(name, Snapshot.CURRENT_STATE_ID);
    }
  }
}
 
Developer: naver | Project: hadoop | Lines: 18 | Source: DirectoryWithSnapshotFeature.java
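getChild above falls through a chain of posterior (newer) diffs until one of them can answer, and finally consults the current directory state. A self-contained sketch of that chained-lookup pattern (the classes and names here are illustrative, and the real accessPrevious can also record a deliberate "not present" answer, which this simplification omits):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative chain-of-diffs lookup mirroring getChild's fall-through logic.
final class DiffChain {
    private final Map<String, String> recorded = new HashMap<>(); // name -> value at this diff
    private DiffChain posterior;                                  // next (newer) diff, or null

    void record(String name, String value) { recorded.put(name, value); }
    void setPosterior(DiffChain p) { this.posterior = p; }

    /** Search this diff, then each posterior diff, then the current state. */
    String getChild(String name, Map<String, String> currentState) {
        for (DiffChain d = this; ; d = d.posterior) {
            String v = d.recorded.get(name);
            if (v != null) {
                return v;                      // this diff can determine the value
            }
            if (d.posterior == null) {
                return currentState.get(name); // no more posterior diffs: use current state
            }
        }
    }
}
```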


Example 14: removeChild

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Remove an inode from parent's children list. The caller of this method
 * needs to make sure that parent is in the given snapshot "latest".
 */
public boolean removeChild(INodeDirectory parent, INode child,
    int latestSnapshotId) {
  // For a directory that is not a renamed node, if isInLatestSnapshot returns
  // false, the directory is not in the latest snapshot, thus we do not need
  // to record the removed child in any snapshot.
  // For a directory that was moved/renamed, note that if the directory is in
  // any of the previous snapshots, we will create a reference node for the
  // directory while rename, and isInLatestSnapshot will return true in that
  // scenario (if all previous snapshots have been deleted, isInLatestSnapshot
  // still returns false). Thus if isInLatestSnapshot returns false, the
  // directory node cannot be in any snapshot (not in current tree, nor in
  // previous src tree). Thus we do not need to record the removed child in
  // any snapshot.
  ChildrenDiff diff = diffs.checkAndAddLatestSnapshotDiff(latestSnapshotId,
      parent).diff;
  UndoInfo<INode> undoInfo = diff.delete(child);

  final boolean removed = parent.removeChild(child);
  if (!removed && undoInfo != null) {
    // remove failed, undo
    diff.undoDelete(child, undoInfo);
  }
  return removed;
}
 
Developer: naver | Project: hadoop | Lines: 29 | Source: DirectoryWithSnapshotFeature.java
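removeChild above records the deletion in the snapshot diff first, then rolls that record back if the actual removal fails. This record-then-undo pattern can be sketched independently of Hadoop's types (everything below is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative record-then-undo deletion, mirroring removeChild above.
final class UndoableDelete {
    private final List<String> deletedLog = new ArrayList<>();

    /** Record the intended deletion; return an undo token (here: the log index). */
    int delete(String child) {
        deletedLog.add(child);
        return deletedLog.size() - 1;
    }

    /** Roll back a recorded deletion when the real removal failed. */
    void undoDelete(int undoToken) {
        deletedLog.remove(undoToken);
    }

    /** Try the real removal; keep the log entry only on success. */
    boolean removeChild(List<String> children, String child) {
        int token = delete(child);
        boolean removed = children.remove(child);
        if (!removed) {
            undoDelete(token);   // remove failed, undo the recorded deletion
        }
        return removed;
    }

    List<String> log() { return deletedLog; }
}
```

Recording before removing keeps the diff consistent even if removal throws or fails partway, which is why the Hadoop code captures an UndoInfo before touching the children list.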


Example 15: saveChild2Snapshot

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/** Used to record the modification of a symlink node */
public INode saveChild2Snapshot(INodeDirectory currentINode,
    final INode child, final int latestSnapshotId, final INode snapshotCopy) {
  Preconditions.checkArgument(!child.isDirectory(),
      "child is a directory, child=%s", child);
  Preconditions.checkArgument(latestSnapshotId != Snapshot.CURRENT_STATE_ID);
  
  final DirectoryDiff diff = diffs.checkAndAddLatestSnapshotDiff(
      latestSnapshotId, currentINode);
  if (diff.getChild(child.getLocalNameBytes(), false, currentINode) != null) {
    // it was already saved in the latest snapshot earlier.  
    return child;
  }

  diff.diff.modify(snapshotCopy, child);
  return child;
}
 
Developer: naver | Project: hadoop | Lines: 18 | Source: DirectoryWithSnapshotFeature.java


Example 16: loadSnapshotSection

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Load the snapshots section from fsimage. Also add snapshottable feature
 * to snapshottable directories.
 */
public void loadSnapshotSection(InputStream in) throws IOException {
  SnapshotManager sm = fsn.getSnapshotManager();
  SnapshotSection section = SnapshotSection.parseDelimitedFrom(in);
  int snum = section.getNumSnapshots();
  sm.setNumSnapshots(snum);
  sm.setSnapshotCounter(section.getSnapshotCounter());
  for (long sdirId : section.getSnapshottableDirList()) {
    INodeDirectory dir = fsDir.getInode(sdirId).asDirectory();
    if (!dir.isSnapshottable()) {
      dir.addSnapshottableFeature();
    } else {
      // dir is root, and admin set root to snapshottable before
      dir.setSnapshotQuota(DirectorySnapshottableFeature.SNAPSHOT_LIMIT);
    }
    sm.addSnapshottable(dir);
  }
  loadSnapshots(in, snum);
}
 
Developer: naver | Project: hadoop | Lines: 23 | Source: FSImageFormatPBSnapshot.java


Example 17: checkQuotaUsageComputation

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
private void checkQuotaUsageComputation(final Path dirPath,
    final long expectedNs, final long expectedDs) throws IOException {
  INodeDirectory dirNode = getDir(fsdir, dirPath);
  assertTrue(dirNode.isQuotaSet());
  QuotaCounts q = dirNode.getDirectoryWithQuotaFeature().getSpaceConsumed();
  assertEquals(dirNode.dumpTreeRecursively().toString(), expectedNs,
      q.getNameSpace());
  assertEquals(dirNode.dumpTreeRecursively().toString(), expectedDs,
      q.getStorageSpace());
  QuotaCounts counts = new QuotaCounts.Builder().build();
  dirNode.computeQuotaUsage(fsdir.getBlockStoragePolicySuite(), counts, false);
  assertEquals(dirNode.dumpTreeRecursively().toString(), expectedNs,
      counts.getNameSpace());
  assertEquals(dirNode.dumpTreeRecursively().toString(), expectedDs,
      counts.getStorageSpace());
}
 
Developer: naver | Project: hadoop | Lines: 17 | Source: TestSnapshotDeletion.java


Example 18: checkSnapshotList

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Check the correctness of snapshot list within snapshottable dir
 */
private void checkSnapshotList(INodeDirectory srcRoot,
    String[] sortedNames, String[] names) {
  assertTrue(srcRoot.isSnapshottable());
  ReadOnlyList<Snapshot> listByName = srcRoot
      .getDirectorySnapshottableFeature().getSnapshotList();
  assertEquals(sortedNames.length, listByName.size());
  for (int i = 0; i < listByName.size(); i++) {
    assertEquals(sortedNames[i], listByName.get(i).getRoot().getLocalName());
  }
  List<DirectoryDiff> listByTime = srcRoot.getDiffs().asList();
  assertEquals(names.length, listByTime.size());
  for (int i = 0; i < listByTime.size(); i++) {
    Snapshot s = srcRoot.getDirectorySnapshottableFeature().getSnapshotById(
        listByTime.get(i).getSnapshotId());
    assertEquals(names[i], s.getRoot().getLocalName());
  }
}
 
Developer: naver | Project: hadoop | Lines: 21 | Source: TestSnapshotRename.java


Example 19: testSnapshotList

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Rename snapshot(s), and check the correctness of the snapshot list within
 * {@link INodeDirectorySnapshottable}
 */
@Test (timeout=60000)
public void testSnapshotList() throws Exception {
  DFSTestUtil.createFile(hdfs, file1, BLOCKSIZE, REPLICATION, seed);
  // Create three snapshots for sub1
  SnapshotTestHelper.createSnapshot(hdfs, sub1, "s1");
  SnapshotTestHelper.createSnapshot(hdfs, sub1, "s2");
  SnapshotTestHelper.createSnapshot(hdfs, sub1, "s3");
  
  // Rename s3 to s22
  hdfs.renameSnapshot(sub1, "s3", "s22");
  // Check the snapshots list
  INodeDirectory srcRoot = fsdir.getINode(sub1.toString()).asDirectory();
  checkSnapshotList(srcRoot, new String[] { "s1", "s2", "s22" },
      new String[] { "s1", "s2", "s22" });
  
  // Rename s1 to s4
  hdfs.renameSnapshot(sub1, "s1", "s4");
  checkSnapshotList(srcRoot, new String[] { "s2", "s22", "s4" },
      new String[] { "s4", "s2", "s22" });
  
  // Rename s22 to s0
  hdfs.renameSnapshot(sub1, "s22", "s0");
  checkSnapshotList(srcRoot, new String[] { "s0", "s2", "s4" },
      new String[] { "s4", "s2", "s0" });
}
 
Developer: naver | Project: hadoop | Lines: 30 | Source: TestSnapshotRename.java


Example 20: testRenameFromNonSDir2SDir

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory; // import the required package/class
/**
 * Test rename from a non-snapshottable dir to a snapshottable dir
 */
@Test (timeout=60000)
public void testRenameFromNonSDir2SDir() throws Exception {
  final Path sdir1 = new Path("/dir1");
  final Path sdir2 = new Path("/dir2");
  hdfs.mkdirs(sdir1);
  hdfs.mkdirs(sdir2);
  final Path foo = new Path(sdir1, "foo");
  final Path bar = new Path(foo, "bar");
  DFSTestUtil.createFile(hdfs, bar, BLOCKSIZE, REPL, SEED);
  
  SnapshotTestHelper.createSnapshot(hdfs, sdir2, snap1);
  
  final Path newfoo = new Path(sdir2, "foo");
  hdfs.rename(foo, newfoo);
  
  INode fooNode = fsdir.getINode4Write(newfoo.toString());
  assertTrue(fooNode instanceof INodeDirectory);
}
 
Developer: naver | Project: hadoop | Lines: 22 | Source: TestRenameWithSnapshots.java



Note: the org.apache.hadoop.hdfs.server.namenode.INodeDirectory examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. The source code is copyrighted by its original authors; consult each project's license before redistributing or reusing it. Do not reproduce this article without permission.

