
Java BlockStoragePolicy Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.BlockStoragePolicy. If you are unsure what BlockStoragePolicy is for or how to use it, the curated examples below should help.



The BlockStoragePolicy class belongs to the org.apache.hadoop.hdfs.protocol package. Twenty representative code examples are shown below, sorted by popularity by default.

Example 1: verifyQuotaForTruncate

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
private void verifyQuotaForTruncate(INodesInPath iip, INodeFile file,
    long newLength, QuotaCounts delta) throws QuotaExceededException {
  if (!getFSNamesystem().isImageLoaded() || shouldSkipQuotaChecks()) {
    // Do not check quota if edit log is still being processed
    return;
  }
  final long diff = file.computeQuotaDeltaForTruncate(newLength);
  final short repl = file.getBlockReplication();
  delta.addStorageSpace(diff * repl);
  final BlockStoragePolicy policy = getBlockStoragePolicySuite()
      .getPolicy(file.getStoragePolicyID());
  List<StorageType> types = policy.chooseStorageTypes(repl);
  for (StorageType t : types) {
    if (t.supportTypeQuota()) {
      delta.addTypeSpace(t, diff);
    }
  }
  if (diff > 0) {
    readLock();
    try {
      verifyQuota(iip, iip.length() - 1, delta, null);
    } finally {
      readUnlock();
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 27, Source file: FSDirectory.java

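The quota bookkeeping above follows a recurring pattern in these examples: the raw byte delta is multiplied by the replication factor for the aggregate storage-space quota, while the per-type quota is charged the raw delta once per storage type chosen by the policy. Below is a minimal, self-contained Java sketch of that arithmetic; the nested StorageType enum and the chooseStorageTypes stub are simplified stand-ins for the Hadoop classes, not the real API.

```java
import java.util.ArrayList;
import java.util.List;

public class QuotaDeltaSketch {
    // Simplified stand-in for org.apache.hadoop.fs.StorageType:
    // RAM_DISK does not participate in per-type quota accounting.
    enum StorageType {
        DISK(true), SSD(true), ARCHIVE(true), RAM_DISK(false);
        final boolean quota;
        StorageType(boolean quota) { this.quota = quota; }
        boolean supportTypeQuota() { return quota; }
    }

    // Stand-in for BlockStoragePolicy#chooseStorageTypes(repl):
    // a HOT-like policy stores every replica on DISK.
    static List<StorageType> chooseStorageTypes(short replication) {
        List<StorageType> types = new ArrayList<>();
        for (int i = 0; i < replication; i++) {
            types.add(StorageType.DISK);
        }
        return types;
    }

    /** Aggregate storage-space delta: byte diff times replication. */
    static long storageSpaceDelta(long diff, short repl) {
        return diff * repl;
    }

    /** Per-type delta for type t: diff charged once per quota-bearing replica of t. */
    static long typeSpaceDelta(long diff, List<StorageType> types, StorageType t) {
        long total = 0;
        for (StorageType chosen : types) {
            if (chosen == t && chosen.supportTypeQuota()) {
                total += diff;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        short repl = 3;
        long diff = 1024; // e.g. bytes affected by a truncate
        List<StorageType> types = chooseStorageTypes(repl);
        System.out.println(storageSpaceDelta(diff, repl));                 // 3072
        System.out.println(typeSpaceDelta(diff, types, StorageType.DISK)); // 3072
    }
}
```

Because the chosen-types list contains one entry per replica, charging diff per matching entry yields the same total as the `for (StorageType t : types)` loop in the original code.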

Example 2: computeQuotaDeltaForUCBlock

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
/** Compute quota change for converting a complete block to a UC block */
private QuotaCounts computeQuotaDeltaForUCBlock(INodeFile file) {
  final QuotaCounts delta = new QuotaCounts.Builder().build();
  final BlockInfoContiguous lastBlock = file.getLastBlock();
  if (lastBlock != null) {
    final long diff = file.getPreferredBlockSize() - lastBlock.getNumBytes();
    final short repl = file.getBlockReplication();
    delta.addStorageSpace(diff * repl);
    final BlockStoragePolicy policy = dir.getBlockStoragePolicySuite()
        .getPolicy(file.getStoragePolicyID());
    List<StorageType> types = policy.chooseStorageTypes(repl);
    for (StorageType t : types) {
      if (t.supportTypeQuota()) {
        delta.addTypeSpace(t, diff);
      }
    }
  }
  return delta;
}
 
Developer: naver, Project: hadoop, Lines: 20, Source file: FSNamesystem.java


Example 3: chooseTarget4NewBlock

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
/**
 * Choose target datanodes for creating a new block.
 * 
 * @throws IOException
 *           if the number of targets < minimum replication.
 * @see BlockPlacementPolicy#chooseTarget(String, int, Node,
 *      Set, long, List, BlockStoragePolicy)
 */
public DatanodeStorageInfo[] chooseTarget4NewBlock(final String src,
    final int numOfReplicas, final Node client,
    final Set<Node> excludedNodes,
    final long blocksize,
    final List<String> favoredNodes,
    final byte storagePolicyID) throws IOException {
  List<DatanodeDescriptor> favoredDatanodeDescriptors = 
      getDatanodeDescriptors(favoredNodes);
  final BlockStoragePolicy storagePolicy = storagePolicySuite.getPolicy(storagePolicyID);
  final DatanodeStorageInfo[] targets = blockplacement.chooseTarget(src,
      numOfReplicas, client, excludedNodes, blocksize, 
      favoredDatanodeDescriptors, storagePolicy);
  if (targets.length < minReplication) {
    throw new IOException("File " + src + " could only be replicated to "
        + targets.length + " nodes instead of minReplication (="
        + minReplication + ").  There are "
        + getDatanodeManager().getNetworkTopology().getNumOfLeaves()
        + " datanode(s) running and "
        + (excludedNodes == null? "no": excludedNodes.size())
        + " node(s) are excluded in this operation.");
  }
  return targets;
}
 
Developer: naver, Project: hadoop, Lines: 32, Source file: BlockManager.java


Example 4: convert

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
public static BlockStoragePolicyProto convert(BlockStoragePolicy policy) {
  BlockStoragePolicyProto.Builder builder = BlockStoragePolicyProto
      .newBuilder().setPolicyId(policy.getId()).setName(policy.getName());
  // creation storage types
  StorageTypesProto creationProto = convert(policy.getStorageTypes());
  Preconditions.checkArgument(creationProto != null);
  builder.setCreationPolicy(creationProto);
  // creation fallback
  StorageTypesProto creationFallbackProto = convert(
      policy.getCreationFallbacks());
  if (creationFallbackProto != null) {
    builder.setCreationFallbackPolicy(creationFallbackProto);
  }
  // replication fallback
  StorageTypesProto replicationFallbackProto = convert(
      policy.getReplicationFallbacks());
  if (replicationFallbackProto != null) {
    builder.setReplicationFallbackPolicy(replicationFallbackProto);
  }
  return builder.build();
}
 
Developer: naver, Project: hadoop, Lines: 22, Source file: PBHelper.java


Example 5: getStoragePolicies

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
@Override
public GetStoragePoliciesResponseProto getStoragePolicies(
    RpcController controller, GetStoragePoliciesRequestProto request)
    throws ServiceException {
  try {
    BlockStoragePolicy[] policies = server.getStoragePolicies();
    GetStoragePoliciesResponseProto.Builder builder =
        GetStoragePoliciesResponseProto.newBuilder();
    if (policies == null) {
      return builder.build();
    }
    for (BlockStoragePolicy policy : policies) {
      builder.addPolicies(PBHelper.convert(policy));
    }
    return builder.build();
  } catch (IOException e) {
    throw new ServiceException(e);
  }
}
 
Developer: naver, Project: hadoop, Lines: 20, Source file: ClientNamenodeProtocolServerSideTranslatorPB.java


Example 6: run

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
@Override
public int run(Configuration conf, List<String> args) throws IOException {
  final DistributedFileSystem dfs = AdminHelper.getDFS(conf);
  try {
    BlockStoragePolicy[] policies = dfs.getStoragePolicies();
    System.out.println("Block Storage Policies:");
    for (BlockStoragePolicy policy : policies) {
      if (policy != null) {
        System.out.println("\t" + policy);
      }
    }
  } catch (IOException e) {
    System.err.println(AdminHelper.prettifyException(e));
    return 2;
  }
  return 0;
}
 
Developer: naver, Project: hadoop, Lines: 18, Source file: StoragePolicyAdmin.java


Example 7: verifyFile

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
private void verifyFile(final Path parent, final HdfsFileStatus status,
    final Byte expectedPolicyId) throws Exception {
  HdfsLocatedFileStatus fileStatus = (HdfsLocatedFileStatus) status;
  byte policyId = fileStatus.getStoragePolicy();
  BlockStoragePolicy policy = policies.getPolicy(policyId);
  if (expectedPolicyId != null) {
    Assert.assertEquals((byte)expectedPolicyId, policy.getId());
  }
  final List<StorageType> types = policy.chooseStorageTypes(
      status.getReplication());
  for(LocatedBlock lb : fileStatus.getBlockLocations().getLocatedBlocks()) {
    final Mover.StorageTypeDiff diff = new Mover.StorageTypeDiff(types,
        lb.getStorageTypes());
    Assert.assertTrue(fileStatus.getFullName(parent.toString())
        + " with policy " + policy + " has non-empty overlap: " + diff
        + ", the corresponding block is " + lb.getBlock().getLocalBlock(),
        diff.removeOverlap(true));
  }
}
 
Developer: naver, Project: hadoop, Lines: 20, Source file: TestStorageMover.java


Example 8: chooseTarget

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
@Override
public DatanodeStorageInfo[] chooseTarget(String srcPath,
                                  int numOfReplicas,
                                  Node writer,
                                  List<DatanodeStorageInfo> chosenNodes,
                                  boolean returnChosenNodes,
                                  Set<Node> excludedNodes,
                                  long blocksize,
                                  final BlockStoragePolicy storagePolicy) {
  DatanodeStorageInfo[] results = super.chooseTarget(srcPath,
      numOfReplicas, writer, chosenNodes, returnChosenNodes, excludedNodes,
      blocksize, storagePolicy);
  try {
    Thread.sleep(3000);
  } catch (InterruptedException e) {}
  return results;
}
 
Developer: naver, Project: hadoop, Lines: 18, Source file: TestDeleteRace.java


Example 9: testMultipleHots

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
@Test
public void testMultipleHots() {
  BlockStoragePolicySuite bsps = BlockStoragePolicySuite.createDefaultSuite();
  StoragePolicySummary sts = new StoragePolicySummary(bsps.getAllPolicies());
  BlockStoragePolicy hot = bsps.getPolicy("HOT");
  sts.add(new StorageType[]{StorageType.DISK},hot);
  sts.add(new StorageType[]{StorageType.DISK,StorageType.DISK},hot);
  sts.add(new StorageType[]{StorageType.DISK,
      StorageType.DISK,StorageType.DISK},hot);
  sts.add(new StorageType[]{StorageType.DISK,
      StorageType.DISK,StorageType.DISK,StorageType.DISK},hot);
  Map<String, Long> actualOutput = convertToStringMap(sts);
  Assert.assertEquals(4,actualOutput.size());
  Map<String, Long>  expectedOutput = new HashMap<>();
  expectedOutput.put("HOT|DISK:1(HOT)", 1l);
  expectedOutput.put("HOT|DISK:2(HOT)", 1l);
  expectedOutput.put("HOT|DISK:3(HOT)", 1l);
  expectedOutput.put("HOT|DISK:4(HOT)", 1l);
  Assert.assertEquals(expectedOutput,actualOutput);
}
 
Developer: naver, Project: hadoop, Lines: 21, Source file: TestStoragePolicySummary.java


Example 10: testMultipleHotsWithDifferentCounts

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
@Test
public void testMultipleHotsWithDifferentCounts() {
  BlockStoragePolicySuite bsps = BlockStoragePolicySuite.createDefaultSuite();
  StoragePolicySummary sts = new StoragePolicySummary(bsps.getAllPolicies());
  BlockStoragePolicy hot = bsps.getPolicy("HOT");
  sts.add(new StorageType[]{StorageType.DISK},hot);
  sts.add(new StorageType[]{StorageType.DISK,StorageType.DISK},hot);
  sts.add(new StorageType[]{StorageType.DISK,StorageType.DISK},hot);
  sts.add(new StorageType[]{StorageType.DISK,
      StorageType.DISK,StorageType.DISK},hot);
  sts.add(new StorageType[]{StorageType.DISK,
      StorageType.DISK,StorageType.DISK},hot);
  sts.add(new StorageType[]{StorageType.DISK,
      StorageType.DISK,StorageType.DISK,StorageType.DISK},hot);
  Map<String, Long> actualOutput = convertToStringMap(sts);
  Assert.assertEquals(4,actualOutput.size());
  Map<String, Long> expectedOutput = new HashMap<>();
  expectedOutput.put("HOT|DISK:1(HOT)", 1l);
  expectedOutput.put("HOT|DISK:2(HOT)", 2l);
  expectedOutput.put("HOT|DISK:3(HOT)", 2l);
  expectedOutput.put("HOT|DISK:4(HOT)", 1l);
  Assert.assertEquals(expectedOutput,actualOutput);
}
 
Developer: naver, Project: hadoop, Lines: 24, Source file: TestStoragePolicySummary.java


Example 11: computeQuotaDeltaForUCBlock

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
/** Compute quota change for converting a complete block to a UC block. */
private static QuotaCounts computeQuotaDeltaForUCBlock(FSNamesystem fsn,
    INodeFile file) {
  final QuotaCounts delta = new QuotaCounts.Builder().build();
  final BlockInfo lastBlock = file.getLastBlock();
  if (lastBlock != null) {
    final long diff = file.getPreferredBlockSize() - lastBlock.getNumBytes();
    final short repl = lastBlock.getReplication();
    delta.addStorageSpace(diff * repl);
    final BlockStoragePolicy policy = fsn.getFSDirectory()
        .getBlockStoragePolicySuite().getPolicy(file.getStoragePolicyID());
    List<StorageType> types = policy.chooseStorageTypes(repl);
    for (StorageType t : types) {
      if (t.supportTypeQuota()) {
        delta.addTypeSpace(t, diff);
      }
    }
  }
  return delta;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 21, Source file: FSDirAppendOp.java


Example 12: computeContentSummary

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
@Override
public final ContentSummaryComputationContext computeContentSummary(
    int snapshotId, final ContentSummaryComputationContext summary) {
  final ContentCounts counts = summary.getCounts();
  counts.addContent(Content.FILE, 1);
  final long fileLen = computeFileSize(snapshotId);
  counts.addContent(Content.LENGTH, fileLen);
  counts.addContent(Content.DISKSPACE, storagespaceConsumed(null)
      .getStorageSpace());

  if (getStoragePolicyID() != BLOCK_STORAGE_POLICY_ID_UNSPECIFIED){
    BlockStoragePolicy bsp = summary.getBlockStoragePolicySuite().
        getPolicy(getStoragePolicyID());
    List<StorageType> storageTypes = bsp.chooseStorageTypes(getFileReplication());
    for (StorageType t : storageTypes) {
      if (!t.supportTypeQuota()) {
        continue;
      }
      counts.addTypeSpace(t, fileLen);
    }
  }
  return summary;
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 24, Source file: INodeFile.java


Example 13: getStoragePolicy

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
static BlockStoragePolicy getStoragePolicy(FSDirectory fsd, BlockManager bm,
    String path) throws IOException {
  FSPermissionChecker pc = fsd.getPermissionChecker();
  byte[][] pathComponents = FSDirectory
      .getPathComponentsForReservedPath(path);
  fsd.readLock();
  try {
    path = fsd.resolvePath(pc, path, pathComponents);
    final INodesInPath iip = fsd.getINodesInPath(path, false);
    if (fsd.isPermissionEnabled()) {
      fsd.checkPathAccess(pc, iip, FsAction.READ);
    }
    INode inode = iip.getLastINode();
    if (inode == null) {
      throw new FileNotFoundException("File/Directory does not exist: "
          + iip.getPath());
    }
    return bm.getStoragePolicy(inode.getStoragePolicyID());
  } finally {
    fsd.readUnlock();
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 23, Source file: FSDirAttrOp.java


Example 14: cleanFile

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
public void cleanFile(INode.ReclaimContext reclaimContext,
    final INodeFile file, final int snapshotId, int priorSnapshotId,
    byte storagePolicyId) {
  if (snapshotId == Snapshot.CURRENT_STATE_ID) {
    // delete the current file while the file has snapshot feature
    if (!isCurrentFileDeleted()) {
      file.recordModification(priorSnapshotId);
      deleteCurrentFile();
    }
    final BlockStoragePolicy policy = reclaimContext.storagePolicySuite()
        .getPolicy(storagePolicyId);
    QuotaCounts old = file.storagespaceConsumed(policy);
    collectBlocksAndClear(reclaimContext, file);
    QuotaCounts current = file.storagespaceConsumed(policy);
    reclaimContext.quotaDelta().add(old.subtract(current));
  } else { // delete the snapshot
    priorSnapshotId = getDiffs().updatePrior(snapshotId, priorSnapshotId);
    diffs.deleteSnapshotDiff(reclaimContext, snapshotId, priorSnapshotId,
        file);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 22, Source file: FileWithSnapshotFeature.java


Example 15: getStoragePolicies

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
@Override
public GetStoragePoliciesResponseProto getStoragePolicies(
    RpcController controller, GetStoragePoliciesRequestProto request)
    throws ServiceException {
  try {
    BlockStoragePolicy[] policies = server.getStoragePolicies();
    GetStoragePoliciesResponseProto.Builder builder =
        GetStoragePoliciesResponseProto.newBuilder();
    if (policies == null) {
      return builder.build();
    }
    for (BlockStoragePolicy policy : policies) {
      builder.addPolicies(PBHelperClient.convert(policy));
    }
    return builder.build();
  } catch (IOException e) {
    throw new ServiceException(e);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 20, Source file: ClientNamenodeProtocolServerSideTranslatorPB.java


Example 16: getStoragePolicies

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
/**
 * @return All the existing storage policies
 */
public BlockStoragePolicy[] getStoragePolicies() throws IOException {
  TraceScope scope = Trace.startSpan("getStoragePolicies", traceSampler);
  try {
    return namenode.getStoragePolicies();
  } finally {
    scope.close();
  }
}
 
Developer: naver, Project: hadoop, Lines: 12, Source file: DFSClient.java


Example 17: computeContentSummary

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
@Override
public final ContentSummaryComputationContext computeContentSummary(
    final ContentSummaryComputationContext summary) {
  final ContentCounts counts = summary.getCounts();
  FileWithSnapshotFeature sf = getFileWithSnapshotFeature();
  long fileLen = 0;
  if (sf == null) {
    fileLen = computeFileSize();
    counts.addContent(Content.FILE, 1);
  } else {
    final FileDiffList diffs = sf.getDiffs();
    final int n = diffs.asList().size();
    counts.addContent(Content.FILE, n);
    if (n > 0 && sf.isCurrentFileDeleted()) {
      fileLen =  diffs.getLast().getFileSize();
    } else {
      fileLen = computeFileSize();
    }
  }
  counts.addContent(Content.LENGTH, fileLen);
  counts.addContent(Content.DISKSPACE, storagespaceConsumed());

  if (getStoragePolicyID() != ID_UNSPECIFIED){
    BlockStoragePolicy bsp = summary.getBlockStoragePolicySuite().
        getPolicy(getStoragePolicyID());
    List<StorageType> storageTypes = bsp.chooseStorageTypes(getFileReplication());
    for (StorageType t : storageTypes) {
      if (!t.supportTypeQuota()) {
        continue;
      }
      counts.addTypeSpace(t, fileLen);
    }
  }
  return summary;
}
 
Developer: naver, Project: hadoop, Lines: 36, Source file: INodeFile.java


Example 18: setStoragePolicy

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
static HdfsFileStatus setStoragePolicy(
    FSDirectory fsd, BlockManager bm, String src, final String policyName)
    throws IOException {
  if (!fsd.isStoragePolicyEnabled()) {
    throw new IOException(
        "Failed to set storage policy since "
            + DFS_STORAGE_POLICY_ENABLED_KEY + " is set to false.");
  }
  FSPermissionChecker pc = fsd.getPermissionChecker();
  byte[][] pathComponents = FSDirectory.getPathComponentsForReservedPath(src);
  INodesInPath iip;
  fsd.writeLock();
  try {
    src = FSDirectory.resolvePath(src, pathComponents, fsd);
    iip = fsd.getINodesInPath4Write(src);

    if (fsd.isPermissionEnabled()) {
      fsd.checkPathAccess(pc, iip, FsAction.WRITE);
    }

    // get the corresponding policy and make sure the policy name is valid
    BlockStoragePolicy policy = bm.getStoragePolicy(policyName);
    if (policy == null) {
      throw new HadoopIllegalArgumentException(
          "Cannot find a block policy with the name " + policyName);
    }
    unprotectedSetStoragePolicy(fsd, bm, iip, policy.getId());
    fsd.getEditLog().logSetStoragePolicy(src, policy.getId());
  } finally {
    fsd.writeUnlock();
  }
  return fsd.getAuditFileInfo(iip);
}
 
Developer: naver, Project: hadoop, Lines: 34, Source file: FSDirAttrOp.java


Example 19: add

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
void add(StorageType[] storageTypes, BlockStoragePolicy policy) {
  StorageTypeAllocation storageCombo = 
      new StorageTypeAllocation(storageTypes, policy);
  Long count = storageComboCounts.get(storageCombo);
  if (count == null) {
    storageComboCounts.put(storageCombo, 1l);
    storageCombo.setActualStoragePolicy(
        getStoragePolicy(storageCombo.getStorageTypes()));
  } else {
    storageComboCounts.put(storageCombo, count.longValue()+1);
  }
  totalBlocks++;
}
 
Developer: naver, Project: hadoop, Lines: 14, Source file: StoragePolicySummary.java


Example 20: getStoragePolicy

import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; // import the required package/class
/**
 * 
 * @param storageTypes - sorted array of storageTypes
 * @return Storage Policy which matches the specific storage Combination
 */
private BlockStoragePolicy getStoragePolicy(StorageType[] storageTypes) {
  for (BlockStoragePolicy storagePolicy:storagePolicies) {
    StorageType[] policyStorageTypes = storagePolicy.getStorageTypes();
    policyStorageTypes = Arrays.copyOf(policyStorageTypes, policyStorageTypes.length);
    Arrays.sort(policyStorageTypes);
    if (policyStorageTypes.length <= storageTypes.length) {
      int i = 0; 
      for (; i < policyStorageTypes.length; i++) {
        if (policyStorageTypes[i] != storageTypes[i]) {
          break;
        }
      }
      if (i < policyStorageTypes.length) {
        continue;
      }
      int j=policyStorageTypes.length;
      for (; j < storageTypes.length; j++) {
        if (policyStorageTypes[i-1] != storageTypes[j]) {
          break;
        }
      }

      if (j==storageTypes.length) {
        return storagePolicy;
      }
    }
  }
  return null;
}
 
Developer: naver, Project: hadoop, Lines: 35, Source file: StoragePolicySummary.java

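The matching loop in Example 20 encodes a two-phase rule: the sorted actual storage types must begin with the policy's sorted type list, and any extra replicas beyond that list must all use the policy's last (largest-sorted) type. Here is a self-contained sketch of that rule, using plain strings instead of the Hadoop StorageType enum; the method name matchesPolicy is invented for illustration, and both arrays are assumed non-empty and pre-sorted, as in the original code.

```java
import java.util.Arrays;

public class StoragePolicyMatchSketch {
    /**
     * Returns true when the sorted actual types start with the sorted
     * policy types and every extra actual type repeats the policy's
     * last type -- the same rule as Example 20's getStoragePolicy loop.
     */
    static boolean matchesPolicy(String[] policyTypes, String[] actualTypes) {
        if (policyTypes.length > actualTypes.length) {
            return false;
        }
        // Phase 1: the actual types must start with the policy's types.
        for (int i = 0; i < policyTypes.length; i++) {
            if (!policyTypes[i].equals(actualTypes[i])) {
                return false;
            }
        }
        // Phase 2: any extras must all repeat the policy's last type.
        String last = policyTypes[policyTypes.length - 1];
        for (int j = policyTypes.length; j < actualTypes.length; j++) {
            if (!last.equals(actualTypes[j])) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A HOT-like policy: all replicas on DISK.
        String[] hot = {"DISK"};
        System.out.println(matchesPolicy(hot, new String[]{"DISK", "DISK", "DISK"})); // true
        // One replica on ARCHIVE breaks the match (inputs sorted first).
        String[] actual = {"DISK", "ARCHIVE", "DISK"};
        Arrays.sort(actual);
        System.out.println(matchesPolicy(hot, actual)); // false
    }
}
```

This is why Example 20 sorts a defensive copy of each policy's type array before comparing: the rule is defined on sorted sequences, not on placement order.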


Note: The org.apache.hadoop.hdfs.protocol.BlockStoragePolicy examples in this article were collected from open-source projects hosted on platforms such as GitHub and MSDocs. The code snippets remain the copyright of their original authors; consult each project's license before redistributing or reusing them.

