This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.FSLimitException. If you are wondering what FSLimitException is for, or how to use it in practice, the curated examples below may help.
The FSLimitException class belongs to the org.apache.hadoop.hdfs.protocol package. Three code examples are shown below, sorted by popularity by default. Upvoting the examples you find useful helps the system recommend better Java code samples.
Example 1: verifySnapshotName
import org.apache.hadoop.hdfs.protocol.FSLimitException; // import the required package/class

/** Verify if the snapshot name is legal. */
static void verifySnapshotName(FSDirectory fsd, String snapshotName,
    String path)
    throws FSLimitException.PathComponentTooLongException {
  if (snapshotName.contains(Path.SEPARATOR)) {
    throw new HadoopIllegalArgumentException(
        "Snapshot name cannot contain \"" + Path.SEPARATOR + "\"");
  }
  final byte[] bytes = DFSUtil.string2Bytes(snapshotName);
  fsd.verifyINodeName(bytes);
  fsd.verifyMaxComponentLength(bytes, path);
}
Developer ID: naver, Project: hadoop, Lines of code: 13, Source file: FSDirSnapshotOp.java
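The separator check at the top of verifySnapshotName can be sketched without any Hadoop dependencies. The class and method names below (SnapshotNameCheck, isLegalSnapshotName) are hypothetical, and the separator is hard-coded to "/", which matches the value of Hadoop's Path.SEPARATOR; this is only an illustrative sketch, not the Hadoop API:

```java
// Hypothetical standalone sketch of the snapshot-name separator check.
// "/" stands in for org.apache.hadoop.fs.Path.SEPARATOR.
public class SnapshotNameCheck {
    static final String SEPARATOR = "/";

    /** Returns true if the snapshot name contains no path separator. */
    static boolean isLegalSnapshotName(String name) {
        return !name.contains(SEPARATOR);
    }

    public static void main(String[] args) {
        System.out.println(isLegalSnapshotName("s1"));  // a plain name is legal
        System.out.println(isLegalSnapshotName("a/b")); // an embedded "/" is rejected
    }
}
```

In the real method, an illegal name raises HadoopIllegalArgumentException rather than returning a boolean; the boolean form here just makes the predicate easy to demonstrate.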
Example 2: verifyFsLimits
import org.apache.hadoop.hdfs.protocol.FSLimitException; // import the required package/class

/**
 * Verify that filesystem limit constraints are not violated
 *
 * @throws PathComponentTooLongException
 *           child's name is too long
 * @throws MaxDirectoryItemsExceededException
 *           items per directory is exceeded
 */
protected <T extends INode> void verifyFsLimits(INode[] pathComponents,
    int pos, T child)
    throws FSLimitException, StorageException, TransactionContextException {
  boolean includeChildName = false;
  try {
    if (maxComponentLength != 0) {
      int length = child.getLocalName().length();
      if (length > maxComponentLength) {
        includeChildName = true;
        throw new PathComponentTooLongException(maxComponentLength, length);
      }
    }
    if (maxDirItems != 0) {
      INodeDirectory parent = (INodeDirectory) pathComponents[pos - 1];
      int count = parent.getChildrenList().size();
      if (count >= maxDirItems) {
        throw new MaxDirectoryItemsExceededException(maxDirItems, count);
      }
    }
  } catch (FSLimitException e) {
    String badPath = getFullPathName(pathComponents, pos - 1);
    if (includeChildName) {
      badPath += Path.SEPARATOR + child.getLocalName();
    }
    e.setPathName(badPath);
    // Do not throw if edits log is still being processed
    if (ready) {
      throw (e);
    }
    // log pre-existing paths that exceed limits
    NameNode.LOG
        .error("FSDirectory.verifyFsLimits - " + e.getLocalizedMessage());
  }
}
Developer ID: hopshadoop, Project: hops, Lines of code: 43, Source file: FSDirectory.java
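The two constraints checked above (maximum path-component length, maximum items per directory) can be sketched as a standalone class with no Hadoop dependencies. All names here (FsLimitsSketch, checkLimits, violates) and the limit values 255 and 1024 are hypothetical placeholders; as in the real code, a limit of 0 would disable that check, and a plain IllegalStateException stands in for the FSLimitException subclasses:

```java
// Hypothetical standalone sketch of the verifyFsLimits checks.
public class FsLimitsSketch {
    static final int MAX_COMPONENT_LENGTH = 255; // assumed limit; 0 disables the check
    static final int MAX_DIR_ITEMS = 1024;       // assumed limit; 0 disables the check

    /** Throws IllegalStateException when either limit is violated. */
    static void checkLimits(String childName, int parentItemCount) {
        if (MAX_COMPONENT_LENGTH != 0 && childName.length() > MAX_COMPONENT_LENGTH) {
            throw new IllegalStateException(
                "path component too long: " + childName.length());
        }
        if (MAX_DIR_ITEMS != 0 && parentItemCount >= MAX_DIR_ITEMS) {
            throw new IllegalStateException(
                "max directory items exceeded: " + parentItemCount);
        }
    }

    /** Convenience wrapper: true if checkLimits would throw. */
    static boolean violates(String childName, int parentItemCount) {
        try {
            checkLimits(childName, parentItemCount);
            return false;
        } catch (IllegalStateException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(violates("ok", 10));             // within both limits
        System.out.println(violates("x".repeat(300), 10));  // name too long
        System.out.println(violates("ok", 5000));           // too many items
    }
}
```

Note the real method additionally builds the offending path for the exception message and, while the edit log is still being replayed (ready == false), only logs the violation instead of throwing.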
Example 3: verifyFsLimits
import org.apache.hadoop.hdfs.protocol.FSLimitException; // import the required package/class

@Override
public <T extends INode> void verifyFsLimits(INode[] pathComponents,
    int pos, T child)
    throws FSLimitException, StorageException, TransactionContextException {
  super.verifyFsLimits(pathComponents, pos, child);
}
Developer ID: hopshadoop, Project: hops, Lines of code: 7, Source file: TestFsLimits.java
Note: the org.apache.hadoop.hdfs.protocol.FSLimitException examples in this article were collected from source code and documentation hosted on GitHub, MSDocs, and similar platforms; the snippets come from open-source projects contributed by their respective authors. Copyright remains with the original authors; consult each project's License before redistributing or using the code. Do not reproduce without permission.