Producer condition change + Rename Azcopy tests #127

Merged
merged 41 commits into from Jul 30, 2024
Changes from 35 commits

Commits (41)
993d4f6
Azcopy Helper
Jun 11, 2024
b3ee83b
Test Infra For Implicit Scenarios
Jun 25, 2024
c7f348e
Assert Azcopy creates implicit paths
Jun 25, 2024
f4ec53a
Azcopy helper Refactored
Jul 1, 2024
2ca1b0e
Azcopy helper Test
Jul 1, 2024
b93a862
Singleton AzcopyHelper
Jul 2, 2024
3c841e2
Unused Imports
Jul 2, 2024
2105ca9
Merge branch 'azcopyHelperCode' into sp/azcopyTests
saxenapranav Jul 2, 2024
12e89de
explicit and implicit tests for rename
saxenapranav Jul 2, 2024
09f40f8
Azcopy Helper
Jun 11, 2024
ed652c8
Test Infra For Implicit Scenarios
Jun 25, 2024
c09c89b
Assert Azcopy creates implicit paths
Jun 25, 2024
ba9c62c
Azcopy helper Refactored
Jul 1, 2024
e8d5a35
Azcopy helper Test
Jul 1, 2024
37c70f1
Singleton AzcopyHelper
Jul 2, 2024
0ee6532
Unused Imports
Jul 2, 2024
299c86d
Resolved Comments
Jul 2, 2024
12d68ad
Azcopy Helper Works only on Blob Endpoint
Jul 2, 2024
138662e
test fixed
saxenapranav Jul 3, 2024
4cf26e5
Merge branch 'azcopyHelperCode' into sp/azcopyTests
saxenapranav Jul 3, 2024
76177f9
fixed some tests; brought in tests
saxenapranav Jul 4, 2024
43e5793
handle implicit src dir. added new test in ITestExplicitImplicitRename
saxenapranav Jul 8, 2024
bfc9732
Merge branch 'wasbDepCodeReview' into sp/azcopyTests
saxenapranav Jul 8, 2024
10fe6b6
make azcopy commands in parallel
saxenapranav Jul 8, 2024
6a93c4c
create a method that can create dir and file in parallel
saxenapranav Jul 8, 2024
e0cc653
parallelised test's az copy create file/dir
saxenapranav Jul 8, 2024
04bc2aa
code improvement wip
saxenapranav Jul 8, 2024
c203ce0
added javadocs; some test refactors
saxenapranav Jul 8, 2024
00cec1c
test run improvement + refactor
saxenapranav Jul 8, 2024
f618c73
assume if blob endpoint in testAtomicityRedoInvalidFile
saxenapranav Jul 9, 2024
3401e5c
add test for asserting getPathStatus and getListStatus doesn't resume …
saxenapranav Jul 9, 2024
0158163
maxConsumptionLag javadocs; from config
saxenapranav Jul 10, 2024
9bd4eb5
test added
saxenapranav Jul 16, 2024
8422d3e
refactors to get createNonRecursive running
saxenapranav Jul 23, 2024
4158fbf
added test
saxenapranav Jul 23, 2024
efc70fa
nit refactors
saxenapranav Jul 25, 2024
c0ef826
correct way of implementation for createNonRecursive. The aim is to h…
saxenapranav Jul 25, 2024
c37f21d
flow of statistic increment
saxenapranav Jul 26, 2024
74de696
clean changes
saxenapranav Jul 26, 2024
6a37197
asf
saxenapranav Jul 26, 2024
b2a78d9
nits
saxenapranav Jul 26, 2024
File: AbfsConfiguration.java
@@ -77,7 +77,9 @@
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY;
import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.EMPTY_STRING;
import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.*;
import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_LEASE_CREATE_NON_RECURSIVE;
import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.*;
import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.DEFAULT_FS_AZURE_LEASE_CREATE_NON_RECURSIVE;

/**
* Configuration for Azure Blob FileSystem.
@@ -409,6 +411,10 @@ public class AbfsConfiguration{
FS_AZURE_PRODUCER_QUEUE_MAX_SIZE, DefaultValue = DEFAULT_FS_AZURE_PRODUCER_QUEUE_MAX_SIZE)
private int producerQueueMaxSize;

@IntegerConfigurationValidatorAnnotation(ConfigurationKey =
FS_AZURE_CONSUMER_MAX_LAG, DefaultValue = DEFAULT_FS_AZURE_CONSUMER_MAX_LAG)
private int listingMaxConsumptionLag;

@IntegerConfigurationValidatorAnnotation(ConfigurationKey =
FS_AZURE_BLOB_DIR_RENAME_MAX_THREAD, DefaultValue = DEFAULT_FS_AZURE_BLOB_RENAME_THREAD)
private int blobRenameDirConsumptionParallelism;
@@ -417,6 +423,10 @@ public class AbfsConfiguration{
FS_AZURE_BLOB_DIR_DELETE_MAX_THREAD, DefaultValue = DEFAULT_FS_AZURE_BLOB_DELETE_THREAD)
private int blobDeleteDirConsumptionParallelism;

@BooleanConfigurationValidatorAnnotation(ConfigurationKey =
FS_AZURE_LEASE_CREATE_NON_RECURSIVE, DefaultValue = DEFAULT_FS_AZURE_LEASE_CREATE_NON_RECURSIVE)
private boolean leaseOnCreateNonRecursive;
Collaborator:
Nit: boolean variable. Better to change to isLeaseOnCreateNonRecursiveEnabled

Collaborator Author:
Taken.


private String clientProvidedEncryptionKey;
private String clientProvidedEncryptionKeySHA;

@@ -1446,11 +1456,19 @@ public int getProducerQueueMaxSize() {
return producerQueueMaxSize;
}

public int getListingMaxConsumptionLag() {
return listingMaxConsumptionLag;
}

public int getBlobRenameDirConsumptionParallelism() {
return blobRenameDirConsumptionParallelism;
}

public int getBlobDeleteDirConsumptionParallelism() {
return blobDeleteDirConsumptionParallelism;
}

public boolean isLeaseOnCreateNonRecursive() {
return leaseOnCreateNonRecursive;
}
}
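The annotated fields above are populated from the site configuration by key, falling back to the defaults in FileSystemConfigurations. A minimal sketch of that key-with-default lookup, using a plain map as a stand-in for Hadoop's `Configuration` class (the 5000 default is an assumption about DEFAULT_AZURE_LIST_MAX_RESULTS, not stated in this diff):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the Configuration lookup behind the annotations:
// return the configured value for a key, or the default when the key is unset.
class ConfigSketch {
    static final String FS_AZURE_CONSUMER_MAX_LAG =
        "fs.azure.blob.dir.list.consumer.max.lag";
    static final int DEFAULT_FS_AZURE_CONSUMER_MAX_LAG = 5000; // assumed value

    private final Map<String, String> props = new HashMap<>();

    void set(String key, String value) {
        props.put(key, value);
    }

    int getInt(String key, int defaultValue) {
        String v = props.get(key);
        return v == null ? defaultValue : Integer.parseInt(v);
    }
}
```

A getter such as `getListingMaxConsumptionLag()` then simply returns the value resolved this way at filesystem initialization.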
File: AzureBlobFileSystem.java
@@ -362,6 +362,14 @@ public FSDataOutputStream create(final Path f,
final short replication,
final long blockSize,
final Progressable progress) throws IOException {
return createInternal(f, permission, overwrite, blockSize, false);
}

private FSDataOutputStream createInternal(final Path f,
Collaborator:
Seems like this refactoring is being done only for the Blob endpoint. The new parameter is not used by the DFS client. Can we have this handling only in ABFSBlobClient?

Let's discuss this offline if it's possible.

Collaborator Author:
The new field just tells the BlobClient whether the createPath is for the createNonRecursive HDFS API or the create HDFS API. The orchestration required for createNonRecursive on blob is done in the BlobClient only; this field just propagates to the client which HDFS API invoked it.

final FsPermission permission,
final boolean overwrite,
final long blockSize,
final boolean isNonRecursiveCreate) throws IOException {
LOG.debug("AzureBlobFileSystem.create path: {} permission: {} overwrite: {} bufferSize: {}",
f,
permission,
@@ -382,9 +390,10 @@ public FSDataOutputStream create(final Path f,
try {
TracingContext tracingContext = new TracingContext(clientCorrelationId,
fileSystemId, FSOperationType.CREATE, overwrite, tracingHeaderFormat, listener);
OutputStream outputStream = getAbfsStore().createFile(qualifiedPath, statistics, overwrite,
OutputStream outputStream = getAbfsStore().createFile(qualifiedPath, statistics,
overwrite,
permission == null ? FsPermission.getFileDefault() : permission,
FsPermission.getUMask(getConf()), tracingContext);
FsPermission.getUMask(getConf()), isNonRecursiveCreate, tracingContext);
statIncrement(FILES_CREATED);
return new FSDataOutputStream(outputStream, statistics);
} catch (AzureBlobFileSystemException ex) {
@@ -417,7 +426,7 @@ public FSDataOutputStream createNonRecursive(final Path f, final FsPermission pe
+ f.getName() + " because parent folder does not exist.");
}

return create(f, permission, overwrite, bufferSize, replication, blockSize, progress);
return createInternal(f, permission, overwrite, blockSize, true);
Collaborator:
Seems like these parameters were not used. Should we still keep them to reduce unnecessary diffs?
No issues in removing them either.

Collaborator Author:
Taken.

}
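The refactor above funnels both `create` and `createNonRecursive` through one internal method, with a boolean recording which HDFS API invoked it. A stripped-down sketch of that delegation pattern (hypothetical names, not the actual `AzureBlobFileSystem` code):

```java
// Hypothetical sketch: two public entry points share one internal path and
// propagate which API was called via a flag, as in the diff above.
class CreateDelegationSketch {
    String lastInvokedApi;

    String create(String path) {
        return createInternal(path, false);
    }

    String createNonRecursive(String path) {
        // the real caller has already verified that the parent exists
        return createInternal(path, true);
    }

    private String createInternal(String path, boolean isNonRecursiveCreate) {
        // record which API reached this shared path; the real code instead
        // forwards the flag down to the store/client layer
        lastInvokedApi = isNonRecursiveCreate ? "createNonRecursive" : "create";
        return path;
    }
}
```

The value of the single funnel is that statistics, tracing, and permission handling stay in one place while only the flag differs per entry point.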

@Override
File: AzureBlobFileSystemStore.java
@@ -603,7 +603,7 @@ public void deleteFilesystem(TracingContext tracingContext)
public OutputStream createFile(final Path path,
final FileSystem.Statistics statistics, final boolean overwrite,
final FsPermission permission, final FsPermission umask,
TracingContext tracingContext) throws IOException {
final boolean isRecursiveCreate, TracingContext tracingContext) throws IOException {
Collaborator:
The parameter passed to the store is named isNonRecursiveCreate, while the parameter accepted by the store is named isRecursiveCreate. This seems confusing.

Collaborator:
Seems buggy as well...

Collaborator Author:
+1 on the confusion. It should be all about nonRecursiveCreate. Can you please explain what seems wrong here? :)

try (AbfsPerfInfo perfInfo = startTracking("createFile", "createPath")) {
AbfsClient createClient = getClientHandler().getIngressClient();
boolean isNamespaceEnabled = getIsNamespaceEnabled(tracingContext);
@@ -644,6 +644,7 @@ public OutputStream createFile(
new Permissions(isNamespaceEnabled, permission, umask),
isAppendBlob,
contextEncryptionAdapter,
isRecursiveCreate,
tracingContext
);

@@ -654,6 +655,7 @@
isAppendBlob,
null,
contextEncryptionAdapter,
isRecursiveCreate,
tracingContext, isNamespaceEnabled);

}
@@ -678,27 +680,32 @@ public OutputStream createFile(
/**
* Conditional create overwrite flow ensures that create overwrites is done
* only if there is match for eTag of existing file.
*
* @param relativePath
* @param statistics
* @param permissions contains permission and umask
* @param isAppendBlob
* @param isRecursiveCreate
*
* @return
*
* @throws AzureBlobFileSystemException
*/
private AbfsRestOperation conditionalCreateOverwriteFile(final String relativePath,
final FileSystem.Statistics statistics,
final Permissions permissions,
final boolean isAppendBlob,
final ContextEncryptionAdapter contextEncryptionAdapter,
final TracingContext tracingContext) throws IOException {
final boolean isRecursiveCreate, final TracingContext tracingContext) throws IOException {
AbfsRestOperation op;
AbfsClient createClient = getClientHandler().getIngressClient();
try {
// Trigger a create with overwrite=false first so that eTag fetch can be
// avoided for cases when no pre-existing file is present (major portion
// of create file traffic falls into the case of no pre-existing file).
op = createClient.createPath(relativePath, true, false, permissions,
isAppendBlob, null, contextEncryptionAdapter, tracingContext, getIsNamespaceEnabled(tracingContext));
isAppendBlob, null, contextEncryptionAdapter, isRecursiveCreate,
tracingContext, getIsNamespaceEnabled(tracingContext));

} catch (AbfsRestOperationException e) {
if (e.getStatusCode() == HttpURLConnection.HTTP_CONFLICT) {
@@ -722,7 +729,8 @@ private AbfsRestOperation conditionalCreateOverwriteFile(final String relativePa
try {
// overwrite only if eTag matches with the file properties fetched before
op = createClient.createPath(relativePath, true, true, permissions,
isAppendBlob, eTag, contextEncryptionAdapter, tracingContext, getIsNamespaceEnabled(tracingContext));
isAppendBlob, eTag, contextEncryptionAdapter, isRecursiveCreate,
tracingContext, getIsNamespaceEnabled(tracingContext));
} catch (AbfsRestOperationException ex) {
if (ex.getStatusCode() == HttpURLConnection.HTTP_PRECON_FAILED) {
// Is a parallel access case, as file with eTag was just queried
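The conditional create-overwrite flow above can be simulated end to end: attempt the create with overwrite=false first; on a 409 conflict, fetch the existing file's eTag and retry with overwrite=true conditioned on that eTag; a 412 on the retry signals a parallel writer. A self-contained sketch with an in-memory store standing in for the service (hypothetical names; only the HTTP status codes 409/412 come from the diff):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simulates the eTag-guarded conditional-overwrite create described above.
class ConditionalCreateSketch {
    static class ConflictException extends RuntimeException {
        final int status;
        ConflictException(int status) { this.status = status; }
    }

    private final Map<String, String> eTags = new ConcurrentHashMap<>();
    private int eTagCounter = 0;

    // Low-level create: honours overwrite and an optional expected eTag.
    synchronized String createPath(String path, boolean overwrite, String expectedETag) {
        String current = eTags.get(path);
        if (current != null && !overwrite) {
            throw new ConflictException(409);    // pre-existing file
        }
        if (expectedETag != null && !expectedETag.equals(current)) {
            throw new ConflictException(412);    // modified by a parallel writer
        }
        String newTag = "etag-" + (++eTagCounter);
        eTags.put(path, newTag);
        return newTag;
    }

    synchronized String getETag(String path) { return eTags.get(path); }

    // The flow from the diff: cheap overwrite=false attempt first (no eTag
    // fetch when no file pre-exists), then an eTag-guarded overwrite.
    String conditionalCreateOverwrite(String path) {
        try {
            return createPath(path, false, null);
        } catch (ConflictException e) {
            if (e.status != 409) throw e;
            String eTag = getETag(path);         // fetch eTag of existing file
            return createPath(path, true, eTag); // overwrite only if unchanged
        }
    }
}
```

The first branch keeps the common case (no pre-existing file) to a single round trip, which is the optimization the code comment in the diff describes.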
@@ -827,7 +835,8 @@ public void createDirectory(final Path path, final FsPermission permission,
Permissions permissions = new Permissions(isNamespaceEnabled,
permission, umask);
final AbfsRestOperation op = createClient.createPath(getRelativePath(path),
false, overwrite, permissions, false, null, null, tracingContext, isNamespaceEnabled);
false, overwrite, permissions, false, null, null, false,
tracingContext, isNamespaceEnabled);
perfInfo.registerResult(op.getResult()).registerSuccess(true);
}
}
File: ConfigurationKeys.java
@@ -357,9 +357,17 @@ public static String accountProperty(String property, String account) {
= "fs.azure.blob.atomic.rename.lease.refresh.duration";
/**Maximum number of blob information enqueued in memory for rename or delete orchestration {@value}*/
public static final String FS_AZURE_PRODUCER_QUEUE_MAX_SIZE = "fs.azure.blob.dir.list.producer.queue.max.size";
/**
* Maximum consumer lag (count of blob entries which are yet to be taken for operation)
* in blob listing which can be tolerated before making the producer wait for
* the consumer lag to become tolerable: {@value}.
*/
public static final String FS_AZURE_CONSUMER_MAX_LAG = "fs.azure.blob.dir.list.consumer.max.lag";
/**Maximum number of thread per blob-rename orchestration {@value}*/
public static final String FS_AZURE_BLOB_DIR_RENAME_MAX_THREAD = "fs.azure.blob.dir.rename.max.thread";
/**Maximum number of thread per blob-delete orchestration {@value}*/
public static final String FS_AZURE_BLOB_DIR_DELETE_MAX_THREAD = "fs.azure.blob.dir.delete.max.thread";
/**Defines whether a lease is to be taken on the parent of an atomic directory on the non-recursive create path {@value}.*/
public static final String FS_AZURE_LEASE_CREATE_NON_RECURSIVE = "fs.azure.lease.create.non.recursive";
private ConfigurationKeys() {}
}
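The new key above bounds how far the listing producer may run ahead of the consumer. A minimal sketch of that back-pressure idea using a bounded queue from `java.util.concurrent` (a stand-in, not the driver's actual queue class; the capacity plays the role of maxConsumptionLag):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: a bounded queue makes the producer block once the consumer lag
// (entries produced but not yet consumed) reaches the configured maximum.
class LagBoundedQueueSketch {
    private final BlockingQueue<String> queue;

    LagBoundedQueueSketch(int maxConsumptionLag) {
        this.queue = new ArrayBlockingQueue<>(maxConsumptionLag);
    }

    // Producer side: waits while the lag is at its tolerated maximum.
    void produce(String blobPath) {
        try {
            queue.put(blobPath);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while producing", e);
        }
    }

    // Consumer side: takes the next blob to operate on, shrinking the lag.
    String consume() {
        try {
            return queue.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while consuming", e);
        }
    }

    int currentLag() {
        return queue.size();
    }
}
```

With the defaults in this diff, the producer would enqueue up to one listing page ahead before blocking, keeping memory for rename/delete orchestration bounded.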
File: FileSystemConfigurations.java
@@ -36,7 +36,7 @@ public final class FileSystemConfigurations {
public static final boolean DEFAULT_FS_AZURE_ACCOUNT_IS_EXPECT_HEADER_ENABLED = true;
public static final String USER_HOME_DIRECTORY_PREFIX = "/user";

private static final int SIXTY_SECONDS = 60_000;
public static final int SIXTY_SECONDS = 60_000;

// Retry parameter defaults.
public static final int DEFAULT_MIN_BACKOFF_INTERVAL = 3_000; // 3s
@@ -170,10 +170,12 @@ public final class FileSystemConfigurations {
public static final long DEFAULT_AZURE_BLOB_COPY_PROGRESS_WAIT_MILLIS = 1_000L;
public static final long
DEFAULT_AZURE_BLOB_ATOMIC_RENAME_LEASE_REFRESH_DURATION = 60_000L;
public static final int DEFAULT_FS_AZURE_PRODUCER_QUEUE_MAX_SIZE = 10000;

public static final int DEFAULT_FS_AZURE_BLOB_RENAME_THREAD = 5;
public static final int DEFAULT_FS_AZURE_BLOB_DELETE_THREAD = 5;
public static final int DEFAULT_FS_AZURE_PRODUCER_QUEUE_MAX_SIZE = 2 * DEFAULT_AZURE_LIST_MAX_RESULTS;
public static final int DEFAULT_FS_AZURE_CONSUMER_MAX_LAG = DEFAULT_AZURE_LIST_MAX_RESULTS;
public static final int DEFAULT_FS_AZURE_LISTING_ACTION_THREADS = 5;
public static final int DEFAULT_FS_AZURE_BLOB_RENAME_THREAD = DEFAULT_FS_AZURE_LISTING_ACTION_THREADS;
public static final int DEFAULT_FS_AZURE_BLOB_DELETE_THREAD = DEFAULT_FS_AZURE_LISTING_ACTION_THREADS;
public static final boolean DEFAULT_FS_AZURE_LEASE_CREATE_NON_RECURSIVE = false;
public static final int BLOCK_ID_LENGTH = 60;
private FileSystemConfigurations() {}
}
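The reworked defaults above tie the producer queue and consumer lag to the listing page size: the queue holds up to two listing pages and the tolerated lag is one page. The relationships can be written out directly (the 5000 page size is an assumption about DEFAULT_AZURE_LIST_MAX_RESULTS, which is not shown in this diff):

```java
// Sketch of the derived defaults above; the LIST_MAX_RESULTS value is assumed.
class DefaultsSketch {
    static final int DEFAULT_AZURE_LIST_MAX_RESULTS = 5000; // assumed value

    // Producer queue holds at most two listing pages of blob entries.
    static final int DEFAULT_FS_AZURE_PRODUCER_QUEUE_MAX_SIZE =
        2 * DEFAULT_AZURE_LIST_MAX_RESULTS;

    // Tolerated consumer lag is one listing page.
    static final int DEFAULT_FS_AZURE_CONSUMER_MAX_LAG =
        DEFAULT_AZURE_LIST_MAX_RESULTS;
}
```

Deriving both from the page size means a change to the listing page size automatically rescales the queue bound and the lag tolerance together.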
File: AbfsBlobClient.java
@@ -117,6 +117,7 @@
import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.XMS_PROPERTIES_ENCODING_ASCII;
import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.XMS_PROPERTIES_ENCODING_UNICODE;
import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.ZERO;
import static org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.SIXTY_SECONDS;
import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.ACCEPT;
import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.CONTENT_LENGTH;
import static org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations.CONTENT_TYPE;
@@ -312,9 +313,13 @@ public AbfsRestOperation deleteFilesystem(TracingContext tracingContext)
/**
* Get Rest Operation for API <a href = https://learn.microsoft.com/en-us/rest/api/storageservices/put-blob></a>.
* Creates a file or directory (marker file) at the specified path.
*
* @param path of the directory to be created.
* @param isRecursiveCreate
* @param tracingContext
*
* @return executed rest operation containing response from server.
*
* @throws AzureBlobFileSystemException if rest operation fails.
*/
@Override
@@ -325,11 +330,31 @@ public AbfsRestOperation createPath(final String path,
final boolean isAppendBlob,
final String eTag,
final ContextEncryptionAdapter contextEncryptionAdapter,
final boolean isRecursiveCreate,
final TracingContext tracingContext, final boolean isNamespaceEnabled)
throws AzureBlobFileSystemException {
return createPath(path, isFile, overwrite, permissions, isAppendBlob, eTag,
contextEncryptionAdapter, tracingContext, isNamespaceEnabled, false);
AbfsLease abfsLease = null;
Collaborator:
I might be missing something...
Can you help me recall why we need this lease business now and not earlier??

Or was it just missed earlier?

Collaborator Author:
Yes, it was missed. In the case of createNonRecursive on an atomic path, we have to take a lease on the parent directory.

if (isRecursiveCreate && abfsConfiguration.isLeaseOnCreateNonRecursive()) {
abfsLease = takeAbfsLease(new Path(path).getParent().toUri().getPath(), SIXTY_SECONDS, tracingContext);
}
try {
return createPath(path, isFile, overwrite, permissions, isAppendBlob,
eTag,
contextEncryptionAdapter, tracingContext, isNamespaceEnabled, false);
} finally {
if (abfsLease != null) {
abfsLease.free();
}
}
}

@VisibleForTesting
public AbfsLease takeAbfsLease(final String path,
final long timeDuration,
final TracingContext tracingContext) throws AzureBlobFileSystemException {
return new AbfsLease(this, path, false, timeDuration, null, tracingContext);
}
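The lease handling above follows an acquire/release discipline: take the parent lease only when the config enables it, and free it in a finally block even if the create fails. A simulated sketch of that pattern, with a plain lock map standing in for blob leases (hypothetical names, not the actual AbfsLease implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the acquire-in-try / free-in-finally lease pattern used above.
class ParentLeaseSketch {
    private final Map<String, Boolean> leased = new ConcurrentHashMap<>();

    String acquireLease(String path) {
        if (leased.putIfAbsent(path, Boolean.TRUE) != null) {
            throw new IllegalStateException("lease already held on " + path);
        }
        return path; // the lease handle
    }

    void free(String leaseHandle) {
        leased.remove(leaseHandle);
    }

    boolean isLeased(String path) {
        return leased.containsKey(path);
    }

    // createNonRecursive on an atomic-rename path: lease the parent, perform
    // the create, then always free the lease.
    void createNonRecursive(String path, boolean leaseOnCreateNonRecursive) {
        String parent = path.substring(0, Math.max(1, path.lastIndexOf('/')));
        String lease = null;
        if (leaseOnCreateNonRecursive) {
            lease = acquireLease(parent);
        }
        try {
            // ... perform the create against the store ...
        } finally {
            if (lease != null) {
                free(lease);
            }
        }
    }
}
```

The finally block is the important part: without it, a failed create would leave the parent directory leased until the lease's own timeout expired.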

/**
* Get Rest Operation for API <a href = https://learn.microsoft.com/en-us/rest/api/storageservices/put-blob></a>.
* Creates a file or directory (marker file) at the specified path.
@@ -1489,7 +1514,7 @@ public boolean isAtomicRenameKey(String key) {
*
* @throws AzureBlobFileSystemException server error or the path is renamePending json file and action is taken.
*/
public void takeGetPathStatusAtomicRenameKeyAction(final Path path,
private void takeGetPathStatusAtomicRenameKeyAction(final Path path,
final TracingContext tracingContext) throws AzureBlobFileSystemException {
if (path == null || path.isRoot() || !isAtomicRenameKey(path.toUri().getPath())) {
return;
@@ -1583,7 +1608,7 @@ public boolean takeListPathAtomicRenameKeyAction(final Path path,
}

@VisibleForTesting
RenameAtomicity getRedoRenameAtomicity(final Path path, int fileLen,
public RenameAtomicity getRedoRenameAtomicity(final Path path, int fileLen,
final TracingContext tracingContext) {
RenameAtomicity renameAtomicity = new RenameAtomicity(path,
fileLen,
File: AbfsClient.java
@@ -44,7 +44,6 @@

import org.apache.hadoop.classification.VisibleForTesting;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.azurebfs.constants.FSOperationType;
import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsInvalidChecksumException;
import org.apache.hadoop.fs.azurebfs.contracts.exceptions.AbfsDriverException;
@@ -423,22 +422,25 @@ public abstract AbfsRestOperation deleteFilesystem(TracingContext tracingContext
* <li>create new directory</li>
* </ol>
*
* @param path: path of the file / directory to be created / overwritten.
* @param isFile: defines if file or directory has to be created / overwritten.
* @param overwrite: defines if the file / directory to be overwritten.
* @param permissions: contains permission and umask
* @param isAppendBlob: defines if directory in the path is enabled for appendBlob
* @param eTag: required in case of overwrite of file / directory. Path would be
* @param path : path of the file / directory to be created / overwritten.
* @param isFile : defines if file or directory has to be created / overwritten.
* @param overwrite : defines if the file / directory to be overwritten.
* @param permissions : contains permission and umask
* @param isAppendBlob : defines if directory in the path is enabled for appendBlob
* @param eTag : required in case of overwrite of file / directory. Path would be
* overwritten only if the provided eTag is equal to the one present in backend for
* the path.
* @param contextEncryptionAdapter: object that contains the encryptionContext and
* @param contextEncryptionAdapter : object that contains the encryptionContext and
* encryptionKey created from the developer provided implementation of
* {@link org.apache.hadoop.fs.azurebfs.extensions.EncryptionContextProvider}
* @param tracingContext: Object of {@link org.apache.hadoop.fs.azurebfs.utils.TracingContext}
* {@link EncryptionContextProvider}
* @param isRecursiveCreate
* @param tracingContext : Object of {@link TracingContext}
* correlating to the current fs.create() request.
*
* @return object of {@link AbfsRestOperation} which contain all the information
* about the communication with the server. The information is in
* {@link AbfsRestOperation#getResult()}
*
* @throws AzureBlobFileSystemException throws back the exception it receives from the
* {@link AbfsRestOperation#execute(TracingContext)} method call.
*/
@@ -449,6 +451,7 @@ public abstract AbfsRestOperation createPath(final String path,
final boolean isAppendBlob,
final String eTag,
final ContextEncryptionAdapter contextEncryptionAdapter,
final boolean isRecursiveCreate,
final TracingContext tracingContext, boolean isNamespaceEnabled) throws AzureBlobFileSystemException;

public abstract AbfsRestOperation acquireLease(final String path,
File: AbfsDfsClient.java
@@ -40,7 +40,6 @@
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore;
import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
@@ -321,15 +320,19 @@ public AbfsRestOperation listPath(final String relativePath,
/**
* Get Rest Operation for API <a href = https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/create></a>.
* Create a path (file or directory) in the current filesystem.
*
Collaborator:
No changes in this file, additional changes can be removed

Collaborator Author:
taken.

* @param path to be created inside the filesystem.
* @param isFile to specify if the created path is file or directory.
* @param overwrite to specify if the path should be overwritten if it already exists.
* @param permissions to specify the permissions of the path.
* @param isAppendBlob to specify if the path to be created is an append blob.
* @param eTag to specify conditional headers.
* @param contextEncryptionAdapter to provide encryption context.
* @param isRecursiveCreate
* @param tracingContext
*
* @return executed rest operation containing response from server.
*
* @throws AzureBlobFileSystemException if rest operation fails.
*/
@Override
@@ -340,6 +343,7 @@ public AbfsRestOperation createPath(final String path,
final boolean isAppendBlob,
final String eTag,
final ContextEncryptionAdapter contextEncryptionAdapter,
final boolean isRecursiveCreate,
final TracingContext tracingContext, final boolean isNamespaceEnabled)
throws AzureBlobFileSystemException {
final List<AbfsHttpHeader> requestHeaders = createDefaultHeaders();
File: AbfsHttpOperation.java
@@ -61,7 +61,7 @@ public class AbfsHttpOperation implements AbfsPerfLoggable {

private static final int CLEAN_UP_BUFFER_SIZE = 64 * 1024;

private static final int ONE_THOUSAND = 1000;
public static final int ONE_THOUSAND = 1000;
private static final int ONE_MILLION = ONE_THOUSAND * ONE_THOUSAND;

private final String method;