DevExtreme React - Manage Azure Blob Storage with DevExtreme Components

IMPORTANT

The code snippets in this article and all associated repositories are for informational purposes only. Security should be your #1 priority when using Azure Blob Storage. You should consult a security expert or apply accepted best practices to maintain the highest security posture for your DevExtreme-powered web application. Remember, a secure web app demands careful consideration/understanding of potential attack vectors, the configuration of your development environment, and the security posture of third-party service providers.

Microsoft Azure Blob Storage offers unstructured data storage services. Azure stores "blobs" (data objects) inside "containers" (virtual directories). Each Azure storage account can include multiple associated containers. The biggest benefit of Azure Blob Storage in comparison to its direct competitor — Amazon S3 — is its integration with the Microsoft ecosystem. As this help topic illustrates, you can use Azure Blob Storage in your DevExtreme-powered web application to streamline appropriate CRUD operations (remember, the code snippets and repositories referenced herein require you to investigate/implement/apply appropriate security-related processes/procedures).

This help topic documents key considerations when connecting DevExtreme UI components to Azure Blob Storage. Specifically, it describes two binding options when using the DevExtreme FileManager component (server-side binding and client-side binding) and setup requirements for the DevExtreme FileUploader component.

FileManager application with client-side binding:

View on GitHub

FileManager application with server-side binding:

View on GitHub

FileUploader application:

View on GitHub

Configure Azure

You will need a Microsoft Account to set up Microsoft Azure services. At the time of publication, Microsoft offers a free trial for most Azure products.

Create a new Storage account

  1. Once you have a Microsoft account, simply log into your Azure control panel and open the "Create a new resource" page.
  2. Navigate to the "Storage" category and select the "Storage account" option.
  3. Follow the instructions to create your new Storage account.
    • Specify an all-lowercase Storage account name.
    • Select a convenient geographic location for your primary storage.
    • Azure Blob Storage offers two performance tiers: Standard and Premium. Standard is sufficient for most use cases, including use cases documented herein.
    • Select a redundancy policy for your Azure storage. Azure can store copies of your data across multiple servers within the same data center, across different data centers in the same region, or across two different regions. You cannot manually select the secondary storage region — for maximum efficiency, the system automatically determines its location based on your primary region.

Create a Blob Container

  1. Locate the newly created Storage account in your Azure control panel.
  2. Open the Storage account page and add a new Blob container.
  3. Set container privacy to "Private" to prohibit anonymous storage access.
  4. Click "Create" to complete the process.

Copy Access Keys

Microsoft generates two access keys for each storage account. Your application uses these keys to authenticate requests to Azure.

To copy the keys, click the "Access Keys" link in the "Security + Networking" section of your Storage account page.
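Azure SDK clients commonly consume the account name and access key in connection-string form. The helper below is an illustrative sketch (the function name is ours, not part of the sample repositories) that assembles the standard connection string:

```javascript
// Assemble a standard Azure Storage connection string from an account
// name and one of its two access keys.
// NOTE: never commit real keys to source control -- load them from
// configuration or environment variables instead.
function buildConnectionString(accountName, accountKey) {
  return [
    'DefaultEndpointsProtocol=https',
    `AccountName=${accountName}`,
    `AccountKey=${accountKey}`,
    'EndpointSuffix=core.windows.net',
  ].join(';');
}
```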

Configure CORS (Server-side)

CORS policies give your Storage Blob an added layer of protection from unauthorized requests. Azure Blob Storage does not apply a CORS policy out of the box. Use your Blob service properties to establish a CORS policy.

If you expect to explore the strategies outlined in this document within a development environment, we recommend you allow requests from all origins, but limit the range of acceptable request methods.
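As a sketch of that recommendation, the following shows what such a development-time rule could look like when applied through the JavaScript SDK (@azure/storage-blob). The field names follow that package's CorsRule shape; applying the rule requires valid account credentials, so the `setProperties` call is shown only as a comment:

```javascript
// Development-only CORS rule for Azure Blob Storage: allow any origin,
// but restrict methods to those the examples in this article use.
// Field names follow the @azure/storage-blob CorsRule shape.
function devCorsProperties() {
  return {
    cors: [{
      allowedOrigins: '*', // dev only -- lock this down in production
      allowedMethods: 'GET,HEAD,PUT,DELETE,OPTIONS',
      allowedHeaders: '*',
      exposedHeaders: '*',
      maxAgeInSeconds: 3600,
    }],
  };
}

// In a real setup you would apply the rule once, e.g.:
//   const client = BlobServiceClient.fromConnectionString(connectionString);
//   await client.setProperties(devCorsProperties());
```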

Back-End Application (Basic Configuration)

All examples in this article require both a server-side application and a client-side application.

All three back-end applications in these repositories use the .NET Framework to communicate with the Azure Blob Storage API. This article also uses .NET code examples to illustrate back-end development strategies.

Install the SDK

Microsoft maintains an Azure Storage SDK for the .NET Framework.

You can install this package via NuGet:

dotnet add package Azure.Storage.Blobs --version 12.19.1

Set up CORS

The back-end application needs a CORS setup of its own.

Program.cs
builder.Services.AddCors(options => options.AddPolicy("CorsPolicy", policy => {
    policy
        .AllowAnyMethod()
        .AllowAnyHeader()
        .SetIsOriginAllowed(_ => true)
        .AllowCredentials();
}));
app.UseCors("CorsPolicy");

Set up Authentication

Our code examples include a separate class (AzureStorageAccount) that handles Blob Storage credentials.

You can load your Azure config from a standalone JSON file:

config.json
Program.cs
"AzureStorage": {
    "AccountName": "",
    "AccessKey": "",
    "FileManagerBlobContainerName": "",
    "FileUploaderBlobContainerName": ""
}
AzureStorageAccount.Load(app.Configuration.GetSection("AzureStorage"));

FileManager

Full code examples:

Azure Blob Storage does not expose a traditional file system to the end user. When you request to view the list of blob entries, Azure returns XML:

<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ServiceEndpoint="https://aspxdemos.blob.core.windows.net/" ContainerName="testfilemanager">
    <Blobs>
        <Blob>
            <Name>1.jpg</Name>
            <Properties>
                <Creation-Time>Mon, 04 Dec 2023 07:24:58 GMT</Creation-Time>
                <Last-Modified>Wed, 06 Dec 2023 06:29:09 GMT</Last-Modified>
                <Etag>0x8DBF624A7904382</Etag>
                <Content-Length>3126</Content-Length>
                <Content-Type>application/octet-stream</Content-Type>
                <Content-Encoding />
                <Content-Language />
                <Content-CRC64 />
                <Content-MD5 />
                <Cache-Control />
                <Content-Disposition />
                <BlobType>BlockBlob</BlobType>
                <LeaseStatus>unlocked</LeaseStatus>
                <LeaseState>available</LeaseState>
                <ServerEncrypted>true</ServerEncrypted>
            </Properties>
            <OrMetadata />
        </Blob>
... (other Blob records)
    </Blobs>
    <NextMarker />
</EnumerationResults>

The FileManager component cannot work with Azure’s XML data. As such, you need to generate a FileManager-compatible file system representation from the XML.

You can implement this conversion on either the client or the server.
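To make the task concrete, here is a minimal, illustrative parser (not part of the sample repositories) that extracts each blob's name, modification date, and size from a listing like the one above. It uses naive regular expressions for brevity; production code should use a proper XML parser such as the browser's DOMParser:

```javascript
// Naive extraction of blob entries from an Azure "List Blobs" XML response.
// For demonstration only -- use a real XML parser in production code.
function parseBlobListXml(xml) {
  const entries = [];
  const blobRe = /<Blob>([\s\S]*?)<\/Blob>/g;
  let match;
  while ((match = blobRe.exec(xml)) !== null) {
    const inner = match[1];
    // Pull the text content of a single child element, or null if absent.
    const pick = (tag) => {
      const m = inner.match(new RegExp(`<${tag}>([^<]*)</${tag}>`));
      return m ? m[1] : null;
    };
    entries.push({
      name: pick('Name'),
      lastModified: pick('Last-Modified'),
      length: Number(pick('Content-Length')),
    });
  }
  return entries;
}
```

The resulting objects carry the `name`, `lastModified`, and `length` fields that the client-side parser shown in the next section consumes.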

Client-side Data Handling

First, create an API endpoint that handles FileManager requests and directs them to the Blob Storage:

[Route("api/file-manager-azure-access", Name = "FileManagerAzureAccessApi")]
public object Process(string command, string blobName = "", string blobName2 = "")
{
    try
    {
        return ProcessCommand(command, blobName, blobName2);
    }
    catch
    {
        return CreateErrorResult();
    }
}
object ProcessCommand(string command, string blobName, string blobName2)
{
    switch (command)
    {
        case "BlobList":
            return GetBlobList();
        case "CreateDirectory":
            if (!AllowCreate)
                return CreateErrorResult();
            return CreateDirectory(blobName);
            ...
    }
}

The ProcessCommand method defines the internal logic of the endpoint. It calls functions that interact with Blob entities:

object GetBlobList() {
    if (Container.CanGenerateSasUri) {
        var sasUri = Container.GenerateSasUri(BlobContainerSasPermissions.List, DateTimeOffset.UtcNow.AddHours(1));
        return CreateSuccessResult(sasUri);
    } else {
        return CreateErrorResult("BlobContainerClient cannot generate SasUri");
    }
}

In this example, the Container object is an instance of the BlobContainerClient class.
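With that SAS URI, the client can call Azure's List Blobs REST operation directly. The helper below is a sketch (the function name is ours): it appends the List Blobs query parameters to the SAS URI, which already carries its token in the query string:

```javascript
// Build a "List Blobs" request URL from a container SAS URI.
// The SAS URI already contains a query string (the SAS token),
// so the List Blobs parameters are appended with '&'.
function buildListBlobsUrl(sasUri, prefix = '') {
  let url = `${sasUri}&restype=container&comp=list`;
  if (prefix) {
    // Restrict the listing to one virtual directory.
    url += `&prefix=${encodeURIComponent(prefix)}`;
  }
  return url;
}

// Usage in the browser (requires a valid SAS URI):
//   const response = await fetch(buildListBlobsUrl(sasUri, 'folder1/'));
//   const xml = await response.text();
```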

The client-side parser function is as follows:

getDataObjectsFromEntries(entries, prefix) {
    const result = [];
    const directories = {};
    entries.forEach((entry) => {
        const restName = entry.name.substring(prefix.length);
        const parts = restName.split('/');
        if (parts.length === 1) {
            if (restName !== this.EMPTY_DIR_DUMMY_BLOB_NAME) {
                const obj = {
                    name: restName,
                    isDirectory: false,
                    dateModified: entry.lastModified,
                    size: entry.length,
                };
                result.push(obj);
            }
        } else {
            const dirName = parts[0];
            let directory = directories[dirName];
            if (!directory) {
                directory = {
                    name: dirName,
                    isDirectory: true,
                };
                directories[dirName] = directory;
                result.push(directory);
            }
            if (!directory.hasSubDirectories) {
                directory.hasSubDirectories = parts.length > 2;
            }
        }
    });
    result.sort(this.compareDataObjects);
    return result;
}

It outputs FileManager-compatible JSON:

[
    {
        "name": "folder12",
        "isDirectory": true,
        "hasSubDirectories": true
    },
    ... other records
    {
        "name": "testflowers_137KB.jpgd1578202-9d72-4806-9ac2-90551d598e70",
        "isDirectory": false,
        "dateModified": "2022-12-21T14:57:55.000Z",
        "size": 140998
    }
]

Server-side Data Handling

To process Blob data on the server, create a class that implements the interface requirements of the FileManager component...

public class AzureBlobFileProvider : IFileSystemItemLoader, IFileSystemItemEditor, IFileUploader, IFileContentProvider {
    ...
    public AzureBlobFileProvider(string storageAccountName, string storageAccessKey, string containerName, string tempDirPath) {
        ...
    }
...
}

...and create function implementations for the Azure API:

public IEnumerable<FileSystemItem> GetItems(FileSystemLoadItemOptions options) {
    var result = new List<FileSystemItem>();
    string dirKey = GetFileItemPath(options.Directory);
    var oneLevelItemsList = GetOneLevelHierarchyBlobs(dirKey);
    foreach(BlobHierarchyItem hierarchyItem in oneLevelItemsList) {
        var fileItem = GetFileSystemItem(hierarchyItem);
        if(fileItem != null) {
            result.Add(fileItem);
        }
    }
    return result.OrderByDescending(item => item.IsDirectory)
        .ThenBy(item => item.Name)
        .ToList();
}

The FileSystem endpoint will expose an instance of the AzureBlobFileProvider class to the client-side application:

[Route("api/file-manager-azure", Name = "FileManagerAzureProviderApi")]
public object FileSystem(FileSystemCommand command, string arguments) {
    var config = new FileSystemConfiguration {
        Request = Request,
        FileSystemProvider = AzureFileProvider,
        ...
        UploadConfiguration = new UploadConfiguration {
            MaxFileSize = 1048576
        },
        TempDirectory = WebHostEnvironment.ContentRootPath + "/UploadTemp"
    };
    var processor = new FileSystemCommandProcessor(config);
    var result = processor.Execute(command, arguments);
    return result.GetClientCommandResult();
}

This particular technique simplifies component setup. You can create a RemoteFileSystemProvider that uses the newly created file system endpoint:

const provider = new DevExpress.fileManagement.RemoteFileSystemProvider({
    endpointUrl: `${baseUrl}file-manager-azure`,
});

Assign this provider to the component's fileSystemProvider option:

jQuery
index.js
$('#file-manager').dxFileManager({
    name: 'fileManager',
    fileSystemProvider: provider,
    permissions: {
        download: true,
        ...
    },
    allowedFileExtensions: [],
});
Angular
app.component.html
<dx-file-manager
    id="file-manager"
    [fileSystemProvider]="fileSystemProvider"
    [allowedFileExtensions]="allowedFileExtensions"
>
    <dxo-permissions
        [download]="true"
        ...
    >
    </dxo-permissions>
</dx-file-manager>
Vue
App.vue
<DxFileManager
  id="file-manager"
  :file-system-provider="fileSystemProvider"
  :allowed-file-extensions="allowedFileExtensions"
>
    <DxPermissions
        :download="true"
        ...
    />
</DxFileManager>
React
App.js
<FileManager id="file-manager" fileSystemProvider={fileSystemProvider} allowedFileExtensions={allowedFileExtensions}>
    <Permissions
        download={true}
        ...
    >
    </Permissions>
</FileManager>

FileUploader

Full code example:

View on GitHub

The DevExtreme FileUploader component can upload blobs to Azure Storage using a multi-part upload technique.

  1. Preparation. Generate a SAS (Shared Access Signature) token for the Azure Blob Storage account or container. Azure requires this token to grant data upload authorization. Add a server-side method that generates the necessary token:

    object UploadBlob(string blobName) {
        if (blobName.Contains("/"))
            return CreateErrorResult("Invalid blob name.");
    
        string prefix = Guid.NewGuid().ToString("N");
        string fullBlobName = $"{prefix}_{blobName}";
        var blob = Container.GetBlockBlobClient(fullBlobName);
    
        if (blob.Exists() && blob.GetProperties().Value.ContentLength > MaxBlobSize) {
            return CreateErrorResult();
        }
        if (blob.CanGenerateSasUri) {
            var sasUri = blob.GenerateSasUri(BlobSasPermissions.Write, DateTimeOffset.UtcNow.AddHours(1));
            return CreateSuccessResult(sasUri.AbsoluteUri);
        } else {
            return CreateErrorResult("BlobClient cannot generate SasUri");
        }
    }
  2. Chunk upload. Upload chunks one by one. Azure assigns a unique block ID for each chunk. The ID is stored inside the ETag header.

  3. Completion. To complete the upload, send all ETag headers collected during step 2 to the server. This action stitches chunks together.

    The following client-side function uploads chunks to Azure, saves the block ID, and reports block ID data back to Azure alongside the last chunk:

    function uploadChunk(file, uploadInfo) {
        let promise = null;

        if (uploadInfo.chunkIndex === 0) {
            promise = gateway.getUploadAccessUrl(file.name).then((accessUrl) => {
                uploadInfo.customData.accessUrl = accessUrl.url1;
            });
        } else {
            promise = Promise.resolve();
        }

        promise = promise.then(() => gateway.putBlock(
            uploadInfo.customData.accessUrl,
            uploadInfo.chunkIndex,
            uploadInfo.chunkBlob,
        ));

        if (uploadInfo.chunkIndex === uploadInfo.chunkCount - 1) {
            promise = promise.then(() => gateway.putBlockList(
                uploadInfo.customData.accessUrl,
                uploadInfo.chunkCount,
            ));
        }
        return promise;
    }
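The gateway.putBlock and gateway.putBlockList helpers belong to the sample repository. The sketch below (function names are ours) illustrates the two Azure REST conventions such helpers typically rely on: each chunk is sent as a Put Block request whose Base64 block ID is derived from the chunk index (a common convention, assumed here), and a final Put Block List request commits the blocks in order. Azure requires every block ID within one blob to have the same encoded length, hence the zero-padding:

```javascript
// Build the Base64 block ID for chunk `index`. The index is zero-padded
// before encoding so that all IDs share the same encoded length.
function makeBlockId(index) {
  const padded = String(index).padStart(10, '0');
  return typeof btoa === 'function'
    ? btoa(padded)                                     // browser
    : Buffer.from(padded, 'ascii').toString('base64'); // Node.js
}

// Build the XML body for the Put Block List request that commits
// `chunkCount` uploaded blocks in order.
function makeBlockListBody(chunkCount) {
  const blocks = [];
  for (let i = 0; i < chunkCount; i += 1) {
    blocks.push(`<Latest>${makeBlockId(i)}</Latest>`);
  }
  return `<?xml version="1.0" encoding="utf-8"?><BlockList>${blocks.join('')}</BlockList>`;
}

// Usage with the SAS URL obtained in step 1 (browser):
//   await fetch(`${accessUrl}&comp=block&blockid=${encodeURIComponent(makeBlockId(i))}`,
//               { method: 'PUT', body: chunkBlob });
//   await fetch(`${accessUrl}&comp=blocklist`,
//               { method: 'PUT', body: makeBlockListBody(chunkCount) });
```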