DevExtreme Angular - Manage Amazon S3 Storage with DevExtreme Components

IMPORTANT

The code snippets in this article and all associated repositories are for informational purposes only. Security should be your #1 priority when using Amazon S3 storage. Consult a security expert or apply accepted best practices to maintain the highest security posture for your DevExtreme-powered web application. Remember, a secure web app demands careful consideration of potential attack vectors, the configuration of your development environment, and the security posture of third-party service providers.

Amazon's Simple Storage Service (S3) plays a critical role in today’s enterprise. It offers highly available, trusted access to large databases and file archives. You can integrate S3 storage into your DevExtreme-powered web application to simplify CRUD operations against user files.

This help topic documents key considerations when connecting DevExtreme UI components to Amazon S3 storage. Specifically, it describes a back-end application designed to communicate with AWS, and two client-side applications — one with the DevExtreme FileManager component, and the other with our FileUploader.

FileManager application:

View on GitHub

FileUploader application:

View on GitHub

Table of Contents

  1. Amazon Setup

    You will need an Amazon Web Services account with an active subscription to set up an S3 bucket. The first section of this tutorial describes how to use the AWS Management Console to configure S3 resources.

  2. Back-end application walkthrough

    In this section, we’ll set up a back-end .NET application that uses the Amazon SDK to query S3 APIs. You can review its source code on GitHub.

  3. Configure multi-part upload (client-side)

    This section configures your client-side application to support multi-part AWS uploads.

  4. File Manager

    This section creates a client-side application using the DevExtreme FileManager component (bound to S3). You can explore the complete project on GitHub. The repository includes applications for all DevExtreme-supported frameworks: React, Angular, Vue, jQuery, and ASP.NET Core.

  5. File Uploader

    This section creates a client-side application using the DevExtreme FileUploader component. You can explore the complete project on GitHub. The repository includes applications for all DevExtreme-supported frameworks: React, Angular, Vue, jQuery, and ASP.NET Core.

Amazon Setup

Create a New Bucket

Buckets are virtual storage containers. Each bucket can store an unlimited number of objects, each up to 5 TB in size. An object can represent a file or another data entity.

Follow the Amazon tutorial to create a new S3 bucket.

  • Select a DNS-compliant bucket name between 3 and 63 characters in length. The name can include numbers, hyphens, and periods. Do not use uppercase letters or underscores.
  • Select a convenient geographic location for your bucket. Buckets do not offer Content Delivery Network capabilities — each bucket stores files in a single location.
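
The naming rules above can be approximated with a simple check. The following sketch is illustrative only and omits some of Amazon's additional restrictions (for example, names formatted like IP addresses are also rejected):

```javascript
// Simplified approximation of S3 bucket naming rules:
// 3-63 characters; lowercase letters, digits, hyphens, and periods;
// must begin and end with a letter or digit.
function isLikelyValidBucketName(name) {
  return /^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$/.test(name);
}

isLikelyValidBucketName('my-app-uploads.2024'); // true
isLikelyValidBucketName('My_Uploads');          // false (uppercase letter, underscore)
```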
NOTE
Additional video tutorials: @TinyTechnicalTutorials, @CodeWithOtabek

Set Up User Permissions

For the bucket to be secure, you need to properly set up access permissions. Amazon manages access through IAM (Identity and Access Management). For each user, you can set granular rules that limit access to storage buckets and objects. Review the following Amazon AWS article if you are new to IAM: Controlling access to a bucket with user policies.

The following Amazon AWS article explains how to create a new IAM user for your application: Creating an IAM user in your AWS account. Amazon generates a unique secret key for each IAM user. The secret key and access ID are an integral part of the authentication process.

Use the AWS Management Console to create a new access policy with full access to your S3 bucket. Once complete, attach this policy to the user you created earlier.
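
For reference, a policy that grants full access to a single bucket typically looks like the following (the bucket name below is a placeholder):

```json
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::your-bucket-name",
            "arn:aws:s3:::your-bucket-name/*"
        ]
    }]
}
```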

Configure CORS (AWS Management Console)

Last but not least, you need to configure a set of CORS (Cross-Origin Resource Sharing) policies for your bucket. To protect your bucket from unauthorized access, CORS policies limit the range of acceptable request origins and HTTP headers. Amazon references the bucket's CORS policies when it receives incoming requests.

Since a DevExtreme-powered application will not run on the same server as your bucket, you need to explicitly allow requests from third-party origins. The configuration below opens up a bucket to requests from all origins and limits the set of allowed HTTP methods:

[{
    "AllowedHeaders": ["*"], 
    "AllowedMethods": ["GET", "PUT", "POST"], 
    "AllowedOrigins": ["*"],
    "ExposeHeaders": ["etag" ... other tags ]
}]

Back-End Application Walkthrough

DevExtreme components cannot interact with the S3 API directly. You need to create a back-end application that uses the AWS SDK to communicate with Amazon servers.

For the purposes of this tutorial, we created a .NET application. You can view its source code on GitHub. Both the FileManager repository and the FileUploader repository include the source code for the back-end application (in the Amazon_Backend folder).

Install the SDK

Amazon maintains AWS Software Development Kits for popular languages and frameworks. SDKs include the tools necessary to handle AWS authentication, sign requests, and parse S3 server responses.

Since we want to create a .NET application, we need to install the .NET SDK package from NuGet:

dotnet add package AWSSDK.S3 --version 3.7.308.2

Configure CORS (server-side)

The back-end application needs a CORS setup of its own. You need to expose the following headers:

  • The ETag header helps the application handle multi-part uploads and avoid mid-air collisions.
  • The Content-Type header is necessary to handle file downloads.

Example:

program.cs
public static void Main(string[] args) {
    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddCors(options => options.AddPolicy("CorsPolicy", policy => {
        policy
            .AllowAnyMethod()
            .AllowAnyHeader()
            .SetIsOriginAllowed(_ => true)
            .AllowCredentials()
            // Expose the headers required for downloads and multi-part uploads:
            .WithExposedHeaders(new string[] { "Content-Disposition", "ETag" });
    }));
    var app = builder.Build();
    app.UseCors("CorsPolicy");
    app.Run();
}

Signature Configuration

For security reasons, most AWS regions require you to activate the UseSignatureVersion4 configuration option. This option ensures that your application uses the fourth version of Amazon's authentication signature standard to sign S3 requests. Unlike earlier standards, version 4 signatures can cover the request payload and additional request headers.

Add the following line to your program.cs file:

program.cs
AWSConfigsS3.UseSignatureVersion4 = true; 

Configure Object Access

S3 buckets do not expose a traditional file system to the end user. They store data as key-object pairs. To simulate a file hierarchy, you can use keys to store file paths and objects to store file content.

AmazonS3Provider.cs
public async Task<IEnumerable<FileSystemItem>> GetItemsAsync(string? path)
{
    List<FileSystemItem> result = new List<FileSystemItem>();
    ListObjectsV2Request request = new ListObjectsV2Request {
        ...
    };
    ListObjectsV2Response response;
    do {
        response = await Client.ListObjectsV2Async(request);
        var directories = await GetDirectoriesFromCommonPrefixes(response.CommonPrefixes);
        ...
        request.ContinuationToken = response.NextContinuationToken;
    } while (response.IsTruncated);
    return result;
}
...
public async Task<List<FileSystemItem>> GetDirectoriesFromCommonPrefixes(List<string> prefixes) {
    var result = new List<FileSystemItem>();
    foreach (var item in prefixes) {
        result.Add(new FileSystemItem() {
            Name = GetDirectoryNameFromKey(item),
            Key = item,
            Size = 0,
            IsDirectory = true,
            DateModified = DateTime.UtcNow,
            HasSubDirectories = await HasDirectorySubDirectoriesAsync(item),
        });
    }
    return result;
}

Review our AmazonS3Provider.cs file for guidance.

Enable Pre-Signed URLs

App performance may suffer if every request includes an authentication signature. To address this issue, S3 can generate pre-signed URLs. You can request authorization for a file action and receive a time-limited URL that does not require an authentication signature. The default validity period for the URL is 15 minutes. Attempts to access an expired URL will fail. You can use the Expires property to modify the duration of the validity period.

AmazonS3Provider.cs
public async Task<string> GetPresignedUrlAsync(string uploadId, string key, int partNumber) {
    GetPreSignedUrlRequest request = new GetPreSignedUrlRequest {
        BucketName = BucketName,
        Key = key,
        Verb = HttpVerb.PUT,
        UploadId = uploadId,
        /* Uncomment the next line to set a 5-minute URL validity period:
        Expires = DateTime.UtcNow.AddSeconds(300), */
        PartNumber = partNumber + 1
    };

    return await Client.GetPreSignedURLAsync(request);
}

View the Code Examples section of the Amazon S3 user guide for additional information on AWS APIs.

Configure Multi-Part Upload (Client-Side)

S3 supports multiple file upload techniques: single-try uploads, multipart uploads, and resumable uploads. For the purposes of this tutorial, we'll implement object access methods to facilitate multipart uploads.

Multipart uploads involve three steps:

  1. Initiation. In this step, you contact the server to establish a new upload attempt. The server returns a unique ID for your upload.
  2. Chunk upload. As you upload the file part by part, you need to label each part with an upload ID. The server returns a unique ETag for each chunk it receives.
  3. Completion. To complete the upload process, send all the ETag headers you collected in step 2 and the upload ID to the server.

Example:

async uploadFileChunk(fileData, uploadInfo, destinationDirectory) {
  if (uploadInfo.chunkIndex === 0) {
    await this.gateway.initUpload(fileData, destinationDirectory); // Initiate the upload before the first chunk.
  }

  await this.gateway.uploadPart(fileData, uploadInfo, destinationDirectory);

  if (uploadInfo.chunkCount === uploadInfo.chunkIndex + 1) {
    await this.gateway.completeUpload(fileData, uploadInfo, destinationDirectory); // Complete the upload after the last chunk.
  }
}

You can use component options to set the maximum chunk size.

The DevExtreme FileManager component includes an upload.chunkSize property. Similarly, the DevExtreme FileUploader component includes a chunkSize property.

If a file does not exceed the maximum chunk size, the upload consists of a single chunk.
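
For example, with the 5 MB chunk size used later in this article, the number of chunks for a given file can be estimated as follows (getChunkCount is an illustrative helper, not part of the DevExtreme API):

```javascript
const CHUNK_SIZE = 5242880; // 5 MB, matches the chunkSize values used below

// Illustrative helper: estimates how many parts a multi-part upload produces.
function getChunkCount(fileSize) {
  return Math.max(1, Math.ceil(fileSize / CHUNK_SIZE));
}

getChunkCount(3 * 1024 * 1024);  // 1 — smaller than one chunk, single-chunk upload
getChunkCount(12 * 1024 * 1024); // 3
```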

Our multi-part upload implementation is included in the linked repositories.

Common Techniques

Two classes handle communication between the component and your bucket. The AmazonFileSystem class passes data from the component to AmazonGateway. The AmazonGateway class includes all methods that query Amazon APIs. This separation of duties makes the code simpler and easier to maintain.

class AmazonFileSystem {
  gateway = null;

  constructor(baseUrl, onRequestExecuted) {
    this.gateway = new AmazonGateway(baseUrl, onRequestExecuted);
  }

  getItems(path) {
    return this.gateway.getItems(path);
  }

  createDirectory(key, name) {
    return this.gateway.createDirectory(key, name);
  }
...
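
The AmazonGateway counterpart can be sketched as follows. The endpoint naming and URL scheme here are assumptions for illustration; see the linked repositories for the actual implementation:

```javascript
class AmazonGateway {
  constructor(baseUrl, onRequestExecuted) {
    this.baseUrl = baseUrl;
    this.onRequestExecuted = onRequestExecuted;
    this.defaultHeaders = { 'Content-Type': 'application/json' };
  }

  // Builds a back-end URL such as <baseUrl>/getItems?path=docs%2F
  getRequestUrl(endpoint, queryParams = {}) {
    const query = new URLSearchParams(queryParams).toString();
    return query ? `${this.baseUrl}/${endpoint}?${query}` : `${this.baseUrl}/${endpoint}`;
  }

  async makeRequest(endpoint, queryParams, requestOptions) {
    const url = this.getRequestUrl(endpoint, queryParams);
    const response = await fetch(url, requestOptions);
    // Notify subscribers (e.g. a request log) about the completed request.
    this.onRequestExecuted?.({ method: requestOptions.method, url });
    return response;
  }

  getItems(path) {
    return this.makeRequest('getItems', { path }, { method: 'GET', headers: this.defaultHeaders });
  }
}
```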

The FileManager component allows users to download multiple files simultaneously. You can bundle these files into an archive on the server:

AmazonS3Provider.cs
public async Task<FileContentResult> DownloadItemsAsync(string[] keys) {
    if (keys == null || keys.Length == 0)
        return null;

    if (keys.Length > 1) {
        return await DownloadFilesAsArchive(keys);
    } 

    return await DownloadSingleFile(keys[0]);
}

public async Task<FileContentResult> DownloadFilesAsArchive(string[] keys) {
    using (var memoryStream = new MemoryStream()) {
        using (var zipArchive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true)) {
            foreach (var file in keys) {
                ...

Make sure to modify your client-side code accordingly:

amazon.filesystem.js
async downloadItems(items) {
    const keys = items.map((x) => x.key);
    const fileName = keys.length > 1 ? 'archive.zip' : this.getFileNameFromKey(keys[0]);
    ...

If you need to abort uploads midway, you can call Amazon's AbortMultipartUpload API. This capability is critical for graceful handling of upload interruptions. Create a function that sends an upload termination request to the Amazon server:

async abortFileUpload(fileData, uploadInfo, destinationDirectory) {
  const key = `${destinationDirectory?.key ?? ''}${fileData.name}`;
  const uploadId = this.getUploadId(fileData.name);
  const params = { uploadId, key };
  const requestOptions = {
    method: 'POST',
    headers: this.defaultHeaders,
  };
  return this.makeRequest('abortUpload', params, requestOptions);
}

Pass this function to the onUploadAborted event of the FileUploader component or use it to construct a custom file system provider for the FileManager.

FileManager

The DevExtreme FileManager component expects the data source to have a traditional file system structure. We can use the CustomFileSystemProvider object to simulate such a file system with S3 data.

const provider = new DevExpress.fileManagement.CustomFileSystemProvider({
  getItems,
  createDirectory,
  renameItem,
  deleteItem,
  copyItem,
  moveItem,
  uploadFileChunk,
  downloadItems,
  abortFileUpload,
});

In this particular scenario, final component configuration is straightforward:

jQuery
index.js
$('#file-manager').dxFileManager({
  fileSystemProvider: provider,

  allowedFileExtensions: [],
  upload: {
    chunkSize: 5242880,
  },
  permissions: {
    download: true,
    create: true,
    copy: true,
    move: true,
    delete: true,
    rename: true,
    upload: true,
  },
});
Angular
app.component.html
<dx-file-manager
    id="file-manager"
    [fileSystemProvider]="fileSystemProvider"
    [allowedFileExtensions]="allowedFileExtensions"
>
    <dxo-upload [chunkSize]="5242880"></dxo-upload>
    <dxo-permissions
        [create]="true"
        [copy]="true"
        [move]="true"
        [delete]="true"
        [rename]="true"
        [upload]="true"
        [download]="true"
    >
    </dxo-permissions>
</dx-file-manager>
Vue
App.vue
<DxFileManager
  :file-system-provider="fileSystemProvider"
  :allowed-file-extensions="allowedFileExtensions"
>
  <DxUpload :chunk-size="5242880"/>
  <DxPermissions
    :create="true"
    :copy="true"
    :move="true"
    :delete="true"
    :rename="true"
    :upload="true"
    :download="true"
  />
</DxFileManager>
React
App.js
<FileManager
    id="file-manager"
    fileSystemProvider={fileSystemProvider}
    allowedFileExtensions={allowedFileExtensions}
  >
    <Upload chunkSize={5242880}></Upload>
    <Permissions
      create={true}
      copy={true}
      move={true}
      delete={true}
      rename={true}
      upload={true}
      download={true}>
    </Permissions>
  </FileManager>

FileUploader

The FileUploader app allows users to download a file immediately following upload. To implement this option, we need to retrieve a pre-signed download URL from Amazon:

async function onUploaded(e) {
  const url = await amazon.getPresignedDownloadUrl(e.file.name);
  showPresignedUrl(url, e.file.name);
}
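
showPresignedUrl is an application-specific helper. A minimal hypothetical implementation could render a download link built from the pre-signed URL (the helper names and markup below are assumptions, not part of the DevExtreme API):

```javascript
// Hypothetical helper: produces anchor markup for a pre-signed download URL.
// A production app should HTML-escape fileName and the URL.
function buildDownloadLink(url, fileName) {
  return `<a href="${url}" download="${fileName}">Download ${fileName}</a>`;
}

function showPresignedUrl(url, fileName) {
  document.getElementById('download-panel').innerHTML = buildDownloadLink(url, fileName);
}
```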