Embedding Private Media Files Securely in a React Frontend with Amazon S3 and AWS Lambda

Amazon Simple Storage Service (S3) is a low-cost service for storing and serving unstructured data, which makes it perfect for hosting any media that will be displayed or referenced in your React frontend (PDFs, images, etc.). However, if that media is read-restricted to authenticated users of your application, embedding it gets tricky. In this post, I'll build on my previous post about externally authenticated access to private S3 objects by outlining a generalized solution for embedding private S3 content in your React components.

Object Retrieval Pattern

We'll be using presigned S3 object URLs to fetch our private S3 data; see my previous post linked above to learn more about this design. Below is a diagram visualizing the data flow for authorizing the React frontend to fetch an image file using presigned URLs.

Initial Implementation

Let's set up a simple component with a single <img> element that we'd like to link to our desired resource, with the resource URL passed as a prop to the component.

const S3Image = ({ objectUrl }) => {
    return <img src={objectUrl} alt="Image" />;
};

export default S3Image;

Instead of passing a URL directly, we'll pass the S3 object path (key) as a prop and have the component fetch a presigned URL for it. We need to ensure that the URL embedded in the <img> element stays up to date with the object specified in the props, so we'll set up some local state to store the signed URL and a useEffect to fetch it whenever the objectPath prop changes.

import React, { useState, useEffect } from 'react';

const S3Image = ({ objectPath }) => {
    const [signedUrl, setSignedUrl] = useState('');

    useEffect(() => {
        // fetch a presigned URL for the object whenever objectPath changes
        const fetchSignedUrl = async () => {
            try {
                const response = await fetch('/get-signed-url', {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify({ objectPath }),
                });
                const data = await response.json();
                setSignedUrl(data.url);
            } catch (error) {
                console.error('Error fetching signed URL:', error);
            }
        };

        fetchSignedUrl();
    }, [objectPath]);

    return <img src={signedUrl} alt="Image" />;
};

export default S3Image;

For simplicity, we'll handle the signed URL creation in a Lambda function running Node.js. This assumes that an API Gateway is set up to proxy the HTTP request from the React app to the Lambda function, but this pattern could be implemented with other AWS compute resources or connection methods if more applicable. To see more on how to implement this API Gateway + Lambda stack, see my blog post on Cloud APIs in minutes with Serverless Framework and AWS Lambda.

We'll use the AWS SDK to generate a presigned URL for this image file.

const AWS = require('aws-sdk');

AWS.config.update({
    accessKeyId: '<ACCESS_KEY_ID>',         // IAM user credentials that have access to the S3 bucket
    secretAccessKey: '<SECRET_ACCESS_KEY>', // IAM user secret key
    signatureVersion: 'v4',
    region: '<REGION>',                     // enter your region here
});

const s3 = new AWS.S3();

module.exports.handler = async (event) => {
    try {
        // get request parameters
        const body = JSON.parse(event.body);

        if (!(body.objectPath?.length)) {
            return { statusCode: 400, body: JSON.stringify({ error: 'Invalid object path' }) };
        }

        const url = s3.getSignedUrl('getObject', {
            Bucket: '<BUCKET_NAME>',   // specify the bucket we're pulling from
            Key: body.objectPath,
            Expires: 900,              // lifetime of the URL, in seconds (15 minutes here, as an example)
        });

        return { statusCode: 200, body: JSON.stringify({ url }) };
    } catch (error) {
        return { statusCode: 500, body: JSON.stringify(error) };
    }
};

And there it is! With this implementation alone, you'll be able to display that image in your app, even though it's not publicly accessible in S3.
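
For example, assuming your bucket contains an object under a key like photos/example.jpg (a hypothetical path), the component drops into any page that needs it:

import S3Image from './S3Image';

// "photos/example.jpg" is a hypothetical object key in the private bucket
const ProfilePage = () => <S3Image objectPath="photos/example.jpg" />;

export default ProfilePage;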

Caveat - Signed URL Expiration

There's one large issue with this implementation: the presigned URLs expire! Each presigned URL has a configurable lifespan, set when the URL is generated, and once it elapses, requests using that URL will fail. This prevents persistent public access to the object. Remember, anyone with the presigned URL can use it, as long as it has not expired!
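
For a sense of how that lifetime travels with the URL, a SigV4 presigned GET URL looks roughly like the following, with the lifespan (X-Amz-Expires, in seconds) baked into the query string; the bucket, key, and elided values below are placeholders:

https://my-bucket.s3.us-east-1.amazonaws.com/photos/example.jpg
    ?X-Amz-Algorithm=AWS4-HMAC-SHA256
    &X-Amz-Credential=...
    &X-Amz-Date=20240101T000000Z
    &X-Amz-Expires=900
    &X-Amz-SignedHeaders=host
    &X-Amz-Signature=...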

This will not be an issue for any media fetched and displayed in the browser on page load (like our image component above), but what if you want to provide a download link to the image file? The user may linger on the page after it's initially loaded and only request the file from S3 by clicking the link after the signature has expired. This leads to a nasty error screen and is probably not what your users want to see.

So, we don't want to let presigned URLs embedded in the page expire, but we also don't want to hand out URLs that never time out (and with SigV4, presigned URLs are capped at seven days anyway). To get around this, we'll add logic to periodically regenerate the presigned URL for the given asset. Below is our component, updated to re-fetch the URL periodically (using the useInterval custom hook from the use-interval package).

import React, { useState, useCallback } from 'react';
import useInterval from 'use-interval';

const S3Image = ({ objectPath }) => {
    const [signedUrl, setSignedUrl] = useState('');

    // define the function used to fetch the signed URL
    const getSignedUrl = useCallback(async () => {
        try {
            const response = await fetch('/get-signed-url', {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify({ objectPath }),
            });
            const data = await response.json();
            setSignedUrl(data.url);
        } catch (error) {
            console.error('Error fetching signed URL:', error);
        }
    }, [objectPath]);

    // call the function defined above every second (and immediately on mount)
    useInterval(getSignedUrl, 1000, true);

    return <img src={signedUrl} alt="Image" />;
};

export default S3Image;

This implementation can be further ruggedized by modifying our Lambda function to return the URL's lifetime along with the URL itself, so that the client knows exactly when it will need to regenerate it. However, generating a presigned URL is not a time- or compute-intensive operation, so simply regenerating more often than the presigned URL timeout will ever be set to is most likely a reliable solution.
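
As a minimal sketch of that hardening (assuming we choose to add an expiresIn field, in seconds, to the Lambda response), the success response might become:

// in the Lambda handler, after generating the URL
const expiresIn = 900; // should match the Expires value passed to getSignedUrl

return {
    statusCode: 200,
    body: JSON.stringify({ url, expiresIn }),
};

The client could then size its refresh interval from expiresIn (for example, refreshing shortly before expiry) instead of hard-coding one second.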

Abstracting as a Custom Hook

For a one-off use of this procedure, this implementation is perfectly suitable. However, this logic would muddy your state management if duplicated across multiple locations or more complex components. To avoid this, let's abstract it into a custom hook that provides the presigned URL for the S3 object path passed to it.

import { useState, useCallback } from 'react';
import useInterval from 'use-interval';

export const useSignedUrl = (objectPath) => {
    const [signedUrl, setSignedUrl] = useState('');

    // fetch a fresh presigned URL for the given object path
    const getSignedUrl = useCallback(async () => {
        try {
            const response = await fetch('/get-signed-url', {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify({ objectPath }),
            });
            const data = await response.json();
            setSignedUrl(data.url);
        } catch (error) {
            console.error('Error fetching signed URL:', error);
        }
    }, [objectPath]);

    useInterval(getSignedUrl, 1000, true);

    return signedUrl;
};

This simplifies our component definition significantly.

import { useSignedUrl } from './useSignedUrl';

const S3Image = ({ objectPath }) => {
    const signedUrl = useSignedUrl(objectPath);
    return <img src={signedUrl} alt="Image" />;
};

export default S3Image;

With this, we have a clean, generalized solution for embedding restricted S3 content in our components.
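
The same hook also covers the download-link scenario from the caveat above. As a sketch, a hypothetical S3DownloadLink component could reuse useSignedUrl so that a late click still carries a valid signature:

import { useSignedUrl } from './useSignedUrl';

// hypothetical component for linking to a private file (e.g. a PDF) in S3
const S3DownloadLink = ({ objectPath, label }) => {
    const signedUrl = useSignedUrl(objectPath);

    // the href is refreshed on an interval, so it stays valid while the user lingers on the page
    return <a href={signedUrl} download>{label}</a>;
};

export default S3DownloadLink;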

Learn more about DMC's Application Development services and contact us for your next project.
