Amazon S3¶
Usage¶
There is only one supported backend for interacting with Amazon’s S3, S3Boto3Storage, based on the boto3 library. The minimum required version of boto3 is 1.4.4, although we always recommend the most recent.
Settings¶
To upload your media files to S3 set:

# django < 4.2
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

# django >= 4.2
STORAGES = {"default": {"BACKEND": "storages.backends.s3boto3.S3Boto3Storage"}}
To allow django-admin collectstatic to automatically put your static files in your bucket set the following in your settings.py:

# django < 4.2
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3StaticStorage'

# django >= 4.2
STORAGES = {"staticfiles": {"BACKEND": "storages.backends.s3boto3.S3StaticStorage"}}

If you want to use something like ManifestStaticFilesStorage then you must instead use:

# django < 4.2
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3ManifestStaticStorage'

# django >= 4.2
STORAGES = {"staticfiles": {"BACKEND": "storages.backends.s3boto3.S3ManifestStaticStorage"}}
There are several different methods for specifying the AWS credentials used to create the S3 client. In the order that S3Boto3Storage searches for them:

1. The AWS_S3_SESSION_PROFILE setting
2. The AWS_S3_ACCESS_KEY_ID and AWS_S3_SECRET_ACCESS_KEY settings
3. The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY settings
4. The environment variables AWS_S3_ACCESS_KEY_ID and AWS_S3_SECRET_ACCESS_KEY
5. The environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
6. Boto3’s default session
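For example, to configure static credentials directly in your settings (all values below are placeholders; prefer a profile or Boto3’s default credential chain in production):

# settings.py -- placeholder values, shown only to illustrate the settings
AWS_ACCESS_KEY_ID = 'AKIA...'               # your access key id
AWS_SECRET_ACCESS_KEY = 'your-secret-key'   # your secret access key
AWS_STORAGE_BUCKET_NAME = 'my-bucket'       # your bucket name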
AWS_S3_SESSION_PROFILE

The AWS profile to use instead of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. All configuration information other than the key id and secret key is ignored in favor of the other settings specified below.

Note

If this is set, then it is a configuration error to also set AWS_S3_ACCESS_KEY_ID and AWS_S3_SECRET_ACCESS_KEY; AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are ignored.
AWS_S3_ACCESS_KEY_ID or AWS_ACCESS_KEY_ID

Your Amazon Web Services access key, as a string.

AWS_S3_SECRET_ACCESS_KEY or AWS_SECRET_ACCESS_KEY

Your Amazon Web Services secret access key, as a string.

AWS_STORAGE_BUCKET_NAME

Your Amazon Web Services storage bucket name, as a string.
AWS_S3_OBJECT_PARAMETERS (optional; default is {})

Use this to set parameters on all objects. To set these on a per-object basis, subclass the backend and override S3Boto3Storage.get_object_parameters, as in the sketch below.

To view a full list of possible parameters (there are many) see the Boto3 docs for uploading files; an incomplete list includes: CacheControl, SSEKMSKeyId, StorageClass, Tagging and Metadata.
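For instance, a minimal sketch of a per-object override; the subclass name, file-extension check, and cache lifetime are illustrative assumptions, not library defaults:

from storages.backends.s3boto3 import S3Boto3Storage

class CachedMediaStorage(S3Boto3Storage):  # hypothetical subclass
    def get_object_parameters(self, name):
        # Start from AWS_S3_OBJECT_PARAMETERS, then adjust per object.
        params = super().get_object_parameters(name)
        if name.endswith('.pdf'):
            params['CacheControl'] = 'max-age=86400'  # assumed lifetime
        return params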
AWS_DEFAULT_ACL (optional; default is None, which means the file will be private per Amazon’s default)

Use this to set an ACL on your file such as public-read. If not set the file will be private per Amazon’s default. If the ACL parameter is set in AWS_S3_OBJECT_PARAMETERS, then this setting is ignored.

Options such as public-read and private come from the list of canned ACLs.
AWS_QUERYSTRING_AUTH (optional; default is True)

Set AWS_QUERYSTRING_AUTH to False to remove query parameter authentication from generated URLs. This can be useful if your S3 buckets are public.

AWS_S3_MAX_MEMORY_SIZE (optional; default is 0 - do not roll over)

The maximum amount of memory (in bytes) a file can take up before being rolled over into a temporary file on disk.

AWS_QUERYSTRING_EXPIRE (optional; default is 3600 seconds)

The number of seconds that a generated URL is valid for.
AWS_S3_URL_PROTOCOL (optional; default is https:)

The protocol to use when constructing a custom domain; AWS_S3_CUSTOM_DOMAIN must be set for this to have any effect.

AWS_S3_FILE_OVERWRITE (optional; default is True)

By default files with the same name will overwrite each other. Set this to False to have extra characters appended.

AWS_LOCATION (optional; default is '')

A path prefix that will be prepended to all uploads.
AWS_IS_GZIPPED (optional; default is False)

Whether or not to enable gzipping of content types specified by GZIP_CONTENT_TYPES.

GZIP_CONTENT_TYPES (optional; default is text/css, text/javascript, application/javascript, application/x-javascript, image/svg+xml)

When AWS_IS_GZIPPED is set to True, the content types which will be gzipped.

AWS_S3_REGION_NAME (optional; default is None)

Name of the AWS S3 region to use (e.g. eu-west-1).
AWS_S3_USE_SSL (optional; default is True)

Whether or not to use SSL when connecting to S3; this is passed to the boto3 session resource constructor.

AWS_S3_VERIFY (optional; default is None)

Whether or not to verify the connection to S3. Can be set to False to not verify certificates or to a path to a CA cert bundle.

AWS_S3_ENDPOINT_URL (optional; default is None)

Custom S3 URL to use when connecting to S3, including scheme. Overrides AWS_S3_REGION_NAME and AWS_S3_USE_SSL. To avoid an AuthorizationQueryParametersError, AWS_S3_REGION_NAME should also be set.

AWS_S3_ADDRESSING_STYLE (optional; default is None)

Possible values are virtual and path.

AWS_S3_PROXIES (optional; default is None)

A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
AWS_S3_SIGNATURE_VERSION (optional)

As of boto3 version 1.13.21 the default signature version used for generating presigned URLs is still v2. To be able to access your S3 objects in all regions through presigned URLs, explicitly set this to s3v4, as in the sketch after this entry.

Set this to use an alternate version such as s3. Note that only certain regions support the legacy s3 (also known as v2) version. You can check to see if your region is one of them in the S3 region list.

Note

The signature versions are not backwards compatible, so be careful about URL endpoints if making this change for legacy projects.
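For example, a minimal sketch that opts in to v4 presigned URLs and then generates one; the region and file name below are placeholders:

# settings.py
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_REGION_NAME = 'eu-west-1'  # use your bucket's region

# application code: with AWS_QUERYSTRING_AUTH left at its default of True,
# url() returns a presigned URL valid for AWS_QUERYSTRING_EXPIRE seconds
from django.core.files.storage import default_storage
url = default_storage.url('example.txt')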
CloudFront¶
If you’re using S3 as a CDN (via CloudFront), you’ll probably want this storage to serve those files using that domain:
AWS_S3_CUSTOM_DOMAIN = 'cdn.mydomain.com'
Warning

Django’s STATIC_URL must end in a slash and the AWS_S3_CUSTOM_DOMAIN must not. It is best to set this variable independently of STATIC_URL.
Keep in mind you’ll have to configure CloudFront to use the proper bucket as an origin manually for this to work.
If you need to use multiple storages that are served via CloudFront, pass the custom_domain parameter to their constructors, as in the sketch below.
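A minimal sketch, assuming two buckets each behind its own CloudFront distribution (all names are placeholders):

from storages.backends.s3boto3 import S3Boto3Storage

# Each storage serves its own bucket through its own CloudFront domain.
media_storage = S3Boto3Storage(
    bucket_name='my-media-bucket',
    custom_domain='media.mydomain.com',
)
static_storage = S3Boto3Storage(
    bucket_name='my-static-bucket',
    custom_domain='static.mydomain.com',
)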
CloudFront Signed URLs¶
If you want django-storages to generate signed CloudFront URLs, you can do so by following these steps:

Modify settings.py to include:

import os

AWS_CLOUDFRONT_KEY = os.environ.get('AWS_CLOUDFRONT_KEY', None).encode('ascii')
AWS_CLOUDFRONT_KEY_ID = os.environ.get('AWS_CLOUDFRONT_KEY_ID', None)

Generate a CloudFront key pair as specified in the AWS doc to create CloudFront key pairs.

Update your environment variables with the corresponding values:

AWS_CLOUDFRONT_KEY=-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----
AWS_CLOUDFRONT_KEY_ID=APK....

django-storages will now generate signed CloudFront URLs.
Note
You must install one of cryptography or rsa to use signed URLs.
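With the key settings in place, URL generation is unchanged. Assuming AWS_S3_CUSTOM_DOMAIN points at your distribution and query string auth is enabled (the default), a sketch with a placeholder file name:

from django.core.files.storage import default_storage

# Returns a CloudFront URL carrying the signature query parameters.
signed_url = default_storage.url('path/to/file.jpg')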
IAM Policy¶
The IAM policy permissions needed for most common use cases are:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObjectAcl",
"s3:GetObject",
"s3:ListBucket",
"s3:DeleteObject",
"s3:PutObjectAcl"
],
"Principal": {
"AWS": "arn:aws:iam::example-AWS-account-ID:user/example-user-name"
},
"Resource": [
"arn:aws:s3:::example-bucket-name/*",
"arn:aws:s3:::example-bucket-name"
]
}
]
}
For more information about Principal, please refer to the AWS documentation on JSON Policy Elements.
Storage¶
Standard file access options are available, and work as expected:
>>> from django.core.files.storage import default_storage
>>> default_storage.exists('storage_test')
False
>>> file = default_storage.open('storage_test', 'w')
>>> file.write('storage contents')
>>> file.close()
>>> default_storage.exists('storage_test')
True
>>> file = default_storage.open('storage_test', 'r')
>>> file.read()
'storage contents'
>>> file.close()
>>> default_storage.delete('storage_test')
>>> default_storage.exists('storage_test')
False
Overriding the default Storage class¶
You can override the default Storage class and create your custom storage backend. Below are some examples and common use cases to help you get started. This section assumes you have your AWS credentials configured, e.g. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
To create a storage class using a specific bucket:
from storages.backends.s3boto3 import S3Boto3Storage

class MediaStorage(S3Boto3Storage):
    bucket_name = 'my-media-bucket'
Assume that you store the above class MediaStorage in a file called custom_storage.py in the project directory tree like below:
| (your django project root directory)
| ├── manage.py
| ├── my_django_app
| │ ├── custom_storage.py
| │ └── ...
| ├── ...
You can now use your custom storage class for default file storage in Django settings like below:
# django < 4.2
DEFAULT_FILE_STORAGE = 'my_django_app.custom_storage.MediaStorage'

# django >= 4.2
STORAGES = {"default": {"BACKEND": "my_django_app.custom_storage.MediaStorage"}}
Or you may want to upload files to the bucket in some view that accepts file upload requests:

import os

from django.http import JsonResponse
from django.views import View

from my_django_app.custom_storage import MediaStorage

class FileUploadView(View):
    def post(self, request, **kwargs):
        file_obj = request.FILES.get('file', '')

        # do your validation here e.g. file size/type check

        # organize a path for the file in bucket
        file_directory_within_bucket = 'user_upload_files/{username}'.format(username=request.user)

        # synthesize a full file path; note that we included the filename
        file_path_within_bucket = os.path.join(
            file_directory_within_bucket,
            file_obj.name
        )

        media_storage = MediaStorage()

        if not media_storage.exists(file_path_within_bucket):  # avoid overwriting existing file
            media_storage.save(file_path_within_bucket, file_obj)
            file_url = media_storage.url(file_path_within_bucket)

            return JsonResponse({
                'message': 'OK',
                'fileUrl': file_url,
            })
        else:
            return JsonResponse({
                'message': 'Error: file {filename} already exists at {file_directory} in bucket {bucket_name}'.format(
                    filename=file_obj.name,
                    file_directory=file_directory_within_bucket,
                    bucket_name=media_storage.bucket_name
                ),
            }, status=400)
A side note: if you have AWS_S3_CUSTOM_DOMAIN set up in your settings.py, by default the storage class will always use AWS_S3_CUSTOM_DOMAIN to generate URLs. If your AWS_S3_CUSTOM_DOMAIN is pointing to a different bucket than your custom storage class, the .url() function will give you the wrong URL. In such cases, you will have to configure your storage class and explicitly specify custom_domain as below:
class MediaStorage(S3Boto3Storage):
    bucket_name = 'my-media-bucket'
    custom_domain = '{}.s3.amazonaws.com'.format(bucket_name)
You can also decide to configure your custom storage class to store files under a specific directory within the bucket:

class MediaStorage(S3Boto3Storage):
    bucket_name = 'my-app-bucket'
    location = 'media'  # store files under directory `media/` in bucket `my-app-bucket`

This is especially useful when you want to have multiple storage classes share the same bucket:

class MediaStorage(S3Boto3Storage):
    bucket_name = 'my-app-bucket'
    location = 'media'

class StaticStorage(S3Boto3Storage):
    bucket_name = 'my-app-bucket'
    location = 'static'
So your bucket files can be organized as below:
| my-app-bucket
| ├── media
| │ ├── user_video.mp4
| │ ├── user_file.pdf
| │ └── ...
| ├── static
| │ ├── app.js
| │ ├── app.css
| │ └── ...
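Such a class can also be attached to an individual model field through Django’s storage argument; a minimal sketch, assuming MediaStorage lives in my_django_app/custom_storage.py as above (the model and field names are illustrative):

from django.db import models

from my_django_app.custom_storage import MediaStorage

class Video(models.Model):
    # Uploads land under media/videos/ in bucket my-app-bucket.
    file = models.FileField(storage=MediaStorage(), upload_to='videos/')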
Model¶
An object without a file has limited functionality:
from django.db import models
from django.core.files.base import ContentFile

class MyModel(models.Model):
    normal = models.FileField(upload_to='tests')
>>> obj1 = MyModel()
>>> obj1.normal
<FieldFile: None>
>>> obj1.normal.size
Traceback (most recent call last):
...
ValueError: The 'normal' attribute has no file associated with it.
Saving a file enables full functionality:
>>> obj1.normal.save('django_test.txt', ContentFile(b'content'))
>>> obj1.normal
<FieldFile: tests/django_test.txt>
>>> obj1.normal.size
7
>>> obj1.normal.read()
b'content'
Files can be read in a little at a time, if necessary:
>>> obj1.normal.open()
>>> obj1.normal.read(3)
b'con'
>>> obj1.normal.read()
b'tent'
>>> b'-'.join(obj1.normal.chunks(chunk_size=2))
b'co-nt-en-t'
Save another file with the same name:
>>> obj2 = MyModel()
>>> obj2.normal.save('django_test.txt', ContentFile(b'more content'))
>>> obj2.normal
<FieldFile: tests/django_test.txt>
>>> obj2.normal.size
12
Push the objects into the cache to make sure they pickle properly:
>>> from django.core.cache import cache
>>> cache.set('obj1', obj1)
>>> cache.set('obj2', obj2)
>>> cache.get('obj2').normal
<FieldFile: tests/django_test.txt>
Clean up the temporary files:
>>> obj1.normal.delete()
>>> obj2.normal.delete()