Amazon S3


There is one backend for interacting with S3, based on the boto library. A legacy backend based on the Amazon S3 Python library was removed in version 1.2, and another backend, using Boto3, was added in version 1.5.


To use the s3boto backend, set:

DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'


AWS_ACCESS_KEY_ID

Your Amazon Web Services access key, as a string.


AWS_SECRET_ACCESS_KEY

Your Amazon Web Services secret access key, as a string.


AWS_STORAGE_BUCKET_NAME

Your Amazon Web Services storage bucket name, as a string.
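Taken together, a minimal configuration might look like the following in your settings.py (all values below are placeholders):

```python
# settings.py -- minimal S3 storage configuration (placeholder values)
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_ACCESS_KEY_ID = 'your-access-key-id'
AWS_SECRET_ACCESS_KEY = 'your-secret-access-key'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
```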

AWS_DEFAULT_ACL (optional)

If set to private, the uploaded file’s Access Control List is changed from the default permission, public-read, to give the owner full control and remove read access from everyone else.


AWS_AUTO_CREATE_BUCKET (optional)

If set to True, the bucket specified in AWS_STORAGE_BUCKET_NAME is automatically created.

AWS_HEADERS (optional)

If you’d like to set headers sent with each file of the storage:

AWS_HEADERS = {
    'Expires': 'Thu, 15 Apr 2010 20:00:00 GMT',
    'Cache-Control': 'max-age=86400',
}

AWS_QUERYSTRING_AUTH (optional; default is True)

Setting AWS_QUERYSTRING_AUTH to False removes query parameter authentication from generated URLs. This can be useful if your S3 buckets are public.

AWS_QUERYSTRING_EXPIRE (optional; default is 3600 seconds)

The number of seconds that a generated URL with query parameter authentication is valid for.
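Under the hood, query-string authentication works by embedding an expiry timestamp in a signed URL; boto generates these URLs for you. A rough, self-contained sketch of the legacy (signature version 2) scheme, where all key, bucket, and path values are placeholders:

```python
import hashlib
import hmac
import time
from base64 import b64encode


def presigned_url(access_key, secret_key, bucket, path, expires_in=3600):
    # Illustrative sketch of S3 query-string authentication (legacy
    # signature version 2); the real URLs are built by boto.  The
    # expiry is an absolute Unix timestamp, so the URL stops working
    # expires_in seconds from now (cf. AWS_QUERYSTRING_EXPIRE).
    expires = int(time.time()) + expires_in
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, path)
    signature = b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    ).decode()
    return (
        "https://%s.s3.amazonaws.com/%s"
        "?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
        % (bucket, path, access_key, expires, signature)
    )


print(presigned_url('AKIAEXAMPLE', 'secret-key', 'my-bucket', 'photos/pic.jpg'))
```

Anyone holding such a URL can fetch the object until the Expires timestamp passes, which is why AWS_QUERYSTRING_AUTH is only safe to disable for buckets that are meant to be public.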

To allow collectstatic to automatically put your static files in your bucket, set the following in your settings.py:

STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'

AWS_S3_ENCRYPTION (optional; default is False)

Enable server-side encryption of files while at rest by setting the encrypt_key parameter to True.

AWS_S3_FILE_OVERWRITE (optional; default is True)

By default, files with the same name will overwrite each other. Set this to False to have extra characters appended to the name of the new file instead.
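The alternative-name behaviour can be sketched with a hypothetical helper (an illustration of the idea only, not the library’s actual implementation, which hooks into the storage class’s get_available_name()):

```python
import os


def available_name(existing_names, name):
    # Hypothetical helper illustrating AWS_S3_FILE_OVERWRITE = False:
    # keep appending an underscore to the stem until the candidate
    # name no longer clashes with an existing object.
    root, ext = os.path.splitext(name)
    while name in existing_names:
        root += '_'
        name = root + ext
    return name


print(available_name({'tests/django_test.txt'}, 'tests/django_test.txt'))
# -> tests/django_test_.txt
```

This matches the behaviour seen in the examples further down, where saving a second 'django_test.txt' yields 'django_test_.txt'.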


If you’re using S3 as a CDN (via CloudFront), you’ll probably want this storage to serve those files using that domain:

AWS_S3_CUSTOM_DOMAIN = 'cdn.mydomain.com'

Keep in mind you’ll have to configure CloudFront to use the proper bucket as an origin manually for this to work.

If you need to use multiple storages that are served via CloudFront, pass the custom_domain parameter to their constructors.
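For example, two storages served from different CloudFront distributions might be configured like this (the domain names are placeholders):

```python
# Two storage instances, each serving its files from its own
# CloudFront distribution via the custom_domain parameter.
from storages.backends.s3boto import S3BotoStorage

static_storage = S3BotoStorage(custom_domain='static.example.com')
media_storage = S3BotoStorage(custom_domain='media.example.com')
```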


Once you’re done, default_storage will be the S3 storage:

>>> from django.core.files.storage import default_storage
>>> print default_storage.__class__
<class 'S3Storage.S3Storage'>

The above doesn’t seem to hold for Django 1.3+; instead, inspect the connection:

>>> from django.core.files.storage import default_storage
>>> print default_storage.connection

This way, if you define a new FileField, it will use the S3 storage:

>>> from django.db import models
>>> class Resume(models.Model):
...     pdf = models.FileField(upload_to='pdfs')
...     photos = models.ImageField(upload_to='photos')
>>> resume = Resume()
>>> print resume.pdf.storage
<S3Storage.S3Storage object at ...>



>>> from django.core.files.storage import default_storage
>>> from django.core.files.base import ContentFile
>>> from django.core.cache import cache
>>> from models import MyStorage


Standard file access options are available, and work as expected:

>>> default_storage.exists('storage_test')
False
>>> file = default_storage.open('storage_test', 'w')
>>> file.write('storage contents')
>>> file.close()

>>> default_storage.exists('storage_test')
True
>>> file = default_storage.open('storage_test', 'r')
>>> file.read()
'storage contents'
>>> file.close()

>>> default_storage.delete('storage_test')
>>> default_storage.exists('storage_test')
False


An object without a file has limited functionality:

>>> obj1 = MyStorage()
>>> obj1.normal
<FieldFile: None>
>>> obj1.normal.size
Traceback (most recent call last):
...
ValueError: The 'normal' attribute has no file associated with it.

Saving a file enables full functionality:

>>>'django_test.txt', ContentFile('content'))
>>> obj1.normal
<FieldFile: tests/django_test.txt>
>>> obj1.normal.size
7

Files can be read in a little at a time, if necessary:

>>> '-'.join(obj1.normal.chunks(chunk_size=2))
'co-nt-en-t'

Save another file with the same name:

>>> obj2 = MyStorage()
>>>'django_test.txt', ContentFile('more content'))
>>> obj2.normal
<FieldFile: tests/django_test_.txt>
>>> obj2.normal.size
12

Push the objects into the cache to make sure they pickle properly:

>>> cache.set('obj1', obj1)
>>> cache.set('obj2', obj2)
>>> cache.get('obj2').normal
<FieldFile: tests/django_test_.txt>

Deleting an object deletes the file it uses, if there are no other objects still using that file:

>>> obj2.delete()
>>>'django_test.txt', ContentFile('more content'))
>>> obj2.normal
<FieldFile: tests/django_test_.txt>

Default values allow an object to access a single file:

>>> obj3 = MyStorage.objects.create()
>>> obj3.default
<FieldFile: tests/default.txt>
>>> obj3.default.read()
'default content'

But it shouldn’t be deleted, even if there are no more objects using it:

>>> obj3.delete()
>>> obj3 = MyStorage()
>>> obj3.default.read()
'default content'

Verify the fix for #5655, making sure the directory is only determined once:

>>> obj4 = MyStorage()
>>>'random_file', ContentFile('random content'))
>>> obj4.random
<FieldFile: .../random_file>

Clean up the temporary files:

>>> obj1.normal.delete()
>>> obj2.normal.delete()
>>> obj3.default.delete()
>>> obj4.random.delete()