S3 File System (s3fs) provides an additional file system to your Drupal site,
alongside the public and private file systems, which stores files in Amazon's
Simple Storage Service (S3) (or any S3-compatible storage service). You can set
your site to use S3 File System as the default, or use it only for individual
fields. This functionality is designed for sites which are load-balanced across
multiple servers, as the mechanism used by Drupal's default file systems is not
viable under such a configuration.

=========================================
== Dependencies and Other Requirements ==
=========================================
- Either the Composer Manager or Libraries module is required to manage
  the AWS SDK. Download and install one of these two options:
  - Composer Manager 1.x - https://drupal.org/project/composer_manager
  - Libraries API 2.x - https://drupal.org/project/libraries
- Note: if both Composer Manager and Libraries are installed, the s3fs module
  will use Composer Manager.
- AWS SDK for PHP 3.x - https://github.com/aws/aws-sdk-php/releases
- PHP 5.5+ is required. AWS SDK v3 will not work on earlier versions.
- Your PHP must be configured with "allow_url_fopen = On" in your php.ini file.
  Otherwise, PHP will be unable to open files that are in your S3 bucket.
- PHP must also have the SimpleXML extension enabled.
- See this page for additional recommendations:
  https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/getting-started_requirements.html
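
As a quick sanity check of the PHP requirements above, you could run a short
standalone script like the following. This is only an illustrative sketch (the
file name is arbitrary), not part of the module:

<?php
// Illustrative check of the PHP requirements listed above.
// Run it with "php check_requirements.php" or "drush php-script".
$checks = array(
  'PHP 5.5+' => version_compare(PHP_VERSION, '5.5.0', '>='),
  'allow_url_fopen' => (bool) ini_get('allow_url_fopen'),
  'SimpleXML extension' => extension_loaded('simplexml'),
);
foreach ($checks as $name => $ok) {
  print $name . ': ' . ($ok ? 'OK' : 'MISSING') . "\n";
}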

==================
== Installation ==
==================
FOR COMPOSER MANAGER MODULE:
1) Install composer manager and follow its instructions for installing the AWS
SDK PHP library. The composer.json file included with this module will set the
version to the latest 3.x.

FOR LIBRARIES MODULE:
1) Install Libraries version 2.x from http://drupal.org/project/libraries.

2) Install the AWS SDK for PHP.
  a) If you have drush, you can install the SDK with this command (executed
     from the root folder of your Drupal codebase):
     drush make --no-core sites/all/modules/s3fs/s3fs.make
  b) If you don't have drush, download the latest Version 3 aws.zip file
     from here:
     https://github.com/aws/aws-sdk-php/releases/latest/download/aws.zip
     Extract the zip file into your Drupal Libraries folder for AWS SDK
     (sites/all/libraries/awssdk) such that the path to aws-autoloader.php
     is: "sites/all/libraries/awssdk/aws-autoloader.php"

IN CASE OF TROUBLE DETECTING THE AWS SDK LIBRARY:
Ensure that the awssdk folder itself, and all the files within it, can be read
by your webserver. Usually this means that the user "apache" (or "_www" on OSX)
must have read permissions for the files, and read+execute permissions for all
the folders in the path leading to the awssdk files.

===================
== Initial Setup ==
===================
With the code installation complete, you must now configure s3fs to use your
Amazon Web Services credentials.

The preferred method is to use environment variables or IAM credentials as
outlined here: https://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html

However, you can also set the credentials in the $conf array in
your site's settings.php file (sites/default/settings.php), like so:
$conf['awssdk_access_key'] = 'YOUR ACCESS KEY';
$conf['awssdk_secret_key'] = 'YOUR SECRET KEY';

Configure your settings for S3 File System (including your S3 bucket name) at
/admin/config/media/s3fs/settings. You can input your AWS credentials on this
page as well, but using the $conf array is recommended.

You can also configure the rest of your S3 preferences in the $conf array. See
the "Configuring S3FS in settings.php" section below for more info.

===================== ESSENTIAL STEP! DO NOT SKIP THIS! ======================
With the settings saved, go to /admin/config/media/s3fs/actions to refresh the
file metadata cache. This will copy the filenames and attributes for every
existing file in your S3 bucket into Drupal's database. This can take a
significant amount of time for very large buckets (thousands of files). If this
operation times out, you can also perform it using "drush s3fs-refresh-cache".

Please keep in mind that any time the contents of your S3 bucket change without
Drupal knowing about it (like if you copy some files into it manually using
another tool), you'll need to refresh the metadata cache again. S3FS assumes
that its cache is a canonical listing of every file in the bucket. Thus, Drupal
will not be able to access any files you copied into your bucket manually until
S3FS's cache learns of them. This is true of folders as well; s3fs will not be
able to copy files into folders that it doesn't know about.

============================================
== How to Configure Your Site to Use s3fs ==
============================================
Visit the admin/config/media/file-system page and set the "Default download
method" to "Amazon Simple Storage Service"
-and/or-
Add a field of type File, Image, etc. and set the "Upload destination" to
"Amazon Simple Storage Service" in the "Field Settings" tab.

This will configure your site to store newly uploaded files in S3. Files which
your site creates automatically (such as aggregated CSS) will still be stored
in the server's local filesystem, because Drupal is hard-coded to use the
public:// filesystem for such files.

However, s3fs can be configured to handle these files, as well. On the s3fs
configuration page (admin/config/media/s3fs) you can enable the "Use S3 for
public:// files" and/or "Use S3 for private:// files" options to make s3fs
take over the job of the public and/or private file systems. This will cause
your site to store newly uploaded/generated files from the public/private file
system in S3 instead of the local file system. However, it will make any
existing files in those file systems become invisible to Drupal. To remedy
this, you'll need to copy those files into your S3 bucket.

You are strongly encouraged to use the drush command "drush s3fs-copy-local"
to do this, as it will copy all the files into the correct subfolders in your
bucket, according to your s3fs configuration, and will write them to the
metadata cache. If you don't have drush, you can use the buttons provided on
the S3FS Actions page (admin/config/media/s3fs/actions), though the copy
operation may fail if you have a lot of files, or very large files. The drush
command will cleanly handle any combination of files.
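
If you prefer to enable the takeover options from code rather than through the
UI, they appear to correspond to these settings.php variables (the full list is
in the "Configuring S3FS in settings.php" section below):

// Assumed settings.php equivalents of the public/private takeover checkboxes.
$conf['s3fs_use_s3_for_public'] = TRUE;
$conf['s3fs_use_s3_for_private'] = TRUE;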

If you're using nginx rather than Apache, you probably have a config block
like this:

location ~ (^/sites/.*/files/imagecache/|^/sites/default/themes/.*/includes/fonts/|^/sites/.*/files/styles/) {
  expires max;
  try_files $uri @rewrite;
}

To make s3fs's custom image derivative mechanism work, you'll need to modify
that regex to include an additional path, like so:

location ~ (^/s3/files/styles/|^/sites/.*/files/imagecache/|^/sites/default/themes/.*/includes/fonts/|^/sites/.*/files/styles/) {
  expires max;
  try_files $uri @rewrite;
}

=====================
== AWS Permissions ==
=====================
For s3fs to be able to function, the AWS user identified by the configured
credentials should have the following User Policy set:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket_name>",
                "arn:aws:s3:::<bucket_name>/*"
            ]
        }
    ]
}

This is not the precise list of permissions necessary, but it's broad enough
to allow s3fs to function while being strict enough to restrict access to other
services.

=================================
== Aggregated CSS and JS in S3 ==
=================================
If you want your site's aggregated CSS and JS files to be stored on S3, rather
than the default of storing them on the webserver's local filesystem, you'll
need to do two things:
1) Enable the "Use S3 for public:// files" option in the s3fs configuration,
   because Drupal always* puts aggregated CSS/JS into the public:// filesystem.
2) Because of the way browsers interpret relative URLs used in CSS files, and
   how they restrict requests made from external JavaScript files, you'll need
   to set up your webserver as a proxy for those files.

* When you've got a module like "Advanced CSS/JS Aggregation" installed, things
get hairy. For now, that module is not compatible with s3fs public:// takeover.

S3FS will present all CSS files in the taken-over public:// filesystem with the
URL prefix /s3fs-css/, and all JavaScript files with /s3fs-js/. So you need to
set up your webserver to proxy those URLs into your S3 bucket.

For Apache, add this code to the right location* in your server's config:

ProxyRequests Off
SSLProxyEngine on
<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>
ProxyPass /s3fs-css/ https://YOUR-BUCKET.s3.amazonaws.com/s3fs-public/
ProxyPassReverse /s3fs-css/ https://YOUR-BUCKET.s3.amazonaws.com/s3fs-public/
ProxyPass /s3fs-js/ https://YOUR-BUCKET.s3.amazonaws.com/s3fs-public/
ProxyPassReverse /s3fs-js/ https://YOUR-BUCKET.s3.amazonaws.com/s3fs-public/

If you're using the "S3FS Root Folder" option, you'll need to insert that
folder before the /s3fs-public/ part of the target URLs. Like so:

ProxyPass /s3fs-css/ https://YOUR-BUCKET.s3.amazonaws.com/YOUR-ROOT-FOLDER/s3fs-public/
ProxyPassReverse /s3fs-css/ https://YOUR-BUCKET.s3.amazonaws.com/YOUR-ROOT-FOLDER/s3fs-public/

If you've set up a custom name for the public folder, you'll need to change the
's3fs-public' part of the URLs above to match your custom folder name.

* The "right location" is implementation-dependent. Normally, placing these
lines at the bottom of your httpd.conf file should be sufficient. However, if
your site is configured to use SSL, you'll need to put these lines in the
VirtualHost settings for both your normal and SSL sites.


For nginx, add this to your server config:

location ~* ^/(s3fs-css|s3fs-js)/(.*) {
  set $s3_base_path 'YOUR-BUCKET.s3.amazonaws.com/s3fs-public';
  set $file_path $2;

  resolver 8.8.4.4 8.8.8.8 valid=300s;
  resolver_timeout 10s;

  proxy_pass http://$s3_base_path/$file_path;
}

Again, be sure to take the S3FS Root Folder setting into account here.

The /s3fs-public/ subfolder is where s3fs stores the files from the public://
filesystem, to avoid name conflicts with files from the s3:// filesystem.

If you're using the "Use a Custom Host" option to store your files in a
non-Amazon file service, you'll need to change the proxy target to the
appropriate URL for your service.

Under some domain name setups, you may be able to avoid the need for proxying
by having the same domain name as your site also point to your S3 bucket. If
that is the case with your site, enable the "Don't rewrite CSS/JS file paths"
option to prevent s3fs from prefixing the URLs for CSS/JS files.
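
If you manage configuration in settings.php, that checkbox presumably maps to
the s3fs_no_rewrite_cssjs variable listed in the next section:

// Assumed settings.php equivalent of "Don't rewrite CSS/JS file paths".
$conf['s3fs_no_rewrite_cssjs'] = TRUE;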

======================================
== Configuring S3FS in settings.php ==
======================================
If you want to configure S3 File System entirely from settings.php, here are
examples of how to configure each setting:

// All the s3fs config settings start with "s3fs_"
$conf['s3fs_use_instance_profile'] = TRUE or FALSE;
$conf['s3fs_credentials_file'] = '/full/path/to/credentials.ini';
$conf['s3fs_bucket'] = 'YOUR BUCKET NAME';
$conf['s3fs_region'] = 'YOUR REGION';
$conf['s3fs_use_cname'] = TRUE or FALSE;
$conf['s3fs_domain'] = 'cdn.example.com';
$conf['s3fs_domain_root'] = 'none', 'root', 'public', or 'root_public';
$conf['s3fs_domain_s3_private'] = TRUE or FALSE;
$conf['s3fs_use_customhost'] = TRUE or FALSE;
$conf['s3fs_hostname'] = 'host.example.com';
$conf['s3fs_use_versioning'] = TRUE or FALSE;
$conf['s3fs_cache_control_header'] = 'public, max-age=300';
$conf['s3fs_encryption'] = 'aws:kms';
$conf['s3fs_use_https'] = TRUE or FALSE;
$conf['s3fs_ignore_cache'] = TRUE or FALSE;
$conf['s3fs_use_s3_for_public'] = TRUE or FALSE;
$conf['s3fs_no_rewrite_cssjs'] = TRUE or FALSE;
$conf['s3fs_use_s3_for_private'] = TRUE or FALSE;
$conf['s3fs_root_folder'] = 'drupal-root';
$conf['s3fs_public_folder'] = 's3fs-public';
$conf['s3fs_private_folder'] = 's3fs-private';
$conf['s3fs_presigned_urls'] = "300|presigned-files/*\n60|other-presigned/*";
$conf['s3fs_saveas'] = "videos/*\nfull-size-images/*";
$conf['s3fs_torrents'] = "yarrr/*";

// AWS Credentials use a different prefix than the rest of s3fs's settings
$conf['awssdk_access_key'] = 'YOUR ACCESS KEY';
$conf['awssdk_secret_key'] = 'YOUR SECRET KEY';

===========================================
== Upgrading from S3 File System 7.x-1.x ==
===========================================
s3fs 7.x-2.x is not 100% backwards-compatible with 7.x-1.x. Most things will
work the same, but if you were using certain options in 1.x, you'll need to
perform some manual intervention to handle the upgrade to 2.x.

The Partial Refresh Prefix setting has been replaced with the Root Folder
setting. Root Folder fulfills the same purpose, but the implementation is
sufficiently different that you'll need to re-configure your site, and
possibly rearrange the files in your S3 bucket to make it work.

With Root Folder, *everything* s3fs does is confined to the specified folder
in your bucket. s3fs acts like the root folder is the bucket root, which means
that the URIs for your files will not reflect the root folder's existence.
Thus, you won't need to configure anything else, like the "file directory"
setting of file and image fields, to make it work.

This is different from how Partial Refresh Prefix worked, because that prefix
*was* reflected in the uris, and you had to configure your file and image
fields appropriately.

So, when upgrading to 7.x-2.x, you'll need to set the Root Folder option to the
same value that you had for Partial Refresh Prefix, and then remove that folder
from your fields' "File directory" settings. Then, move every file that s3fs
previously put into your bucket into the Root Folder. And if there are other
files in your bucket that you want s3fs to know about, move them into there,
too. Then do a metadata refresh.
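
For example (using a hypothetical prefix name), if your 1.x Partial Refresh
Prefix was "drupal-files", the upgraded configuration in settings.php would be:

// Hypothetical example: reuse the old Partial Refresh Prefix as the Root Folder.
$conf['s3fs_root_folder'] = 'drupal-files';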

===================================================
== Upgrading from AWS SDK Version 2 to Version 3 ==
===================================================
If you previously used AWS SDK Version 2 and are now upgrading to Version 3,
there are a few important points to consider.

First, if you use the Libraries module to manage the SDK, make sure the
libraries subfolder where the AWS SDK is stored is renamed from "awssdk2" to
"awssdk". Also, after the s3fs module has been updated and the new SDK code
downloaded, run a database update (drush updatedb) to ensure the variable names
stored in the database are properly updated.

If you have configuration settings in your settings.php file referencing
old s3fs variable names, please make sure these are updated to their new
names. Changes are as follows:
  - awssdk2_access_key --> awssdk_access_key
  - awssdk2_secret_key --> awssdk_secret_key
  - awssdk2_use_instance_profile --> s3fs_use_instance_profile
  - awssdk2_default_cache_config --> s3fs_credentials_file
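
In settings.php terms, the renames above amount to something like this (the
values are placeholders):

// Before (AWS SDK v2 era variable names):
//   $conf['awssdk2_access_key'] = 'YOUR ACCESS KEY';
//   $conf['awssdk2_secret_key'] = 'YOUR SECRET KEY';
//   $conf['awssdk2_use_instance_profile'] = TRUE;
// After (AWS SDK v3 names used by this version of s3fs):
$conf['awssdk_access_key'] = 'YOUR ACCESS KEY';
$conf['awssdk_secret_key'] = 'YOUR SECRET KEY';
$conf['s3fs_use_instance_profile'] = TRUE;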

Finally, if you previously used the Default Cache Location setting to
define where the profile credentials should be cached, this has been
changed. It is recommended to use AWS IAM users to provide secure access.

If you would prefer a file-based approach, create a credentials.ini file on
your server and point the new "s3fs_credentials_file" variable at it. Possible
options are discussed here:
https://docs.aws.amazon.com/aws-sdk-php/v3/guide/guide/credentials.html
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_configuration.html

==================
== Known Issues ==
==================
Some curl libraries, such as the one bundled with MAMP, do not come
with authoritative certificate files. See the following page for details:
http://dev.soup.io/post/56438473/If-youre-using-MAMP-and-doing-something

Because of a bizarre limitation regarding MySQL's maximum index length for
InnoDB tables, the maximum URI length that S3FS supports is 250 characters.
That includes the full path to the file in your bucket, as the full folder
path is part of the URI.

eAccelerator, a deprecated opcode cache plugin for PHP, is incompatible with
AWS SDK for PHP. eAccelerator will corrupt the configuration settings for
the SDK's s3 client object, causing a variety of different exceptions to be
thrown. If your server uses eAccelerator, it is highly recommended that you
replace it with a different opcode cache plugin, as its development was
abandoned several years ago.

=====================
== Acknowledgments ==
=====================
Special recognition goes to justafish, author of the AmazonS3 module:
http://drupal.org/project/amazons3
S3 File System started as a fork of her great module, but has evolved
dramatically since then, becoming a very different beast. The main benefit of
using S3 File System over AmazonS3 is performance, especially for image-
related operations, due to the metadata cache that is central to S3
File System's operation.
