
INTRODUCTION
------------

  * S3 File System (s3fs) provides an additional file system for your Drupal site,
    alongside the public and private file systems, which stores files in Amazon's
    Simple Storage Service (S3) or any S3-compatible storage service. You can set
    your site to use S3 File System as the default, or use it only for individual
    fields. This functionality is designed for sites which are load-balanced across
    multiple servers, as the mechanism used by Drupal's default file systems is not
    viable under such a configuration.


REQUIREMENTS
------------

  * AWS SDK for PHP version 3. If the module is installed via Composer, the SDK is
    installed automatically.

  * Your PHP must be configured with "allow_url_fopen = On" in your php.ini file.
    Otherwise, PHP will be unable to open files that are in your S3 bucket.
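
    A quick way to verify this setting from any PHP context (a minimal check, not
    part of s3fs itself):
    // ini_get() returns "1" when allow_url_fopen is enabled, and "" or "0" when
    // it is disabled.
    var_dump(ini_get('allow_url_fopen'));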


INSTALLATION
------------

  * With the code installation complete, you must now configure s3fs to use your
    Amazon Web Services credentials. To do so, store them in the $config array in
    your site's settings.php file (sites/default/settings.php), like so:
    $config['s3fs.settings']['access_key'] = 'YOUR ACCESS KEY';
    $config['s3fs.settings']['secret_key'] = 'YOUR SECRET KEY';

  * Configure your settings for S3 File System (including your S3 bucket name) at
    /admin/config/media/s3fs. You can input your AWS credentials on this page as
    well, but using the $config array is recommended.
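
    The other settings can be overridden from settings.php in the same way. A
    minimal sketch, assuming your version of s3fs uses the 'bucket' and 'region'
    config keys (check the module's config schema for the exact names):
    $config['s3fs.settings']['bucket'] = 'YOUR-BUCKET-NAME';
    $config['s3fs.settings']['region'] = 'us-east-1';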

  * (Not yet implemented in the 8.x version.)
    With the settings saved, go to /admin/config/media/s3fs/actions to refresh the
    file metadata cache. This will copy the filenames and attributes for every
    existing file in your S3 bucket into Drupal's database. This can take a
    significant amount of time for very large buckets (thousands of files). If this
    operation times out, you can also perform it using "drush s3fs-refresh-cache".

  * Please keep in mind that any time the contents of your S3 bucket change without
    Drupal knowing about it (like if you copy some files into it manually using
    another tool), you'll need to refresh the metadata cache again. S3FS assumes
    that its cache is a canonical listing of every file in the bucket. Thus, Drupal
    will not be able to access any files you copied into your bucket manually until
    S3FS's cache learns of them. This is true of folders as well; s3fs will not be
    able to copy files into folders that it doesn't know about.


CONFIGURATION
-------------

  * Visit the admin/config/media/file-system page and set the "Default download
    method" to "Amazon Simple Storage Service"
    -and/or-
    Add a field of type File, Image, etc. and set the "Upload destination" to
    "Amazon Simple Storage Service" in the "Field Settings" tab.

  * This will configure your site to store newly uploaded files in S3. Files which
    your site creates automatically (such as aggregated CSS) will still be stored
    in the server's local filesystem, because Drupal is hard-coded to use the
    public:// filesystem for such files.

  * However, s3fs can be configured to handle these files as well. In settings.php
    you can enable the s3fs.use_s3_for_public and s3fs.use_s3_for_private settings
    to make s3fs take over the job of the public and/or private file systems. This
    will cause your site to store newly uploaded/generated files from the
    public/private file system in S3 instead of the local file system. However, it
    will make any existing files in those file systems become invisible to Drupal.
    To remedy this, you'll need to copy those files into your S3 bucket.
    Example:
    $settings['s3fs.use_s3_for_public'] = TRUE;
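
    To also take over the private file system (if your site has one configured),
    enable the corresponding setting in the same way:
    $settings['s3fs.use_s3_for_private'] = TRUE;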

  * You are strongly encouraged to use the drush command "drush s3fs-copy-local"
    to do this, as it will copy all the files into the correct subfolders in your
    bucket, according to your s3fs configuration, and will write them to the
    metadata cache. If you don't have drush, you can use the buttons provided on
    the S3FS Actions page (admin/config/media/s3fs/actions), though the copy
    operation may fail if you have a lot of files, or very large files. The drush
    command will cleanly handle any combination of files. (Note: the Actions page
    is not yet implemented in the 8.x version.)


TROUBLESHOOTING
---------------

  * In the unlikely circumstance that the version of the SDK you downloaded causes
    errors with S3 File System, you can download this version instead, which is
    known to work:
    https://github.com/aws/aws-sdk-php/releases/download/3.22.7/aws.zip

  * IN CASE OF TROUBLE DETECTING THE AWS SDK LIBRARY:
    Ensure that the aws folder itself, and all the files within it, can be read
    by your webserver. Usually this means that the user "apache" (or "_www" on OSX)
    must have read permissions for the files, and read+execute permissions for all
    the folders in the path leading to the aws files.


AGGREGATED CSS AND JS IN S3
---------------------------

  * Because of the way browsers interpret relative URLs used in CSS files, and how
    they restrict requests made from external JavaScript files, if you want your
    site's aggregated CSS and JS to be placed in S3, you'll need to set up your
    webserver as a proxy for those files. S3 File System will present all public://
    CSS files with the URL prefix /s3fs-css/, and all public:// JavaScript files
    with /s3fs-js/. So you need to set up your webserver to proxy all URLs with
    those prefixes into your S3 bucket.

  * For Apache, add this code to the appropriate location in your server's config:
    ProxyRequests Off
    SSLProxyEngine on
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass /s3fs-css/ https://YOUR-BUCKET.s3.amazonaws.com/s3fs-public/
    ProxyPassReverse /s3fs-css/ https://YOUR-BUCKET.s3.amazonaws.com/s3fs-public/
    ProxyPass /s3fs-js/ https://YOUR-BUCKET.s3.amazonaws.com/s3fs-public/
    ProxyPassReverse /s3fs-js/ https://YOUR-BUCKET.s3.amazonaws.com/s3fs-public/

  * For nginx, add this to your server config:
    location ~* ^/(s3fs-css|s3fs-js)/(.*) {
      set $s3_base_path 'YOUR-BUCKET.s3.amazonaws.com/s3fs-public';
      set $file_path $2;

      resolver         172.16.0.23 valid=300s;
      resolver_timeout 10s;

      proxy_pass http://$s3_base_path/$file_path;
    }

  * Be sure to take the S3FS Root Folder setting into account here, if you use one.
    The /s3fs-public/ subfolder is where s3fs stores the files from the public://
    filesystem, to avoid name conflicts with files from the s3:// filesystem.

  * If you're using the "Use a Custom Host" option to store your files in a
    non-Amazon file service, you'll need to change the proxy target to the
    appropriate URL for your service.
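
    The "Use a Custom Host" option can also be set from settings.php, like the
    credentials. A minimal sketch, assuming your version of s3fs uses the
    'use_customhost' and 'hostname' config keys (check the module's config schema
    for the exact names):
    $config['s3fs.settings']['use_customhost'] = TRUE;
    $config['s3fs.settings']['hostname'] = 'storage.example.com';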

  * Under some domain name setups, you may be able to avoid the need for proxying
    by having the same domain name as your site also point to your S3 bucket. If
    that is the case with your site, enable the "Don't rewrite CSS/JS file paths"
    option to prevent s3fs from prefixing the URLs for CSS/JS files.


UPGRADING FROM S3 FILE SYSTEM 7.x-2.x or 7.x-3.x
------------------------------------------------

  * The Drupal 8 version keeps most of the Drupal 7 parameters, but they are now
    set via the $config and $settings arrays; please read the INSTALLATION and
    CONFIGURATION sections.

  * The database schema is the same as in Drupal 7, so exporting the s3fs tables
    and importing them into the new site may be enough. Another option will be to
    refresh the metadata cache, once that feature is implemented.

  * If your custom code uses functions or methods from the module's .module file
    or other files, you will need to find the equivalent function or method in the
    new version.


KNOWN ISSUES
------------

  * These problems were reported against the Drupal 7 version; it is not yet known
    whether they also occur in the Drupal 8 version. If you have tested any of
    them, or know of new issues, please create an issue at
    https://www.drupal.org/project/issues/s3fs?version=8.x

  * Some curl libraries, such as the one bundled with MAMP, do not come
    with authoritative certificate files. See the following page for details:
    http://dev.soup.io/post/56438473/If-youre-using-MAMP-and-doing-something

  * Because of a bizarre limitation regarding MySQL's maximum index length for
    InnoDB tables, the maximum URI length that S3FS supports is 250 characters.
    That includes the full path to the file in your bucket, as the full folder
    path is part of the URI.

  * eAccelerator, a deprecated opcode cache plugin for PHP, is incompatible with
    AWS SDK for PHP. eAccelerator will corrupt the configuration settings for
    the SDK's s3 client object, causing a variety of different exceptions to be
    thrown. If your server uses eAccelerator, it is highly recommended that you
    replace it with a different opcode cache plugin, as its development was
    abandoned several years ago.


ACKNOWLEDGEMENT
---------------

  * Special recognition goes to justafish, author of the AmazonS3 module:
    http://drupal.org/project/amazons3

  * S3 File System started as a fork of her great module, but has evolved
    dramatically since then, becoming a very different beast. The main benefit of
    using S3 File System over AmazonS3 is performance, especially for image-
    related operations, due to the metadata cache that is central to S3
    File System's operation.


MAINTAINERS
-----------

Current maintainers:

  * webankit (https://www.drupal.org/u/webankit)

  * coredumperror (https://www.drupal.org/u/coredumperror)

  * zach.bimson (https://www.drupal.org/u/zachbimson)

  * neerajskydiver (https://www.drupal.org/u/neerajskydiver)

  * Abhishek Anand (https://www.drupal.org/u/abhishek-anand)

  * jansete (https://www.drupal.org/u/jansete)
