INTRODUCTION
------------

  * S3 File System (s3fs) provides an additional file system to your Drupal
    site, alongside the public and private file systems, which stores files in
    Amazon's Simple Storage Service (S3) or any S3-compatible storage service.
    You can set your site to use S3 File System as the default, or use it only
    for individual fields. This functionality is designed for sites which are
    load-balanced across multiple servers, as the mechanism used by Drupal's
    default file systems is not viable under such a configuration.

TABLE OF CONTENTS
-----------------

* REQUIREMENTS
* S3FS INITIAL CONFIGURATION
* CONFIGURE DRUPAL TO STORE FILES IN S3
* COPY LOCAL FILES TO S3
* AGGREGATED CSS AND JS IN S3
* IMAGE STYLES
* CACHE AWS CREDENTIALS
* UPGRADING FROM S3 FILE SYSTEM 7.x-2.x or 7.x-3.x
* TROUBLESHOOTING
* KNOWN ISSUES
* DEVELOPER TESTING
* ACKNOWLEDGEMENT
* MAINTAINERS

REQUIREMENTS
------------

  * AWS SDK for PHP version 3. If the module is installed via Composer, the
    SDK is installed automatically.

  * Your PHP must be configured with "allow_url_fopen = On" in your php.ini
    file.
    Otherwise, PHP will be unable to open files that are in your S3 bucket.

  * Ensure the account used to connect to the S3 bucket has sufficient
    privileges.

    * Minimum required actions for read-write are:

      "Action": [
          "s3:ListBucket",
          "s3:ListBucketVersions",
          "s3:PutObject",
          "s3:GetObject",
          "s3:DeleteObjectVersion",
          "s3:DeleteObject",
          "s3:GetObjectVersion"
          "s3:GetObjectAcl",
          "s3:PutObjectAcl",
      ]

    * For read-only buckets you must NOT grant the following actions:

      s3:PutObject
      s3:DeleteObjectVersion
      s3:DeleteObject
      s3:PutObjectAcl

  * Optional: doctrine/cache library for caching S3 Credentials.

S3FS INITIAL CONFIGURATION
--------------------------

  * S3 Credentials configuration:

    * Option 1: Use AWS defaultProvider.

      No configuration of credentials is required. S3fs will utilize the SDK
      default method of checking for environment variables, shared
      credentials/profile files, or assuming IAM roles.

      It is recommended to review the CACHE AWS CREDENTIALS section of this
      guide.

      Continue with configuration below.

    * Option 2: Provide an AWS compatible credentials INI.

      Create an AWS SDK compatible INI file with your configuration in the
      [default] profile. It is recommended that this file be located outside
      the docroot of your server for security.

      Example:
        [default]
        aws_access_key_id = YOUR_ACCESS_KEY
        aws_secret_access_key = YOUR_SECRET_KEY

      Visit /admin/config/media/s3fs and set the path to your INI file in the
      'Custom Credentials File Location' field.

      Note: If your INI file causes lookups to AWS for tokens, please review
      the CACHE AWS CREDENTIALS section of this guide.

      See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html
      for more information on file format.

    * Option 3: Use the Key module:

      Install the Key module from https://www.drupal.org/project/key and
      select your Amazon Web Services credentials from the dropdowns in the
      "Keys" panel at /admin/config/media/s3fs.

    * Option 4: Set Access Key and Secret Key in settings.php:

      Example:
        $settings['s3fs.access_key'] = 'YOUR ACCESS KEY';
        $settings['s3fs.secret_key'] = 'YOUR SECRET KEY';

      * Reminder: For security reasons you should ensure that all secrets are
        stored outside the document root.

  * Configure your settings for S3 File System (including your S3 bucket name)
    at /admin/config/media/s3fs.

  * Saving the settings page will trigger detection of the bucket region.

  * If your S3 bucket is configured with BlockPublicAcls then enable the
    'upload_as_private' setting.

    Example:
      $settings['s3fs.upload_as_private'] = TRUE;

    * If s3fs will provide storage for s3:// or public:// files, the generated
      links will return 403 errors unless access is granted, either with
      presigned URLs or through other external means.

  * With the settings saved, go to /admin/config/media/s3fs/actions.

    * First validate your configuration to verify access to your S3 bucket.

    * Next refresh the file metadata cache. This will copy the filenames and
      attributes for every existing file in your S3 bucket into Drupal's
      database. This can take a significant amount of time for very large
      buckets (thousands of files). If this operation times out, you can also
      perform it using "drush s3fs-refresh-cache".

  * Please keep in mind that any time the contents of your S3 bucket change
    without Drupal knowing about it (like if you copy some files into it
    manually using another tool), you'll need to refresh the metadata cache
    again. S3FS assumes that its cache is a canonical listing of every file in
    the bucket. Thus, Drupal will not be able to access any files you copied
    into your bucket manually until S3FS's cache learns of them. This is true
    of folders as well; s3fs will not be able to copy files into folders that
    it doesn't know about.

  * After refreshing the s3fs metadata it is recommended to clear the Drupal
    Cache.

CONFIGURE DRUPAL TO STORE FILES IN S3
-------------------------------------

  * Optional: To make S3 the default for new storage fields, visit
    /admin/config/media/file-system and set the "Default download method" to
    "Amazon Simple Storage Service".

  * To begin using S3 for storage either edit an existing field or add a new
    field of type File, Image, etc. and set the "Upload destination" to
    "S3 File System" in the "Field Settings" tab. Files uploaded to a field
    configured to use S3 will be stored in the S3 bucket.

    * By default, Drupal will continue to store files it creates automatically
      (such as aggregated CSS) on the local filesystem, as they are hard-coded
      to use the public:// file handler. To prevent this, enable takeover of
      the public:// file handler.

  * To enable takeover of the public and/or private file handler(s), enable
    s3fs.use_s3_for_public and/or s3fs.use_s3_for_private in settings.php.
    This will cause your site to store newly uploaded/generated files from
    the public/private file system in S3 instead of in local storage.

    Example:
      $settings['s3fs.use_s3_for_public'] = TRUE;
      $settings['s3fs.use_s3_for_private'] = TRUE;

    * These settings will cause the existing file systems to become invisible
      to Drupal. To remedy this, you will need to copy the existing files into
      the S3 bucket.

    * Refer to the 'COPY LOCAL FILES TO S3' section of the manual.

  * If you use s3fs for public:// files:

    * You should change your PHP Twig storage folder to a local directory.
      PHP Twig files stored in S3 pose a security concern (these files would
      be public) in addition to a performance concern (latency).
      Change the php_storage settings in your settings.php. It is recommended
      that this directory be located outside of the docroot.

      Example:
        $settings['php_storage']['twig']['directory'] = '../storage/php';

      If you have multiple backends, you may store this directory on a NAS or
      another shared storage system accessible to all of your backends.

    * Refer to 'AGGREGATED CSS AND JS IN S3' for important information
      related to bucket configuration to support aggregated CSS/JS files.

    * Clear the Drupal Cache:

      * Whenever making changes to enable/disable the public:// or private://
        StreamWrappers it is necessary to clear the Drupal container cache.

      * The container cache can be cleared either with 'drush cr' or by
        visiting admin/config/development/performance and clicking the
        'Clear all caches' button.

COPY LOCAL FILES TO S3
----------------------

  * The migration process is only useful if you have enabled or plan to enable
    public:// or private:// filesystem handling by s3fs.

  * It is possible to copy local files to S3 without activating the
    use_s3_for_public or use_s3_for_private handlers in settings.php.
    If they are activated before the migration, existing files will be
    unavailable during the migration process.

  * You are strongly encouraged to use the drush command "drush
    s3fs-copy-local" to do this, as it will copy all the files into the correct
    subfolders in your bucket, according to your s3fs configuration, and will
    write them to the metadata cache.

    See "drush help s3fs:copy-local" for command syntax.

  * If you don't have drush, you can use the
    buttons provided on the S3FS Actions page (admin/config/media/s3fs/actions),
    though the copy operation may fail if you have a lot of files, or very
    large files. The drush command will cleanly handle any combination of
    files.

  * You should not allow new files to be uploaded during the migration process.

  * Once the migration is complete you can, if you have not already, enable
    public:// and/or private:// takeover. The files will be served from S3
    instead of the local filesystem. You may delete the local files when
    you are sure you no longer require them locally.

  * You can perform a custom migration process by implementing
    S3fsServiceInterface or extending S3fsService and using your custom
    service class in a ServiceProvider (see S3fsServiceProvider).
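
    For illustration only, a minimal sketch of such a service provider in a
    hypothetical custom module named 'mymodule' follows. It assumes the s3fs
    service is registered under the service ID 's3fs'; verify the actual
    service ID and class in s3fs.services.yml before relying on this approach.

    Example (src/MymoduleServiceProvider.php):

      <?php

      namespace Drupal\mymodule;

      use Drupal\Core\DependencyInjection\ContainerBuilder;
      use Drupal\Core\DependencyInjection\ServiceProviderBase;

      /**
       * Swaps the s3fs service class for a custom implementation.
       */
      class MymoduleServiceProvider extends ServiceProviderBase {

        /**
         * {@inheritdoc}
         */
        public function alter(ContainerBuilder $container) {
          // Point the 's3fs' service at a class that extends S3fsService
          // (or implements S3fsServiceInterface) and overrides the migration
          // behaviour as needed. The class name below is a placeholder.
          $container->getDefinition('s3fs')
            ->setClass('Drupal\mymodule\MyCustomS3fsService');
        }

      }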

AGGREGATED CSS AND JS IN S3
---------------------------

  * In previous versions S3FS required that the server be configured as a
    reverse proxy in order to use the public:// StreamWrapper.
    This requirement has been removed. Please read below for new requirements.

  * CSS and JavaScript files will be stored in your S3 bucket along with all
    other public:// files.

  * Because browsers restrict requests made to domains that differ from the
    originally requested domain, you will need to ensure you have set up a
    CORS policy on your S3 bucket or CDN.

  * Sample CORS policy that will allow any site to load files:

    <CORSConfiguration>
      <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
      </CORSRule>
    </CORSConfiguration>

  * Please see https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
    for more information.

  * Links inside CSS/JS files will be rewritten to use either the base_url of
    the webserver or, optionally, a custom hostname.

    Links will be generated with https:// if use_https is enabled; otherwise
    links will use //servername/path notation to allow protocol-agnostic
    loading of content. If your server supports HTTPS, it is recommended to
    enable use_https.
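
    As a hedged sketch, use_https can also be forced from settings.php via
    Drupal's standard configuration override mechanism. This assumes the
    module's configuration object is named 's3fs.settings' and the key is
    'use_https'; confirm both names in the module's exported configuration
    before using it.

    Example (settings.php):
      $config['s3fs.settings']['use_https'] = TRUE;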

IMAGE STYLES
------------

  * S3FS serves image styles from Amazon through dynamic routes under
    /s3/files/styles/ to fix the issues around style-generated images being
    stored in S3 (read more at https://www.drupal.org/node/2861975).

  * If you are using Nginx as your webserver, it is necessary to add an
    additional block to your Nginx site configuration:

    location ~ ^/s3/files/styles/ {
            try_files $uri @rewrite;
    }

CACHE AWS CREDENTIALS
---------------------

  * Some authentication methods inside of the AWS ecosystem make calls to
    AWS servers in order to obtain credentials. Using an IAM role assigned
    to an instance is an example of such a method.

  * AWS does not charge for these API calls but may rate-limit the requests,
    leading to random errors when a request is rejected.

  * In order to avoid rate limits and increase performance it is recommended
    to enable the caching of S3 Credentials that rely on receiving tokens
    from AWS.

  * WARNING: Enabling caching will store a copy of the credentials in plain
    text on the filesystem.

    * Depending upon configuration the credentials may be short lived STS
      credentials or may be long-lived access_keys.

  * Enable Credential Caching:

    * Install doctrine/cache
      composer require "doctrine/cache:~1.4"

    * Configure a directory to store the cached credentials.

      * The directory can be entered into the 'Cached Credentials Folder'
        setting on /admin/config/media/s3fs.

      * This directory should be located outside the docroot of the server
        and should not be included in backups or replication.

      * The directory will be created if it does not exist.

      * Directories and files will be created with a umask of 0012 (rwxrw----).


UPGRADING FROM S3 FILE SYSTEM 7.x-2.x or 7.x-3.x
------------------------------------------------

  * Please read the 'S3FS INITIAL CONFIGURATION'
    and 'CONFIGURE DRUPAL TO STORE FILES IN S3' sections for how to
    configure settings that cannot be migrated.

  * The $conf settings have been changed and are no longer recommended.
    When not using the settings page it is recommended to use Drupal
    configuration management to import configuration overrides.

  * d7_s3fs_config can be used to import the majority of configurations from
    the previous versions. Please verify all settings after importing.

    The following settings cannot be migrated into configuration:
    - awssdk_access_key
    - awssdk_secret_key
    - s3fs_use_s3_for_public
    - s3fs_use_s3_for_private
    - s3fs_domain_s3_private
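
    As a hedged sketch, most of these map to the settings.php overrides
    documented earlier in this README (the D7 variable each line is assumed to
    replace is noted in a comment; s3fs_domain_s3_private has no settings.php
    equivalent shown here):

    Example (settings.php):
      $settings['s3fs.access_key'] = 'YOUR ACCESS KEY';   // replaces awssdk_access_key
      $settings['s3fs.secret_key'] = 'YOUR SECRET KEY';   // replaces awssdk_secret_key
      $settings['s3fs.use_s3_for_public'] = TRUE;         // replaces s3fs_use_s3_for_public
      $settings['s3fs.use_s3_for_private'] = TRUE;        // replaces s3fs_use_s3_for_private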

  * After configuring s3fs in D8/D9 perform a config validation and a metadata
    cache refresh to import the current list of files stored in the S3 bucket.

  * d7_s3fs_s3_migrate, d7_s3fs_public_migrate, and d7_s3fs_private_migrate
    can be used to import file entries from the s3://, public://, and
    private:// schemes when s3fs was used to manage files in D7.
    d7_s3fs_s3_migrate should be run in almost all migrations, while the
    public:// and private:// migrations should only be executed if s3fs
    takeover was enabled for them in D7.

    These migrations copy the managed file entries from D7 into D8 without
    copying the actual files, because the files are already stored in the S3
    bucket.

    When public:// or private:// files are stored in s3fs, the D8/9 core
    d7_file (public://) and/or d7_file_private (private://) migrations should
    not be executed, as the s3fs migration tasks will perform all required
    actions.

  * If your custom code uses functions or methods from the D7 module's
    .module or other files, you must find and use the equivalent function or
    method in the current version.

TROUBLESHOOTING
---------------

  * In the unlikely circumstance that the version of the SDK you downloaded
    causes errors with S3 File System, you can download this version instead,
    which is known to work:
    https://github.com/aws/aws-sdk-php/releases/download/3.22.7/aws.zip

  * IN CASE OF TROUBLE DETECTING THE AWS SDK LIBRARY:
    Ensure that the aws folder itself, and all the files within it, can be read
    by your webserver. Usually this means that the user "apache" (or "_www" on
    OSX) must have read permissions for the files, and read+execute permissions
    for all the folders in the path leading to the aws files.

KNOWN ISSUES
------------

  * Moving/renaming 'directories' is not supported. Objects must be moved
    individually.
    @see https://www.drupal.org/project/s3fs/issues/3200867

  * The max file size supported for writing is currently 5GB.
    @see https://www.drupal.org/project/s3fs/issues/3204634

  * The following problems were reported against Drupal 7; it is not known
    whether they still occur in Drupal 8 or later. If you have tried these
    options or know of new issues, please create a new issue at
    https://www.drupal.org/project/issues/s3fs?version=8.x

      * Some curl libraries, such as the one bundled with MAMP, do not come
        with authoritative certificate files. See the following page for
        details:
        http://dev.soup.io/post/56438473/If-youre-using-MAMP-and-doing-something

      * Because of a limitation regarding MySQL's maximum index length for
        InnoDB tables, the maximum URI length that S3FS supports is 255
        characters. The limit applies to the full path, including the s3://,
        public://, or private:// prefix, as it is part of the URI.

        This is the same limit as Drupal imposes on managed file URI lengths;
        however, some unmanaged files (image derivatives) could be impacted
        by this limit.

      * eAccelerator, a deprecated opcode cache plugin for PHP, is incompatible
        with AWS SDK for PHP. eAccelerator will corrupt the configuration
        settings for the SDK's s3 client object, causing a variety of different
        exceptions to be thrown. If your server uses eAccelerator, it is highly
        recommended that you replace it with a different opcode cache plugin,
        as its development was abandoned several years ago.


DEVELOPER TESTING
-----------------

PHPUnit tests exist for this project.  Some tests may require configuration
before they can be executed.

  * S3 Configuration

    The default configuration for S3 is to attempt to reach a LocalStack
    server started with EXTERNAL_HOSTNAME of 's3fslocalstack' using hostnames
    's3.s3fslocalstack' and 's3fs-test-bucket.s3.s3fslocalstack'. This can be
    overridden by editing the prepareConfig() section of
    src/Tests/S3fsTestBase.php or by setting the following environment
    variables prior to execution:

      * S3FS_AWS_NO_CUSTOM_HOST=true - Use default AWS servers.
      * S3FS_AWS_CUSTOM_HOST - Custom S3 host to connect to.
      * S3FS_AWS_KEY - AWS IAM user key.
      * S3FS_AWS_SECRET - AWS IAM secret.
      * S3FS_AWS_BUCKET - Name of the S3 bucket.
      * S3FS_AWS_REGION - Region of the bucket.


ACKNOWLEDGEMENT
---------------

  * Special recognition goes to justafish, author of the AmazonS3 module:
    http://drupal.org/project/amazons3

  * S3 File System started as a fork of her great module, but has evolved
    dramatically since then, becoming a very different beast. The main benefit
    of using S3 File System over AmazonS3 is performance, especially for image-
    related operations, due to the metadata cache that is central to S3 File
    System's operation.


MAINTAINERS
-----------

Current maintainers:

  * webankit (https://www.drupal.org/u/webankit)

  * coredumperror (https://www.drupal.org/u/coredumperror)

  * zach.bimson (https://www.drupal.org/u/zachbimson)

  * neerajskydiver (https://www.drupal.org/u/neerajskydiver)

  * Abhishek Anand (https://www.drupal.org/u/abhishek-anand)

  * jansete (https://www.drupal.org/u/jansete)

  * cmlara (https://www.drupal.org/u/cmlara)
