public function S3fsService::writeFolders in S3 File System 4.0.x
Same name and namespace in other branches
- 8.3 src/S3fsService.php \Drupal\s3fs\S3fsService::writeFolders()
Write the folders list to the database.
Parameters
array $folders: The complete list of folders.
Throws
\Exception
Overrides S3fsServiceInterface::writeFolders
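For illustration, a minimal sketch of a call. The folder URIs and the 's3fs' service ID below are assumptions for the example; in practice this method is invoked internally by refreshCache() rather than called directly.
$folders = [
  's3://images' => TRUE,
  's3://images/2024' => TRUE,
];
// Hypothetical call; the service ID 's3fs' is an assumption here.
\Drupal::service('s3fs')->writeFolders($folders);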
1 call to S3fsService::writeFolders()
- S3fsService::refreshCache in src/S3fsService.php - Refreshes the metadata cache.
File
- src/S3fsService.php, line 439
Class
- S3fsService
- Defines an S3fsService service.
Namespace
Drupal\s3fs
Code
public function writeFolders(array $folders) {
  // Now that the $folders array contains all the ancestors of every file in
  // the cache, as well as the existing folders from before the refresh,
  // write those folders to the DB.
  if ($folders) {
    // Split the data into manageable chunks for the database.
    $chunks = array_chunk($folders, 10000, TRUE);
    foreach ($chunks as $chunk) {
      $insert_query = \Drupal::database()
        ->insert('s3fs_file_temp')
        ->fields([
          'uri',
          'filesize',
          'timestamp',
          'dir',
          'version',
        ]);
      // Only the folder URI (the array key) matters here; the value ($ph)
      // is an unused placeholder.
      foreach ($chunk as $folder_uri => $ph) {
        $metadata = $this->convertMetadata($folder_uri, []);
        $insert_query->values($metadata);
      }
      // @todo Integrity constraint violation.
      // If this throws an integrity constraint violation, then the user's
      // S3 bucket has objects that represent folders using a different
      // scheme than the one we account for above. The best solution I can
      // think of is to convert any "files" in s3fs_file_temp which match
      // an entry in the $folders array (which would have been added in
      // _s3fs_write_metadata()) to directories.
      $insert_query->execute();
    }
  }
}
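The 10,000-row chunking presumably keeps each multi-row INSERT within database limits such as placeholder counts and packet size; that rationale is an inference, not stated in the docs. For each folder URI, convertMetadata() is expected to return a row matching the five fields declared above. A hedged sketch of that shape, where the exact values are assumptions:
// Assumed row shape for a folder entry; only the field names are
// confirmed by the insert query above.
$metadata = [
  'uri' => 's3://images/2024',
  'filesize' => 0,
  'timestamp' => time(),
  'dir' => 1,
  'version' => '',
];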