ObjectHashMismatchError when uploading backup to S3 Minio

Hi there! I'm having problems uploading a backup to S3 (MinIO).

medusa backup --backup-name=test_cassandra-2021-04-27 --mode=full
[2021-04-27 09:20:20,776] INFO: Monitoring provider is noop
[2021-04-27 09:20:21,161] WARNING: is ccm : 0
[2021-04-27 09:20:21,262] INFO: Saving tokenmap and schema
[2021-04-27 09:20:21,750] INFO: Node testnode1.unix.local does not have latest backup
[2021-04-27 09:20:21,751] INFO: Starting backup
[2021-04-27 09:20:21,751] INFO: Creating snapshot
[2021-04-27 09:20:24,383] INFO: Uploading /storage/cassandra/data/system_auth/role_permissions-3afbe79f219431a7add7f5ab90d8ec9c/snapshots/medusa-test_cassandra-2021-04-27/md-5-big-TOC.txt (92.000B)
[2021-04-27 09:24:44,810] ERROR: This error happened during the backup: <ObjectHashMismatchError in <medusa.libcloud.storage.drivers.s3_base_driver.S3BaseStorageDriver object at 0x7faa61219fd0>, value=MD5 hash efcbe5b62bba3e537bc827136f8a091b-1 checksum does not match 61bffd3142add6ac2429794cfe837ffa, object = testnode1.unix.local/test_cassandra-2021-04-27/data/system_auth/role_permissions-3afbe79f219431a7add7f5ab90d8ec9c/md-5-big-TOC.txt>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/medusa/backup_node.py", line 335, in backup_snapshots
    manifest_objects += storage.storage_driver.upload_blobs(src_batch, dst_path)
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_base_storage.py", line 96, in upload_blobs
    return medusa.storage.s3_compat_storage.concurrent.upload_blobs(
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 87, in upload_blobs
    return job.execute(list(src))
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 51, in execute
    return list(executor.map(self.with_storage, iterables))
  File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 619, in result_iterator
    yield fs.pop().result()
  File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 444, in result
    return self.__get_result()
  File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 60, in with_storage
    return self.func(self.storage, connection, iterable)
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 82, in <lambda>
    lambda storage, connection, src_file: __upload_file(
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 119, in __upload_file
    obj = _upload_single_part(connection, src, bucket, full_object_name)
  File "/usr/local/lib/python3.8/site-packages/retrying.py", line 49, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
  File "/usr/local/lib/python3.8/site-packages/retrying.py", line 212, in call
    raise attempt.get()
  File "/usr/local/lib/python3.8/site-packages/retrying.py", line 247, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/local/lib/python3.8/site-packages/six.py", line 703, in reraise
    raise value
  File "/usr/local/lib/python3.8/site-packages/retrying.py", line 200, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 126, in _upload_single_part
    obj = connection.upload_object(
  File "/usr/local/lib/python3.8/site-packages/libcloud/storage/drivers/s3.py", line 545, in upload_object
    return self._put_object(container=container, object_name=object_name,
  File "/usr/local/lib/python3.8/site-packages/libcloud/storage/drivers/s3.py", line 922, in _put_object
    raise ObjectHashMismatchError(
libcloud.storage.types.ObjectHashMismatchError: <ObjectHashMismatchError in <medusa.libcloud.storage.drivers.s3_base_driver.S3BaseStorageDriver object at 0x7faa61219fd0>, value=MD5 hash efcbe5b62bba3e537bc827136f8a091b-1 checksum does not match 61bffd3142add6ac2429794cfe837ffa, object = testnode1.unix.local/test_cassandra-2021-04-27/data/system_auth/role_permissions-3afbe79f219431a7add7f5ab90d8ec9c/md-5-big-TOC.txt>
[2021-04-27 09:24:47,651] ERROR: This error happened during the backup: <ObjectHashMismatchError in <medusa.libcloud.storage.drivers.s3_base_driver.S3BaseStorageDriver object at 0x7faa61219fd0>, value=MD5 hash efcbe5b62bba3e537bc827136f8a091b-1 checksum does not match 61bffd3142add6ac2429794cfe837ffa, object = u236i43vc-cas01.unix.local/test_cassandra-2021-04-27/data/system_auth/role_permissions-3afbe79f219431a7add7f5ab90d8ec9c/md-5-big-TOC.txt>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/medusa/backup_node.py", line 210, in main
    num_files, node_backup_cache = do_backup(
  File "/usr/local/lib/python3.8/site-packages/medusa/backup_node.py", line 261, in do_backup
    num_files = backup_snapshots(storage, manifest, node_backup, node_backup_cache, snapshot)
  File "/usr/local/lib/python3.8/site-packages/medusa/backup_node.py", line 347, in backup_snapshots
    raise e
  File "/usr/local/lib/python3.8/site-packages/medusa/backup_node.py", line 335, in backup_snapshots
    manifest_objects += storage.storage_driver.upload_blobs(src_batch, dst_path)
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_base_storage.py", line 96, in upload_blobs
    return medusa.storage.s3_compat_storage.concurrent.upload_blobs(
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 87, in upload_blobs
    return job.execute(list(src))
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 51, in execute
    return list(executor.map(self.with_storage, iterables))
  File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 619, in result_iterator
    yield fs.pop().result()
  File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 444, in result
    return self.__get_result()
  File "/usr/local/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 60, in with_storage
    return self.func(self.storage, connection, iterable)
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 82, in <lambda>
    lambda storage, connection, src_file: __upload_file(
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 119, in __upload_file
    obj = _upload_single_part(connection, src, bucket, full_object_name)
  File "/usr/local/lib/python3.8/site-packages/retrying.py", line 49, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
  File "/usr/local/lib/python3.8/site-packages/retrying.py", line 212, in call
    raise attempt.get()
  File "/usr/local/lib/python3.8/site-packages/retrying.py", line 247, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/local/lib/python3.8/site-packages/six.py", line 703, in reraise
    raise value
  File "/usr/local/lib/python3.8/site-packages/retrying.py", line 200, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/local/lib/python3.8/site-packages/medusa/storage/s3_compat_storage/concurrent.py", line 126, in _upload_single_part
    obj = connection.upload_object(
  File "/usr/local/lib/python3.8/site-packages/libcloud/storage/drivers/s3.py", line 545, in upload_object
    return self._put_object(container=container, object_name=object_name,
  File "/usr/local/lib/python3.8/site-packages/libcloud/storage/drivers/s3.py", line 922, in _put_object
    raise ObjectHashMismatchError(
libcloud.storage.types.ObjectHashMismatchError: <ObjectHashMismatchError in <medusa.libcloud.storage.drivers.s3_base_driver.S3BaseStorageDriver object at 0x7faa61219fd0>, value=MD5 hash efcbe5b62bba3e537bc827136f8a091b-1 checksum does not match 61bffd3142add6ac2429794cfe837ffa, object = testnode1.unix.local/test_cassandra-2021-04-27/data/system_auth/role_permissions-3afbe79f219431a7add7f5ab90d8ec9c/md-5-big-TOC.txt>

I tried changing multipart_chunksize in /usr/local/lib/python3.8/site-packages/medusa/storage/abstract_storage.py as suggested in https://github.com/thelastpickle/cassandra-medusa/issues/100, but it doesn't work; the errors are the same.
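
For context, the "-1" suffix on the reported hash is the S3 multipart-ETag format: for a multipart upload, S3 (and MinIO) return an ETag that is the MD5 of the concatenated binary MD5 digests of the parts, followed by "-<part count>", so it can never match the plain MD5 of the file that libcloud computes for its integrity check. A minimal Python sketch of the difference (the example data is hypothetical; this is not Medusa's code):

import hashlib

data = b"example file contents"

# Plain MD5 of the whole object: what libcloud computes locally
# and expects to see in the response's ETag header.
plain_md5 = hashlib.md5(data).hexdigest()

# S3-style multipart ETag: MD5 over the concatenated binary MD5
# digests of each part, plus "-<number of parts>".
part_digests = [hashlib.md5(data).digest()]  # a single-part multipart upload
multipart_etag = "{}-{}".format(
    hashlib.md5(b"".join(part_digests)).hexdigest(), len(part_digests)
)

# Even with a single part the two values differ, matching the
# "...-1 checksum does not match ..." error in the log above.
assert plain_md5 != multipart_etag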

Issue is synchronized with Jira task K8SSAND-195 by Unito.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

1 reaction
adejanovski commented, Apr 27, 2021

Yes, that's actually an identified and corrected bug. I've just triggered the release of 0.10.1, which will fix this issue. I'm not suggesting there's a problem in your MinIO install specifically, just that there could be some settings that aren't handled well by Libcloud and/or Medusa.

I’ll run some tests locally using 0.10.0 to see if I can reproduce this specific problem and will report back.
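
For anyone hitting the same error, picking up the fixed release on a pip-based install (the /usr/local/lib/python3.8/site-packages paths in the traceback suggest one) should be a one-line upgrade; the package name below assumes the standard PyPI distribution:

pip install --upgrade cassandra-medusa==0.10.1

You can confirm the installed version afterwards with pip show cassandra-medusa.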

0 reactions
adejanovski commented, Mar 25, 2022

I’ll close this issue. Feel free to post more info (if any) on this ticket.
