…or one day just to add one line of code

s3fs.S3FileSystem.cachable = False

Adding caching under the hood and not mentioning it in the documentation – that is called a dirty trick.
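For context, this is roughly where that one line goes in a Lambda – at module level, before the first S3FileSystem is created, so every call builds a fresh instance instead of reusing one kept alive by a warm container (a minimal sketch; the handler, bucket and key names are made up):

import s3fs

# Disable fsspec's instance caching: every S3FileSystem() call now
# builds a fresh instance (with a fresh directory-listing cache)
# instead of reusing one from a previous warm invocation.
s3fs.S3FileSystem.cachable = False


def handler(event, context):
    # New instance per invocation -> no stale listings.
    s3 = s3fs.S3FileSystem(anon=False)
    with s3.open("my-bucket/incoming/data.csv", "rb") as f:  # hypothetical key
        return {"bytes_read": len(f.read())}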

My case was a Lambda pipeline processing S3 files. When a file arrives on S3, the first Lambda processes it and triggers the next Lambda. The next Lambda works fine only the first time.

The first Lambda uses only boto3, and there is no problem.

The second Lambda uses s3fs.

The second invocation of the Lambda reuses the already initialized execution context, so s3fs thinks it knows which objects are on S3 – but it is wrong!
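To make the failure mode concrete, here is roughly what that looks like (a sketch with made-up names; the point is that a module-level instance, and its cached directory listing, survive across warm invocations):

import s3fs

# Created once, at cold start. On warm invocations this module-level
# code does NOT run again, so the same instance - and its cached
# directory listing - is reused.
s3 = s3fs.S3FileSystem(anon=False)


def handler(event, context):
    # Cold invocation: the listing is fetched from S3 and cached.
    # Warm invocations: s3fs answers from the stale cache, so a file
    # the first Lambda just uploaded can appear to be missing.
    if not s3.exists("my-bucket/incoming/new-file.csv"):  # hypothetical key
        raise FileNotFoundError("s3fs says the file is not there (stale cache)")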

So… I found this issue – thank you, jalpes196!

Another way is to invalidate the cache…

from s3fs.core import S3FileSystem

# Drop any S3FileSystem instances that fsspec cached from a previous
# (warm) invocation...
S3FileSystem.clear_instance_cache()
s3 = S3FileSystem(anon=False)
# ...and drop the directory-listing cache of this instance as well.
s3.invalidate_cache()
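Note that the two calls do different things: clear_instance_cache() throws away the S3FileSystem instances that fsspec caches, while invalidate_cache() clears the directory-listing cache of the instance you hold. In a Lambda you would run both at the start of every invocation, not at module level (a sketch; the handler and bucket names are hypothetical):

from s3fs.core import S3FileSystem


def handler(event, context):
    # Start every invocation from a clean slate: no reused instances...
    S3FileSystem.clear_instance_cache()
    s3 = S3FileSystem(anon=False)
    # ...and no reused directory listings.
    s3.invalidate_cache()
    return s3.ls("my-bucket/incoming")  # hypothetical bucket; now a fresh listing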