Retrieving subfolder names in an S3 bucket from boto3


Using boto3, I can access my AWS S3 bucket:

s3 = boto3.resource("s3")
bucket = s3.Bucket("my-bucket-name")

Now, the bucket contains the folder first-level, which itself contains several sub-folders named with a timestamp, for instance 1456753904534. I need to know the names of these sub-folders for another job I'm doing, and I wonder whether boto3 could retrieve them for me.

So I tried:

objs = bucket.meta.client.list_objects(Bucket="my-bucket-name")

which gives a dictionary whose key "Contents" gives me all the third-level files instead of the second-level timestamp directories. In fact, I get a list containing entries such as

{u'ETag': '"etag"', u'Key': 'first-level/1456753904534/part-00014',
 u'LastModified': datetime.datetime(2016, 2, 29, 13, 52, 24, tzinfo=tzutc()),
 u'Owner': {u'DisplayName': 'owner', u'ID': 'id'},
 u'Size': size, u'StorageClass': 'storageclass'}

You can see that the specific files, in this case part-00014, are retrieved, while I'd like to get the name of the directory alone. In principle I could strip the directory name out of all the paths, but it's ugly and expensive to retrieve everything at the third level just to get the second level!

I also tried something reported here:

for o in bucket.objects.filter(Delimiter="/"):

but I do not get the folders at the desired level.

Is there a way to solve this?

Answer rating: 121

The piece of code below returns ONLY the "subfolders" in a "folder" of an S3 bucket.

import boto3

bucket = "my-bucket"
# Make sure the prefix ends with a slash
prefix = "prefix-name-with-slash/"

client = boto3.client("s3")
result = client.list_objects(Bucket=bucket, Prefix=prefix, Delimiter="/")
# CommonPrefixes is absent when there are no subfolders, so default to []
for o in result.get("CommonPrefixes", []):
    print("sub folder:", o.get("Prefix"))
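Note that list_objects returns at most 1,000 entries per call, so for folders with many immediate children a paginator is safer. A minimal sketch under that assumption; the bucket and prefix names are placeholders, and the client is passed in explicitly:

```python
def subfolders(client, bucket, prefix):
    """Collect every CommonPrefixes entry (one per immediate "subfolder")
    across all pages of a list_objects_v2 listing."""
    paginator = client.get_paginator("list_objects_v2")
    folders = []
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter="/"):
        for cp in page.get("CommonPrefixes", []):
            folders.append(cp["Prefix"])
    return folders

# Usage (assumes AWS credentials are configured):
# import boto3
# client = boto3.client("s3")
# print(subfolders(client, "my-bucket", "first-level/"))
```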


Answer rating: 93

S3 is an object store; it doesn't have a real directory structure. The "/" is purely cosmetic. One reason people want a directory structure is that they can maintain/prune/add a tree in the application. For S3, you treat such a structure as a sort of index or search tag.

To manipulate objects in S3, you need boto3.client or boto3.resource. For example, to list all objects:

import boto3

s3 = boto3.client("s3")
all_objects = s3.list_objects(Bucket="bucket-name")

In fact, if the S3 object names are stored using a "/" separator, the more recent version of list_objects (list_objects_v2) allows you to limit the response to keys that begin with a specified prefix.

To limit the items to items under certain sub-folders:

    import boto3

    s3 = boto3.client("s3")
    response = s3.list_objects_v2(
        Bucket="bucket-name",
        Prefix="DIR1/DIR2",
        MaxKeys=100,
    )
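Even list_objects_v2 returns at most 1,000 keys per call; when the response has IsTruncated set, the remainder can be fetched with ContinuationToken. A minimal sketch of that loop (the bucket and prefix names are placeholders, and the client is passed in explicitly):

```python
def list_keys(client, bucket, prefix):
    """Page through list_objects_v2 manually via ContinuationToken
    and return every key under the given prefix."""
    keys = []
    kwargs = {"Bucket": bucket, "Prefix": prefix}
    while True:
        resp = client.list_objects_v2(**kwargs)
        keys.extend(obj["Key"] for obj in resp.get("Contents", []))
        if not resp.get("IsTruncated"):
            break
        kwargs["ContinuationToken"] = resp["NextContinuationToken"]
    return keys

# Usage (assumes AWS credentials are configured):
# import boto3
# print(list_keys(boto3.client("s3"), "my-bucket", "DIR1/DIR2"))
```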


Another option is using Python's os.path functions to extract the folder prefix. The problem is that this requires listing objects from undesired directories.

import os
s3_key = "first-level/1456753904534/part-00014"
filename = os.path.basename(s3_key) 
foldername = os.path.dirname(s3_key)

# if you are using an unconventional delimiter such as "#"
s3_key = "first-level#1456753904534#part-00014"
filename = s3_key.split("#")[-1]
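If you already have a full listing, the unique second-level folder names can be derived in plain Python with a set comprehension. A small sketch; the keys here are illustrative, based on the example in the question:

```python
keys = [
    "first-level/1456753904534/part-00014",
    "first-level/1456753904534/part-00015",
    "first-level/1456753999999/part-00000",
]

# keep the path component between the first and second "/",
# deduplicating with a set
folder_names = sorted({k.split("/")[1] for k in keys if k.count("/") >= 2})
print(folder_names)  # ['1456753904534', '1456753999999']
```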

A reminder about boto3: boto3.resource is a nice high-level API. There are pros and cons to using boto3.client vs boto3.resource. If you develop an internal shared library, using boto3.resource gives you a black-box layer over the resources used.
