How to Give Amazon Aurora IAM Access to Amazon S3


S3 Access Using IAM Policies

Aurora can use Amazon S3 to save data from an Aurora DB cluster, or to load data into one. To allow this, you must first create an IAM policy that grants Aurora access to the relevant buckets and objects. The following table lists the Aurora features that can access an Amazon S3 bucket on your behalf, along with the minimum bucket and object permissions each feature requires:

Feature                  Bucket permissions       Object permissions
LOAD DATA FROM S3        s3:ListBucket            s3:GetObject
LOAD XML FROM S3         s3:ListBucket            s3:GetObject
SELECT INTO OUTFILE S3   s3:ListBucket,           s3:AbortMultipartUpload,
                         s3:GetBucketLocation     s3:DeleteObject, s3:GetObject,
                                                  s3:ListMultipartUploadParts,
                                                  s3:PutObject
The following policy grants the permissions Aurora needs to access an Amazon S3 bucket on your behalf.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAuroraToExampleBucket",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:ListBucket",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket/*",
                "arn:aws:s3:::example-bucket"
            ]
        }
    ]
}

Be sure to include both entries in the Resource value: Aurora needs access to the bucket itself and to all of the objects within it. Depending on your use case, you may not need all of the permissions in the sample policy, and further permissions may be necessary. For example, to access an encrypted Amazon S3 bucket, you also need the kms:Decrypt permission.
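For the encrypted-bucket case, the extra statement can be sketched as follows. This is illustrative only; the KMS key ARN below is a placeholder for the key that encrypts your bucket.

```python
import json

# Additional policy statement needed when the S3 bucket is encrypted
# with an AWS KMS key; the key ARN here is a placeholder.
kms_statement = {
    "Effect": "Allow",
    "Action": "kms:Decrypt",
    "Resource": "arn:aws:kms:us-west-2:123456789012:key/EXAMPLE-KEY-ID",
}
print(json.dumps(kms_statement, indent=2))
```

Append this statement to the Statement array of the policy above rather than creating a separate policy.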

By following these steps, you construct an IAM policy that grants Aurora the minimum permissions it needs to access an Amazon S3 bucket on your behalf. Alternatively, you can skip these steps and use the predefined IAM policy AmazonS3ReadOnlyAccess or AmazonS3FullAccess to give Aurora access to Amazon S3. Note that these predefined policies give Aurora access to all of the buckets in your Amazon S3 account.
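As a sketch, the minimum policy described above can also be built programmatically. This is an illustrative helper, not an official AWS utility; the bucket name is a placeholder, and you should trim the action list to match your use case.

```python
import json

def aurora_s3_policy(bucket: str) -> str:
    """Build a minimal IAM policy document granting Aurora access to one bucket.

    The action list mirrors the sample policy above; drop actions you
    don't need (e.g. s3:PutObject if you only load data).
    """
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowAuroraToExampleBucket",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:ListBucket",
                "s3:ListMultipartUploadParts",
            ],
            # Both entries are required: the bucket itself and its objects.
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }, indent=2)

print(aurora_s3_policy("example-bucket"))
```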

To create an IAM policy that grants access to your Amazon S3 resources:

  • Launch the IAM Management Console in your web browser.
  • Select Policies from the list of options in the navigation pane.
  • Select the option to create a policy.
  • On the Visual editor tab, select Choose a service, and then select S3 from the drop-down menu.
  • Select All Actions from the drop-down menu under Actions, and then pick the bucket permissions and object permissions needed for the IAM policy. Object permissions are permissions for object actions in Amazon S3; they must be granted for individual objects within a bucket rather than for the bucket itself.
  • Select Resources, then select Add ARN for a bucket from the drop-down menu.
  • After providing the necessary information about your resource in the Add ARN(s) dialogue box, you can then select Add.

Choose the Amazon S3 bucket to allow access to. For example, to give Aurora access to the Amazon S3 bucket named example-bucket, set the Amazon Resource Name (ARN) value to arn:aws:s3:::example-bucket.

  • If the object resource is listed, select Add ARN for object.
  • In the dialogue box labeled Add ARN(s), provide the relevant information on your resource.

When prompted, specify the Amazon S3 bucket to allow access to. You can select Any as the object to grant access to any object in the bucket, or you can restrict Aurora's access to a subset of the files and folders in the bucket by specifying a more granular Amazon Resource Name (ARN) value.

  • To add another Amazon S3 bucket to the policy, select Add ARN for bucket and repeat the preceding steps for that bucket. You can repeat this process to add a bucket permission statement for each Amazon S3 bucket that Aurora should be able to access, or grant access to all buckets and objects in Amazon S3.
  • Select Review policy.
  • In the Name field, give your IAM policy a name, for example AllowAuroraToExampleBucket. You use this name when you create an IAM role and associate it with your Aurora DB cluster. You can also add a value for the optional Description field.
  • Click the Create policy button.
  • Complete the steps outlined in Creating an IAM role to allow Amazon Aurora to access AWS services.
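The role-creation step above can be sketched with the boto3 SDK (assumed installed and configured). The role name and policy ARN are placeholders, and the function is only defined, not called, because both API calls require live AWS credentials.

```python
import json

# Trust policy letting Amazon RDS (which Aurora runs on) assume the role.
TRUST_POLICY = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "rds.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
})

def create_aurora_s3_role(iam_client, role_name, policy_arn):
    """Create the IAM role and attach the S3 policy created earlier.

    Not invoked here: requires an authenticated boto3 IAM client,
    e.g. iam_client = boto3.client("iam").
    """
    iam_client.create_role(RoleName=role_name,
                           AssumeRolePolicyDocument=TRUST_POLICY)
    iam_client.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)

print(TRUST_POLICY)
```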

Providing Access to an Amazon S3 Bucket


Before you are able to load data from an Amazon S3 bucket, you need to first grant permission for your Aurora MySQL DB cluster to access Amazon S3.

To grant Aurora MySQL access to Amazon S3:

  • Ensure that your Aurora MySQL DB cluster is able to connect to Amazon S3 by developing an AWS Identity and Access Management (IAM) policy that specifies the bucket and object permissions necessary for doing so.
  • You will need to create an IAM role and then attach the IAM policy that you developed to the newly created IAM role.
  • Make sure that the DB cluster uses a custom DB cluster parameter group.
  • For Aurora MySQL version 1 or 2, set either the aurora_load_from_s3_role or the aws_default_s3_role DB cluster parameter to the Amazon Resource Name (ARN) of the new IAM role. If no role is specified for aurora_load_from_s3_role, Aurora uses the role specified in aws_default_s3_role.

For Aurora MySQL version 3, use the aws_default_s3_role parameter.

If the cluster is part of an Aurora global database, set this parameter for every Aurora cluster in the global database. Although only the primary cluster in an Aurora global database can load data, a secondary cluster can be promoted to primary by the failover mechanism.

  • Before database users in an Aurora MySQL DB cluster can access Amazon S3, associate the role that you created in Creating an IAM role to allow Amazon Aurora to access AWS services with the DB cluster. For an Aurora global database, associate the role with each Aurora cluster in the global database.
  • Configure the Aurora MySQL DB cluster to allow outbound connections to Amazon S3.
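The parameter and role-association steps above can be sketched with boto3 (assumed available). The cluster, parameter group, and role identifiers are placeholders, and the function that calls AWS is defined but not invoked, since it requires live credentials.

```python
def s3_role_parameters(role_arn):
    """Parameter payload that points aws_default_s3_role at the new IAM role."""
    return [{
        "ParameterName": "aws_default_s3_role",
        "ParameterValue": role_arn,
        "ApplyMethod": "immediate",
    }]

def grant_cluster_s3_access(rds_client, cluster_id, param_group, role_arn):
    """Apply the DB cluster parameter, then associate the role with the cluster.

    Not invoked here: requires an authenticated boto3 RDS client,
    e.g. rds_client = boto3.client("rds").
    """
    rds_client.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName=param_group,
        Parameters=s3_role_parameters(role_arn),
    )
    rds_client.add_role_to_db_cluster(
        DBClusterIdentifier=cluster_id,
        RoleArn=role_arn,
    )

print(s3_role_parameters("arn:aws:iam::123456789012:role/ExampleAuroraRole"))
```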

Defining the Location of an Amazon S3 Bucket


A path to files in an Amazon S3 bucket can be specified using the following syntax:

    s3-region://bucket-name/file-name-or-prefix
The values in the path are as follows:


region (optional) – The AWS Region where the Amazon S3 bucket to load from is located. This value is optional; if you omit it, Aurora loads your file from Amazon S3 in the same Region as your DB cluster.


bucket-name – The name of the Amazon S3 bucket in which the data to be loaded resides. Prefixes that designate a virtual path are supported.


file-name-or-prefix – The name of the Amazon S3 text file or XML file, or a prefix that identifies one or more text or XML files to load. You can also specify a manifest file that identifies one or more text files to load; see Using a manifest file to specify data files to load for details.
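Putting the three components together, a small helper (hypothetical, for illustration only) shows how such a path is assembled:

```python
def s3_text_file_uri(bucket, file_or_prefix, region=None):
    """Build the s3[-region]://bucket/file path described above.

    region is optional; without it, Aurora loads the file from the
    same AWS Region as the DB cluster.
    """
    scheme = f"s3-{region}" if region else "s3"
    return f"{scheme}://{bucket}/{file_or_prefix}"

print(s3_text_file_uri("example-bucket", "data/orders.csv", region="us-west-2"))
```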


You can use the LOAD DATA FROM S3 statement to load data from any text file format that is supported by the MySQL LOAD DATA INFILE statement, such as comma-delimited text data. Compressed files are not supported.



LOAD DATA FROM S3 [FILE | PREFIX | MANIFEST] 'S3-URI'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [PARTITION (partition_name,...)]
    [CHARACTER SET charset_name]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number {LINES | ROWS}]
    [(col_name_or_user_var,...)]
    [SET col_name = expr,...]
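As a concrete instance of the syntax above, here is an illustrative statement built as a Python string so the pieces are easy to swap. The table, columns, and bucket path are placeholders, not values from any real account.

```python
# Illustrative LOAD DATA FROM S3 statement for a comma-delimited file
# with a header row; all identifiers are placeholders.
statement = (
    "LOAD DATA FROM S3 's3-us-west-2://example-bucket/data/orders.csv'\n"
    "    INTO TABLE orders\n"
    "    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'\n"
    "    LINES TERMINATED BY '\\n'\n"
    "    IGNORE 1 LINES\n"
    "    (order_id, customer_id, amount);"
)
print(statement)
```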

Dolan Cleary

I am a recent graduate from the University of Wisconsin - Stout and am now working with AllCode as a web technician.
