GitLab

@rexwhitten · Created January 15, 2026

Here is a complete, working example of a .gitlab-ci.yml file that meets your requirements.

I have chosen a Python base image for this example because it makes installing the AWS CLI via pip straightforward and fast for a test scenario (pip install awscli installs AWS CLI v1, which is sufficient for this connectivity check).

1. The Configuration (.gitlab-ci.yml)

Copy and paste this into your repository's CI/CD configuration file.

stages:
  - connectivity_test

aws_s3_list_test:
  stage: connectivity_test
  # Using a standard python image makes installing the CLI via pip easy
  image: python:3.11-slim

  before_script:
    # 1. Update pip and install the AWS CLI
    - pip install --upgrade pip
    - pip install awscli
    - aws --version  # Verify installation

  script:
    # 2. Run the AWS command using the env vars automatically
    - echo "Attempting to list S3 buckets..."
    - aws s3 ls

2. Required Setup: GitLab CI/CD Variables

For the script above to work, the job needs AWS credentials, but do not hardcode them in the YAML file. Set them as CI/CD variables in the GitLab UI instead.

  1. Go to your GitLab Project.
  2. Navigate to Settings > CI/CD.
  3. Expand the Variables section.
  4. Add the following three variables:
Key                      Example Value                               Note
AWS_ACCESS_KEY_ID        AKIAIOSFODNN7EXAMPLE                        Check "Mask variable" to hide it in job logs
AWS_SECRET_ACCESS_KEY    wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY    Check "Mask variable" to hide it in job logs
AWS_DEFAULT_REGION       us-east-1                                   Required so the CLI knows which region to target
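
If you would rather script this setup than click through the UI, the same variables can be created with GitLab's project variables API. The snippet below is a minimal sketch; the project ID (12345), the $GITLAB_API_TOKEN variable, and the gitlab.com host are placeholders to replace with your own values:

# Create one masked CI/CD variable via the GitLab API (repeat for each key)
curl --request POST \
     --header "PRIVATE-TOKEN: $GITLAB_API_TOKEN" \
     --form "key=AWS_ACCESS_KEY_ID" \
     --form "value=AKIAIOSFODNN7EXAMPLE" \
     --form "masked=true" \
     "https://gitlab.com/api/v4/projects/12345/variables"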

Note on Security: GitLab automatically injects these variables into the pipeline runner's environment. The AWS CLI is built to look for these specific variable names automatically, so you don't need to explicitly pass them in the script command (e.g., aws s3 ls just works).
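
If you want to confirm which IAM identity those injected credentials resolve to, a quick sanity check you could add to the job's script section (a suggested extra line, not part of the original example) is:

    # Prints the account, user ID, and ARN behind the injected credentials
    - aws sts get-caller-identity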


3. A "Better" Way (Optimization)

If this pipeline runs frequently, installing the CLI every time is inefficient. A more optimized approach uses the official AWS Docker image, which has the CLI pre-installed.

stages:
  - connectivity_test

aws_s3_list_test:
  stage: connectivity_test
  # Official AWS image - no installation steps needed
  image: amazon/aws-cli:latest
  
  # We need to override the entrypoint because this image 
  # defaults to running the 'aws' command immediately
  entrypoint: [""]

  script:
    - aws --version
    - aws s3 ls
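
If this job sticks around long term, consider pinning the image to a specific CLI version rather than latest so the pipeline stays reproducible. The exact tag below is only an example; check Docker Hub for the tags that actually exist:

  # Pin a specific CLI version instead of the moving 'latest' tag
  image: amazon/aws-cli:2.15.0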

Troubleshooting Tips

  • "Unable to locate credentials": Double-check that your variable names in GitLab Settings match exactly (AWS_ACCESS_KEY_ID, etc.).
  • "Access Denied": The IAM User associated with the keys does not have the s3:ListAllMyBuckets permission in AWS.
  • Protected Branches: By default, GitLab variables might be set to "Protected." If you are running this test on a feature branch (not main or master), uncheck the "Protected" flag on the variables in the settings.
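
For the first tip, a cheap way to confirm the variables are reaching the job without printing their values is to test that each one is non-empty. This is a suggested debug step, not part of the original pipeline:

  script:
    # Each line only reports whether the variable is set; the value itself is never echoed
    - test -n "$AWS_ACCESS_KEY_ID" && echo "AWS_ACCESS_KEY_ID is set" || echo "AWS_ACCESS_KEY_ID is MISSING"
    - test -n "$AWS_SECRET_ACCESS_KEY" && echo "AWS_SECRET_ACCESS_KEY is set" || echo "AWS_SECRET_ACCESS_KEY is MISSING"
    - test -n "$AWS_DEFAULT_REGION" && echo "AWS_DEFAULT_REGION is set" || echo "AWS_DEFAULT_REGION is MISSING"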

Would you like me to write a Terraform snippet or AWS IAM policy to generate the specific user permissions needed for this test?

{ "Version": "2012-10-17", "Statement": [ { "Sid": "ECRLogin", "Effect": "Allow", "Action": [ "ecr:GetAuthorizationToken" ], "Resource": "" }, { "Sid": "ECRPushImages", "Effect": "Allow", "Action": [ "ecr:CompleteLayerUpload", "ecr:UploadLayerPart", "ecr:InitiateLayerUpload", "ecr:BatchCheckLayerAvailability", "ecr:PutImage" ], "Resource": "arn:aws:ecr:YOUR_REGION:YOUR_ACCOUNT_ID:repository/YOUR_ECR_REPO_NAME" }, { "Sid": "S3PushArtifacts", "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::YOUR_TARGET_BUCKET_NAME/cmapp/dev/" }, { "Sid": "SSMUpdateParameters", "Effect": "Allow", "Action": [ "ssm:PutParameter", "ssm:GetParameter" ], "Resource": "arn:aws:ssm:YOUR_REGION:YOUR_ACCOUNT_ID:parameter/cmapp/dev/*" } ] }

@rexwhitten (Author) commented:

1. List all S3 Buckets

aws s3 ls

2. List SSM Parameters by Path Prefix

aws ssm get-parameters-by-path \
  --path "/your/path/prefix/" \
  --recursive \
  --with-decryption \
  --query "Parameters[].{Name:Name, Value:Value}" \
  --output table

3. List Subnets for a specific VPC ID

# Replace 'vpc-0123456789abcdef0' with your actual VPC ID
aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
  --query "Subnets[].{ID:SubnetId, CIDR:CidrBlock, AZ:AvailabilityZone, Name:Tags[?Key=='Name'].Value|[0]}" \
  --output table

@rexwhitten (Author) commented:

To ensure the pipeline (or the specific script) continues running even if a single command fails, you have two primary options depending on your goal.

Option 1: Command-Level Continuation (The "Try-Catch" approach)

If you want all three commands to run within the same job, even if the first one fails, append || true (or an || echo fallback with a custom error message) to each command.

By default, a GitLab job stops and is marked failed as soon as any script command returns a non-zero exit code, which also blocks later stages. Appending || true (or an || echo fallback) makes the combined command exit with success regardless of the actual result, so the job keeps going.
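
The mechanics here are plain shell exit-code handling, so you can see the effect outside of GitLab as well:

# 'false' exits with code 1, but the '||' fallback runs and succeeds,
# so the combined command exits 0 and a CI runner treats the line as passing.
false || echo "command failed, continuing anyway"
echo "exit code: $?"   # prints 0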

stages:
  - audit

aws_resources_audit:
  stage: audit
  image: amazon/aws-cli:latest
  # Override entrypoint so we can run shell commands
  entrypoint: [""]
  
  script:
    - echo "--- Starting AWS Resource Audit ---"

    # 1. List S3 Buckets
    # If this fails, it prints the error, then '||' catches it, echoes a message, and continues.
    - echo "Attempting to list S3 buckets..."
    - aws s3 ls || echo "WARNING: Failed to list S3 buckets. Check permissions."

    # 2. List SSM Parameters
    - echo "Attempting to list SSM Parameters..."
    - >
      aws ssm get-parameters-by-path 
      --path "/your/path/prefix/" 
      --recursive 
      --with-decryption 
      --query "Parameters[].{Name:Name, Value:Value}" 
      --output table 
      || echo "WARNING: Failed to list SSM parameters."

    # 3. List Subnets for a specific VPC
    # Replace vpc-ID with your variable or actual ID
    - echo "Attempting to list Subnets..."
    - >
      aws ec2 describe-subnets 
      --filters Name=vpc-id,Values=vpc-0123456789abcdef0 
      --query "Subnets[].{ID:SubnetId, CIDR:CidrBlock, Name:Tags[?Key=='Name'].Value|[0]}" 
      --output table 
      || echo "WARNING: Failed to list subnets."

    - echo "--- Audit Complete ---"

Option 2: Job-Level Continuation (The "Non-Blocking" approach)

If you prefer to separate these into different jobs (so you can easily see which one failed in the GitLab UI) but want the pipeline status to remain green/passing, use allow_failure: true.

stages:
  - audit

audit_s3:
  stage: audit
  image: amazon/aws-cli:latest
  entrypoint: [""]
  allow_failure: true  # Pipeline continues even if this job fails
  script:
    - aws s3 ls

audit_ssm:
  stage: audit
  image: amazon/aws-cli:latest
  entrypoint: [""]
  allow_failure: true
  script:
    - aws ssm get-parameters-by-path --path "/prefix/" --recursive

audit_vpc:
  stage: audit
  image: amazon/aws-cli:latest
  entrypoint: [""]
  allow_failure: true
  script:
    - aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-xxxx

Which one should I use?

  • Use Option 1 if you want a single log file and want to perform a "best effort" run where you just gather whatever info you can in one go.
  • Use Option 2 if you want clear visual reporting in GitLab (red/green icons for each task) without stopping the whole deployment.

@rexwhitten (Author) commented:

stages:
  - audit

# Define the logic once here
.aws_auth_template:
  stage: audit
  image: amazon/aws-cli:latest
  # Override the default entrypoint so shell commands can run (see section 3 above)
  entrypoint: [""]
  before_script:
    - echo "Assuming AWS Role..."
    # (Insert your assume role export command here)

# Just list the jobs simply
audit_s3:
  extends: .aws_auth_template
  script:
    - aws s3 ls

audit_vpc:
  extends: .aws_auth_template
  script:
    - aws ec2 describe-vpcs
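
For the "(Insert your assume role export command here)" placeholder, one common pattern is to call aws sts assume-role and export the temporary credentials it returns. The sketch below assumes a CI/CD variable named AWS_ROLE_ARN (hypothetical, not defined anywhere above) and a bash shell in the runner:

# Hypothetical sketch: exchange the base credentials for temporary role credentials.
# AWS_ROLE_ARN is assumed to be set as a CI/CD variable.
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$(aws sts assume-role \
  --role-arn "$AWS_ROLE_ARN" \
  --role-session-name gitlab-ci \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN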
