Blockers: why podman cannot replace docker-ce (yet). This note is a reminder to myself not to waste any more time trying to use podman.
```
[lester@rocky8 ~]$ podman run alpine
```

```
# Requirement: brew

## Install jenv
brew install jenv
jenv versions

## MacOS
brew install openjdk@23
brew install openjdk@11
ls -l /opt/homebrew/opt/openjdk*
```
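The note stops at installing the JDKs; a minimal sketch of then registering them with jenv and picking a default. The Homebrew paths assume Apple Silicon under `/opt/homebrew`, and the version alias `11` is an assumption:

```
# Register the Homebrew JDKs with jenv (paths assumed for Apple Silicon Homebrew)
jenv add /opt/homebrew/opt/openjdk@11/libexec/openjdk.jdk/Contents/Home
jenv add /opt/homebrew/opt/openjdk@23/libexec/openjdk.jdk/Contents/Home

# Pick a default JDK; "11" is the alias jenv typically creates for that JDK (assumption)
jenv global 11

# Confirm both versions are now known to jenv
jenv versions
```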
| Name | Type |
|---|---|
| aws_eip.eip1 | resource |
| aws_eip.eip2 | resource |
| aws_internet_gateway.igw | resource |
| aws_main_route_table_association.vpc_main_rt | resource |
| aws_nat_gateway.ngw1 | resource |
| aws_nat_gateway.ngw2 | resource |
| aws_route_table.private_rt1 | resource |
| aws_route_table.private_rt2 | resource |
| aws_route_table.public_rt1 | resource |
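As a hedged HCL sketch of how the NAT pieces in this table typically fit together; the subnet and VPC resource names below (`aws_subnet.public1`, `aws_vpc.vpc`) are assumptions, not the module's actual identifiers:

```
# Elastic IP consumed by the first NAT gateway
resource "aws_eip" "eip1" {
  domain = "vpc"
}

# NAT gateway placed in a public subnet (assumed resource name)
resource "aws_nat_gateway" "ngw1" {
  allocation_id = aws_eip.eip1.id
  subnet_id     = aws_subnet.public1.id
}

# Private route table sending outbound traffic through the NAT gateway
resource "aws_route_table" "private_rt1" {
  vpc_id = aws_vpc.vpc.id   # assumed VPC resource name

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.ngw1.id
  }
}
```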
| Name | Description |
|---|---|
| private_subnet_cidr1 | CIDR range of the first private subnet in the VPC |
| private_subnet_cidr2 | CIDR range of the second private subnet in the VPC |
| private_subnet_id1 | Subnet ID of the first private subnet in the VPC |
| private_subnet_id2 | Subnet ID of the second private subnet in the VPC |
| vpc_id | ID of the VPC |
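How a caller could consume these outputs, assuming the module is instantiated as `module "vpc"` (see the example after the inputs table below); the AMI variable is a placeholder:

```
# Launch a workload instance in the first private subnet exposed by the module
resource "aws_instance" "app" {
  ami           = var.ami_id                      # placeholder variable
  instance_type = "t3.micro"
  subnet_id     = module.vpc.private_subnet_id1
}
```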
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| customer | Name of the customer | string | n/a | yes |
| environment | Environment | string | n/a | yes |
| owner | Technical owner | string | n/a | yes |
| private_subnet_1 | CIDR block for the first private subnet | string | n/a | yes |
| private_subnet_2 | CIDR block for the second private subnet | string | n/a | yes |
| public_subnet_1 | CIDR block for the first public subnet | string | n/a | yes |
| public_subnet_2 | CIDR block for the second public subnet | string | n/a | yes |
| vpc_cidr | CIDR block for the VPC | string | n/a | yes |
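A hedged example of wiring up the required inputs; the module source path and all values are placeholders:

```
module "vpc" {
  source = "./modules/vpc"   # assumed path to this module

  customer         = "acme"
  environment      = "dev"
  owner            = "platform-team"
  vpc_cidr         = "10.0.0.0/16"
  public_subnet_1  = "10.0.1.0/24"
  public_subnet_2  = "10.0.2.0/24"
  private_subnet_1 = "10.0.11.0/24"
  private_subnet_2 = "10.0.12.0/24"
}
```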
```
[elastic-oss]
name=Elastic repository for 8.x oss-packages
baseurl=https://artifacts.elastic.co/packages/oss-8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
```
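Because `enabled=0`, the repo only takes part in a transaction when enabled explicitly. A sketch of importing the signing key and installing from it; the package name `elasticsearch-oss` is an assumption:

```
# Import the Elastic signing key referenced by gpgkey=
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# Enable the repo only for this transaction
dnf install --enablerepo=elastic-oss elasticsearch-oss
```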
```
## awx namespace
export NAMESPACE=awx
kubectl create ns $NAMESPACE

## secrets
cat <<EOF > awx-secrets.yml
---
apiVersion: v1
kind: Secret
```
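The heredoc above is cut off after `kind: Secret`; once the manifest is completed, loading and checking it in the namespace would look roughly like this:

```
kubectl apply -n $NAMESPACE -f awx-secrets.yml
kubectl get secrets -n $NAMESPACE
```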
| Hostname | Role | IP address |
|---|---|---|
| kubehost1 | master | 192.168.133.91 |
| kubehost2 | worker | 192.168.133.92 |
| kubehost3 | worker | 192.168.133.93 |
| buildatron | management/local | 192.168.133.128 |
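If these hostnames are not resolvable via DNS, the table maps directly to /etc/hosts entries on each node and on the management box (a convenience, not a required step):

```
192.168.133.91    kubehost1
192.168.133.92    kubehost2
192.168.133.93    kubehost3
192.168.133.128   buildatron
```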
```
Wednesday 04 November 2020 12:03:28 +0100 (0:00:00.067) 0:00:14.229 ****
redirecting (type: modules) ansible.builtin.keycloak_client to community.general.keycloak_client
Using module file /Users/workstation/.local/share/virtualenvs/ansible_project-6ES-zTZc/lib/python3.6/site-packages/ansible_collections/community/general/plugins/modules/keycloak_client.py
Pipelining is enabled.
<targetserver> ESTABLISH SSH CONNECTION FOR USER: root
<targetserver> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/Users/workstation/.ansible/cp/205f67cdb9 targetserver '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<targetserver> (0, b'\n{"proposed": {"publicClient": false, "protocol": "openid-connect", "description": "awesomeapp Desktop Application OpenID client", "directAccessGrantsEnabled": true, "adminUrl": "https
```
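A sketch of the kind of task behind this output, using `community.general.keycloak_client`; the Keycloak URL, realm, credentials and client_id are placeholders, while `public_client`, `protocol`, `description` and `direct_access_grants_enabled` mirror the camelCase fields visible in the proposed payload:

```
- name: Ensure the awesomeapp desktop OpenID client exists
  community.general.keycloak_client:
    auth_keycloak_url: https://keycloak.localdomain.local/auth   # placeholder URL
    auth_realm: master
    auth_username: "{{ keycloak_admin_user }}"
    auth_password: "{{ keycloak_admin_password }}"
    realm: awesomeapp                                            # placeholder realm
    client_id: awesomeapp-desktop                                # placeholder client id
    description: awesomeapp Desktop Application OpenID client
    protocol: openid-connect
    public_client: false
    direct_access_grants_enabled: true
    state: present
```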
```
concurrent = 10
check_interval = 0

[session_server]
session_timeout = 1800

[[runners]]
name = "Docker runner"
url = "https://gitlab.localdomain.local/"
token = "tOkeNh3r3-"
```
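A `[[runners]]` entry like this is normally generated by `gitlab-runner register`; a hedged non-interactive example, with the registration token, URL and Docker image as placeholders:

```
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.localdomain.local/" \
  --registration-token "$RUNNER_REGISTRATION_TOKEN" \
  --executor docker \
  --docker-image alpine:latest \
  --description "Docker runner"
```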