Packer Google Cloud authentication within a VPC

We are using Packer to build images from a GCP compute instance. Packer tries to fetch the source image by project and image name as follows:
https://www.googleapis.com/compute/v1/projects/<project-name>/global/images/<image-name>?alt=json
Then it throws an error:
oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: dial tcp 108.177.111.84:443: i/o timeout
Per our security policy, the compute instance has no external IP address and therefore no internet access, so accounts.google.com is unreachable. How can we authenticate against the Google APIs in this case?
I tried enabling firewall rules and adding routes for internet access, but per the requirement stated here, an instance without an external IP address still has no internet access.
This means we need some other way to authenticate against the Google APIs.
Does Packer support this?
Here is the Packer builder we have:
"builders": [
{
"type": "googlecompute",
"project_id": "test",
"machine_type": "n1-standard-4",
"source_image_family": "{{user `source_family`}}",
"source_image": "{{user `source_image`}}",
"source_image_project_id": "{{user `source_project_id`}}",
"region": "{{user `region`}}",
"zone": "{{user `zone`}}",
"network": "{{user `network`}}",
"subnetwork": "{{user `subnetwork`}}",
"image_name": "test-{{timestamp}}",
"disk_size": 10,
"disk_type": "pd-ssd",
"state_timeout": "5m",
"ssh_username": "build",
"ssh_timeout": "1000s",
"ssh_private_key_file": "./gcp-instance-key.pem",
"service_account_email": "test-account#test-mine.iam.gserviceaccount.com",
"omit_external_ip": true,
"use_internal_ip": true,
"metadata": {
"user": "build"
}
}

To do this manually, you will need an SSH tunnel through a reachable compute instance inside the project, or inside a VPC that is peered with the network of the instance you want to reach.
If you then use a CI system with a runner such as GitLab CI, be sure to create the runner inside the same VPC, or in a VPC with a peering.
If you don't want to create an instance with an external IP, you could instead open a VPN connection into the project and run the build through the VPN.
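For the SSH-tunnel approach, Packer's SSH communicator can hop through a bastion host. A minimal sketch of what the builder could look like, assuming a reachable bastion at the placeholder address 10.0.0.5 with a build user and key (all hypothetical values, not from the question):
"builders": [
  {
    "type": "googlecompute",
    "project_id": "test",
    "source_image_family": "{{user `source_family`}}",
    "zone": "{{user `zone`}}",
    "subnetwork": "{{user `subnetwork`}}",
    "omit_external_ip": true,
    "use_internal_ip": true,
    "ssh_username": "build",
    "ssh_private_key_file": "./gcp-instance-key.pem",
    "ssh_bastion_host": "10.0.0.5",
    "ssh_bastion_username": "build",
    "ssh_bastion_private_key_file": "./bastion-key.pem",
    "image_name": "test-{{timestamp}}"
  }
]
Note that this only covers the SSH connection into the build instance; the machine actually running Packer still needs a route to the Google APIs, for example through the VPN mentioned above.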

Related

Which operators of OpenShift modify AWS resources at runtime?

Our company is using AWS and we have deployed an OpenShift OKD cluster using openshift-installer and following the instructions on the page "Installing a private cluster on AWS". We have been using the cluster for a while and everything has been going well.
Recently, I needed to expose some Services using NodePorts in addition to the usual HTTP ports (80, 443), specifically from the range 30000-32767. I discovered that the installer had deployed a (private) Classic Load Balancer with only two listeners, for ports 80 and 443, which seemed sensible to me.
I manually added several more listeners for the NodePort ports, and they worked as expected.
cat <<'__ELB__' | xargs -0 aws elb create-load-balancer-listeners --cli-input-yaml
LoadBalancerName: 'abcdefghijklmnopqrstu012345678'
Listeners:
  - Protocol: 'TCP'
    LoadBalancerPort: 32348
    InstanceProtocol: 'TCP'
    InstancePort: 32348
  #- ... more listeners omitted
__ELB__
However, after a few days I noticed that the added listeners had been removed. Checking the CloudTrail history, it turned out that the listeners were deleted by one of the control plane nodes.
// Please note that all information was redacted and irrelevant properties were removed
{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "AROAQO2JUSTEXAMPLE:i-03b1248d0example",
    "arn": "arn:aws:sts::0123456789:assumed-role/ExampleOrg__OKD4--ControlPlane/i-03b1248d0example",
    "accountId": "0123456789",
    "accessKeyId": "ASIAABCDEEXAMPLEONLY",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "AROAQO2JUSTEXAMPLE",
        "arn": "arn:aws:iam::0123456789:role/ExampleOrg__OKD4--ControlPlane",
        "accountId": "0123456789",
        "userName": "ExampleOrg__OKD4--ControlPlane"
      },
      "ec2RoleDelivery": "2.0"
    }
  },
  "eventSource": "elasticloadbalancing.amazonaws.com",
  "eventName": "DeleteLoadBalancerListeners",
  "userAgent": "kubernetes/v1.23.3-2003+e419edff267ffa-dirty aws-sdk-go/1.38.49 (go1.17.5; linux; amd64)",
  "requestParameters": {
    "loadBalancerName": "abcdefghijklmnopqrstu012345678",
    "loadBalancerPorts": [
      32349,
      ...
    ]
  },
  "responseElements": null,
  "eventType": "AwsApiCall",
  "apiVersion": "2012-06-01",
  "managementEvent": true,
  "eventCategory": "Management"
}
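For reference, this is roughly how such events can be pulled from CloudTrail with the CLI (a sketch; only the event name is taken from the record above, everything else is generic):
# List recent deletions of Classic Load Balancer listeners
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=DeleteLoadBalancerListeners \
  --max-results 20 \
  --query 'Events[].{Time:EventTime,User:Username}'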
There were other actions from the control plane nodes that modified the load balancers as well.
I searched the logs of the operator Pods on the control plane nodes, as well as the Git repositories on GitHub, but I could not find any hint of where these AWS calls were being made.
I would really appreciate it if anyone could point me to:
Which operators/components in OKD/OpenShift update AWS resources, given that the cluster was installed using Mint mode and the cloud credential (aws-creds) has not been removed?
Is it possible that the control plane nodes themselves (EC2 instances with the IAM role) make those calls outside the OpenShift cluster (e.g. as other daemon processes)?
What would be the correct way to add these ports to the cluster?

How to share EFS among different ECS tasks and hosted in different instances

Currently, the tasks we have defined use bind mounts to share persistent EFS data among the containers of a single task; let's say taskA saves to /efs/cache/taskA.
We would like to know whether there is any way to share taskA's EFS data with taskB's containers in ECS, so that taskB can access taskA's data through a bind mount of its own.
Can we use bind mounts in ECS to achieve this, or is there an alternative? Thanks.
The taskB definition looks like:
containerDefinitions": [
"mountPoints": [
{
"readOnly": null,
"containerPath": "/efs/cache/taskA",
"sourceVolume": "efs_cache_taskA"
},
...],
"volumes": [
{
"fsxWindowsFileServerVolumeConfiguration": null,
"efsVolumeConfiguration": null,
"name": "efs_cache_taskA",
"host": {
"sourcePath": "/efs/cache/taskA"
},
"dockerVolumeConfiguration": null
},
...
}
You no longer need to mount EFS on the EC2 host and then use bind mounts. ECS now has a native integration with EFS (for both the EC2 and Fargate launch types) that lets you configure tasks to mount the same file system (or Access Point) without touching the EC2 host configuration at all (in fact it works with Fargate as well). See this blog post series for more info.
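A minimal sketch of what the task-level volume could look like with the native EFS integration (the file system ID and Access Point ID below are placeholders):
"volumes": [
  {
    "name": "efs_cache",
    "efsVolumeConfiguration": {
      "fileSystemId": "fs-0123456789abcdef0",
      "transitEncryption": "ENABLED",
      "authorizationConfig": {
        "accessPointId": "fsap-0123456789abcdef0",
        "iam": "ENABLED"
      }
    }
  }
]
Both taskA and taskB can reference the same file system (or Access Point) in their own task definitions and mount it via mountPoints, so they see the same data regardless of which instance, or Fargate task, they run on.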

How to run an AWS EMR cluster on multiple subnets?

Currently we are creating instances using a config.json file from EMR to configure the cluster. This file specifies a subnet ("Ec2SubnetId").
ALL of my EMR instances end up using this subnet...how do I let it use multiple subnets?
Here is the terraform template I am pushing to S3.
{
  "Applications": [
    {"Name": "Spark"},
    {"Name": "Hadoop"}
  ],
  "BootstrapActions": [
    {
      "Name": "Step1-stuff",
      "ScriptBootstrapAction": {
        "Path": "s3://${artifact_s3_bucket_name}/artifacts/${build_commit_id}/install-stuff.sh",
        "Args": ["${stuff_args}"]
      }
    },
    {
      "Name": "setup-cloudWatch-agent",
      "ScriptBootstrapAction": {
        "Path": "s3://${artifact_s3_bucket_name}/artifacts/${build_commit_id}/setup-cwagent-emr.sh",
        "Args": ["${build_commit_id}"]
      }
    }
  ],
  "Configurations": [
    {
      "Classification": "spark",
      "Properties": {
        "maximizeResourceAllocation": "true"
      }
    }
  ],
  "Instances": {
    "AdditionalMasterSecurityGroups": [ "${additional_master_security_group}" ],
    "AdditionalSlaveSecurityGroups": [ "${additional_slave_security_group}" ],
    "Ec2KeyName": "privatekey-${env}",
    "Ec2SubnetId": "${data_subnet}",
    "InstanceGroups": [
You cannot currently achieve what you are trying to do: an EMR cluster always ends up with all of its nodes in the same subnet.
Using instance fleets, you can configure a set of candidate subnets, but at launch time AWS will choose the best one and put all your instances there.
From the EMR Documentation, under "Use the Console to Configure Instance Fleets":
For Network, enter a value. If you choose a VPC for Network, choose a single EC2 Subnet or CTRL + click to choose multiple EC2 subnets. The subnets you select must be the same type (public or private). If you choose only one, your cluster launches in that subnet. If you choose a group, the subnet with the best fit is selected from the group when the cluster launches.
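Translated to the JSON configuration, that means switching from InstanceGroups to InstanceFleets and replacing the single Ec2SubnetId with the plural Ec2SubnetIds, roughly like this (a sketch; the subnet variables and instance type are placeholders, and EMR will still place all nodes in whichever one subnet it selects):
"Instances": {
  "Ec2KeyName": "privatekey-${env}",
  "Ec2SubnetIds": ["${data_subnet_a}", "${data_subnet_b}"],
  "InstanceFleets": [
    {
      "Name": "master",
      "InstanceFleetType": "MASTER",
      "TargetOnDemandCapacity": 1,
      "InstanceTypeConfigs": [
        { "InstanceType": "m5.xlarge" }
      ]
    }
  ]
}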

How to launch an AMI instance copied from another region?

Using Packer, I create an AMI in North Virginia (us-east-1). Below is the builder snippet for it.
"builders": [{
"type": "amazon-ebs",
"access_key": "XXXXXXXXXXXXXXXXXXXXXXX",
"secret_key": "XXXXXXXXXXXXXXXXXXXXXXX",
"region": "us-east-1",
"source_ami": "XXXXXXXXXXXXXXXXXXXXXXX",
"instance_type": "m4.2xlarge",
"ssh_username": "ubuntu",
"ami_users": [
"XXXXXXXXXXXX",
"YYYYYYYYYYYY"
],
"ami_name": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"launch_block_device_mappings": [{
"device_name": "/dev/sda1",
"volume_type": "gp2",
"delete_on_termination": true,
"volume_size": 30
}]
}]
I have no problems launching this AMI in us-east-1. But when I copy it to Mumbai (ap-south-1) and try to launch it, I get:
The instance configuration for this AWS Marketplace product is not supported. Please see the AWS Marketplace site for more information about supported instance types, regions, and operating systems.
Most of the settings are left at their defaults, so I am not sure what is causing this issue. Any pointers would be of great help.
Marketplace AMIs cannot be copied between accounts due to license restrictions.
Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
You can't copy an AMI with an associated billingProduct code that was
shared with you from another account. This includes Windows AMIs and
AMIs from the AWS Marketplace. To copy a shared AMI with a
billingProduct code, launch an EC2 instance in your account using the
shared AMI and then create an AMI from the instance. For more
information, see Creating an Amazon EBS-Backed Linux AMI.
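Following that documented workaround with the AWS CLI would look roughly like this (a sketch; the instance ID, AMI ID, names, and regions are placeholders):
# Create a new AMI from an instance that was launched from the shared AMI in us-east-1
aws ec2 create-image \
  --region us-east-1 \
  --instance-id i-0123456789abcdef0 \
  --name "my-rebaked-ami"

# Copy that new AMI to Mumbai
aws ec2 copy-image \
  --region ap-south-1 \
  --source-region us-east-1 \
  --source-image-id ami-0123456789abcdef0 \
  --name "my-rebaked-ami"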

AWS Cloudformation: Create Route to an Instance - CF can't find instance ID

I'm unable to create a route that points to the interface of an EC2 instance (NAT box in my public subnet). I used a DependsOn attribute in the Route resource, and I can see in the CF log that the instance is created before CF tries to create the Route. However, it errors out saying "The gateway ID 'i-xxxxxxxx' does not exist".
"RoutePrivate1": {
"DependsOn": "EC2InstanceNAT",
"Properties": {
"DestinationCidrBlock": "0.0.0.0/0",
"GatewayId": {
"Ref": "EC2InstanceNAT"
},
"RouteTableId": {
"Ref": "RouteTablePrivateSubnets"
}
},
"Type": "AWS::EC2::Route"
},
I can manually go into the route table and add that very gateway ID without issue. Could I be hitting a race condition, or am I doing something wrong?
Thanks for any help!
_KJH
The AWS::EC2::Route documentation says that GatewayId is used to reference an Internet Gateway (IGW). To point the route at a NAT instance, you should use InstanceId instead, as in the sketch below.
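A corrected version of the resource from the question would then look roughly like this (same logical names as above):
"RoutePrivate1": {
  "DependsOn": "EC2InstanceNAT",
  "Properties": {
    "DestinationCidrBlock": "0.0.0.0/0",
    "InstanceId": {
      "Ref": "EC2InstanceNAT"
    },
    "RouteTableId": {
      "Ref": "RouteTablePrivateSubnets"
    }
  },
  "Type": "AWS::EC2::Route"
}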