Secret Manager GCP - dockerfile

I am using the Secret Manager service of GCP and am using its client libraries to access secrets.
My prod and dev credentials are different.
To use these client libraries, the path to the service account file needs to be exported as the GOOGLE_APPLICATION_CREDENTIALS variable. This is required because GCP checks whether the exported service account has the required permissions to access secrets.
As of now, I keep these service account files in the codebase and export the variable in the Makefile and Dockerfile before the server starts.
Now I want to fetch the service account file itself from Secret Manager as well, and that can't be done with the client libraries, since they need the credentials before they can make any call.
I went through the Secret Manager best practices, and providing secrets via the filesystem or environment variables is not recommended there because those channels are vulnerable to attacks.
The only option I can think of is to fetch this service account secret with the Secret Manager REST API (curl) in the Dockerfile, but that would be complex, as I would also have to decode the secret payload (the API returns it base64-encoded) using bash.
curl "https://secretmanager.googleapis.com/v1/projects/project-id/secrets/secret-id/versions/version-id":access --request "GET" --header "authorization: Bearer $(gcloud auth print-access-token)" --header "content-type: application/json"
Is there any other best practice recommended for handling this situation?

Related

How to recover GCP project service account

I mistakenly deleted the service account for my GCP project itself, rather than the service account for the Google Calendar API and Dialogflow.
I'm now having issues trying to deploy my dialogflow agent through the inline code editor to Cloud Functions.
When I check the logs, I get this message:
2020-07-30 15:48:40.350 WAT
Dialogflow API
CreateCloudFunction
us-central1
bashorun.emma@gmail.com
userFacingMessage:
Default service account 'northern-timer-231210@appspot.gserviceaccount.com' doesn't exist.
Please recreate this account (for example by disabling and enabling the Cloud Functions API),
or specify a different account.;
com.google.cloud.eventprocessing.manager.api.error.DefaultServiceAccountDoesNotExistException: userFacingMessage:
Default service account 'northern-timer-231210@appspot.gserviceaccount.com' doesn't exist. Please recreate this account (for example by disabling and enabling the Cloud Functions API), or specify a different account.; Code: FAILED_PRECONDITION com.google.apps.framework.request.StatusException: <eye3 title='FAILED_PRECONDITION'/> generic::FAILED_PRECONDITION: userFacingMessage:
Default service account 'northern-timer-231210@appspot.gserviceaccount.com' doesn't exist.
Please recreate this account (for example by disabling and enabling the Cloud Functions API), or specify a different account.; com.google.cloud.eventprocessing.manager.api.error.DefaultServiceAccountDoesNotExistException: userFacingMessage:
Default service account 'northern-timer-231210@appspot.gserviceaccount.com' doesn't exist. Please recreate this account (for example by disabling and enabling the Cloud Functions API), or specify a different account.; Code: FAILED_PRECONDITION
Is it possible to retrieve back the service account or am I getting these errors as a result of a different problem?
After a service account is deleted, you can recover it within 30 days of its deletion.
To do it, you can run the following command from cloud shell:
gcloud beta iam service-accounts undelete ACCOUNT_ID
The account ID can be taken from Stackdriver Logging with the following filter:
resource.type="service_account"
resource.labels.email_id="service-account-name"
"DeleteServiceAccount"
Hope this helps to recover your service account.
Recover App Engine or any deleted service account
You can undelete service accounts. You will need the service account's unique ID. If you don't have it, you can find it on Google Cloud Logging.
You can find the Logging service in the side menu of the Cloud Console.
Then filter by date and by the service account resource type to find the exact moment the account was deleted.
Then you can use either of the following options.
Option 1: Use Google Cloud Command Line
You can run the command line by installing it on your computer (https://cloud.google.com/sdk/docs/install), or you can run it online using the Cloud Shell offered by Google Cloud Platform.
The command you want to run is the following.
gcloud beta iam service-accounts undelete 12345678901234567890
Option 2: Use Google Cloud API
Using curl, call the API with the following command.
You will need to change API_KEY, PROJECT_ID and SERVICE_ACCOUNT_UID for real values.
curl -X POST \
-H "Authorization: Bearer API_KEY \
-H "Content-Type: application/json; charset=utf-8" \
-d "" \
"https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts/SERVICE_ACCOUNT_UID:undelete"
You can get the API_KEY (an OAuth access token) from the Google Cloud command line:
gcloud auth application-default print-access-token
Again, you can either have gcloud installed on your local machine or use it online in the Cloud Shell.

Using aws credentials on remote server without storing them on the remote server?

Is it possible to use aws credentials on remote server without explicitly copying them?
For example, I can use my local SSH key on a server with ssh-add && ssh -A <server_name>. Is there something like this for the AWS CLI, without copying ~/.aws/credentials and ~/.aws/config?
I want to use these aws credentials just to download some files from S3.
In order to SSH to a remote server, your public key must already be present on the remote server, and your SSH client uses the private key to authenticate. Therefore, your assumption that no credentials are needed on the remote server is incorrect.
EC2 supports retrieving credentials from the instance metadata service. You could create an IAM role s3access, attach it to the instance, and the instance will assume that role. You can even retrieve those credentials using curl. Example:
TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"`
curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
Example output:
{
"Code" : "Success",
"LastUpdated" : "2012-04-26T16:39:16Z",
"Type" : "AWS-HMAC",
"AccessKeyId" : "ASIAIOSFODNN7EXAMPLE",
"SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"Token" : "token",
"Expiration" : "2017-05-17T15:09:54Z"
}
Refer to this link for more information on metadata credentials.
You can also setup the CLI to automatically use metadata credentials. Refer to this link for more information.
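With an instance profile attached, the CLI picks those credentials up on its own, so a download needs no credentials file at all (a sketch, assuming the s3access role is attached to the instance; bucket and key are placeholders):
# Sketch: on an instance with the s3access role attached, no ~/.aws/credentials is needed
aws s3 cp s3://awsexamplebucket/test2.txt /tmp/test2.txt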
If your goal is to have no credentials on the EC2 instance, then you will need to use Presigned URLs. Refer to this link for more information.
If all you want to do is download some files from S3, presigned URLs may be an easier and safer option. AWS allows you to generate URLs for any AWS API action that are only usable for a certain period of time. You can generate those URLs for your specific files, send them to your server, and have the server use them to download the files.
For example:
aws s3 presign s3://awsexamplebucket/test2.txt --expires-in 604800
All of the SDKs, such as boto3 and the aws-sdk, also support generating presigned URLs.
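On the server itself, the generated URL can then be fetched with plain curl and no AWS credentials (a sketch; the signed URL below is just a placeholder for the output of the presign command):
# Sketch: download the object with the time-limited presigned URL (no AWS credentials on the server)
curl -o test2.txt "https://awsexamplebucket.s3.amazonaws.com/test2.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=604800&X-Amz-Signature=..."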
Another option is generating temporary credentials. AWS lets you create credentials that are only valid for a certain period of time. It also lets you limit their scope, so you can, for example, request credentials that only allow downloading from a specific bucket. Using STS you get a new access key, secret key, and session token; send those to your server and let your server use them to do what it needs to do.
If you want the token to have exactly the same credentials as the calling role, use:
aws sts get-session-token
Otherwise you will need to create a role with the appropriate permissions and use:
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/xaccounts3access --role-session-name s3-access-example
Just like with presigned URLs, these APIs are available in every SDK and not just on the command line.
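As a concrete flow (a sketch only; the jq extraction and the bucket name are assumptions, and in practice you would generate the credentials locally and copy only the three exported values to the server):
# Sketch: export the temporary credentials from the assume-role response, then download
CREDS=$(aws sts assume-role --role-arn arn:aws:iam::123456789012:role/xaccounts3access \
  --role-session-name s3-access-example --query 'Credentials' --output json)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.SessionToken')
aws s3 cp s3://awsexamplebucket/test2.txt .   # now runs with the temporary credentials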

How to Authenticate Amazon Elasticsearch Service with AWS Access key ID & Secret access key?

I'm using Amazon Elasticsearch Service. I need to authenticate to it with the access_key_id and secret_access_key. I saw that an aws-sdk is available for this service, but I need to access the URL from the terminal, something similar to this:
curl -H "Authorization: ApiKey VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw==" http://localhost:9200/_cluster/health
Is it possible?
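What I have in mind is something where curl signs the request itself, roughly like this (I have not verified it; it would need a curl build with SigV4 support, 7.75 or newer, and the endpoint and region here are placeholders):
# Sketch: let curl sign the request with SigV4 (curl 7.75+; endpoint and region are placeholders)
curl --aws-sigv4 "aws:amz:us-east-1:es" \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  "https://my-es-domain.us-east-1.es.amazonaws.com/_cluster/health"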

Permission to invoke CloudRun apparently not granted to GKE (pods)

I want to be able to invoke a Cloud Run endpoint from one of my GKE pods.
When I describe the VMs/instances that make up my GKE cluster, I see:
serviceAccounts:
- email: 873099409230-compute@developer.gserviceaccount.com
So I added the Cloud Run Invoker role to the above service account.
I have deployed the Cloud Run service with authentication required.
However, when I exec into one of my pods and try to curl the endpoint, I get a 403 (which I also get from my laptop, but the latter is expected).
Any suggestions?
curl doesn't know about Google Cloud security. I mean that curl doesn't know how to add the security token to your request; you have to add the token to the request header explicitly.
From my computer I use this, because it's my personal account that is configured in the gcloud SDK:
curl -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.id_token)')" <URL>
With a service account configured in gcloud, you can use this command:
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" <URL>
In both cases you have to add the Authorization header to your request.
In your code, if you use the Google client libraries, you can rely on the default credentials and your default compute service account will be used. curl can't do that by itself!
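From inside a GKE pod, where gcloud is usually not installed, you can instead ask the metadata server for an identity token for the node's service account (a sketch; it assumes the pod runs as the default compute service account and that the audience is your Cloud Run URL):
# Sketch: mint an identity token from the metadata server (audience = your Cloud Run URL), then call the service
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=<URL>")
curl -H "Authorization: Bearer $TOKEN" <URL>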

HTTP POST to AWS IoT

I want to connect an HTTP device to AWS IoT Core.
I have tried this with the curl command and all goes well.
Now I want to try to use POST with Signature Version 4.
I'm using Postman to send a POST request, but I got this output:
"message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.",
In the authorization fields I have chosen "AWS Signature" and completed all of them: access and secret key, AWS region, and service name = iotdata.
I want to get the same results as when I use the curl command:
curl --tlsv1.2 --cacert YY.pem --cert XX.pem.crt --key ZZ.pem.key -X POST -d "{ \"Trama\": \"message\"}" "https://PPPPPP.iot.eu-west-1.amazonaws.com:8443/topics/topicname?qos=1"
The problem here is that your two requests are a little different, because of the various ways you can send data to AWS IoT.
In the curl command you're actually using the X.509 certificate approach (see https://docs.aws.amazon.com/iot/latest/developerguide/managing-device-certs.html for further information). This doesn't need the request to be signed; it's already trusted because the certificate is.
This approach is mostly unique to AWS IoT, because the aim is that the data comes from lots of devices, and you wouldn't want to give them all an IAM role. In fact, certificates are the recommended way to send data from a device.
You can use these certificates with Postman if you want, by adding them to the request under the Certificates tab (you only need the .crt and .key files). See https://www.getpostman.com/docs/v6/postman/sending_api_requests/certificates for more detailed instructions.
You can still use AWS v4 signatures (https://docs.aws.amazon.com/iot/latest/developerguide/iam-users-groups-roles.html), so the suggestion is that you're not forming the request properly.
Looking at this documentation (https://docs.aws.amazon.com/iot/latest/apireference/API_iotdata_Publish.html) you should be using:
Method: POST
Uri: <AWS IoT Endpoint>/<url_encoded_topic_name>?qos=1 (e.g. https://a1pn10j0v8htvw.iot.us-east-1.amazonaws.com:8443/topics/iotbutton/virtualButton?qos=1)
Authorisation Type: AWS Signature
AccessKey / SecretKey: As per your credentials
AWS Region: Region your AWS IoT endpoint is in
Service Name: iotdata
Session Token: Leave blank