AWS Error retrieving credentials from the instance profile metadata server

Can someone please give me some insight into this error? Credentials are set via an IAM policy. This box is part of an Auto Scaling group, and it is the only one that got the following error:
Error retrieving credentials from the instance profile metadata server. When you are not running inside of Amazon EC2, you must provide your AWS access key ID and secret access key in the "key" and "secret" options when creating a client or provide an instantiated Aws\Common\Credentials\CredentialsInterface object.
Logs:
CRITICAL
Phalconry\Mvc\Exceptions\ServerException
Extra
{
"remoteip": "XX.XX.XX.XX, XX.XX.XX.XX",
"userid": "1357416",
"session": "fcke8khsqe4lfo2lj6kdmrd4l7",
"url": "GET:\/manage-competition\/athlete",
"request_identifier": "xxxxxx5c80516bc11532.74367732",
"server": "companydomain.com",
"client_agent": "Mozilla\/5.0 (Linux; Android 9; SM-G965U) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/72.0.3626.121 Mobile Safari\/537.36",
"instance_ip_address": "xx.xx.xx.xx",
"process_id": 29528,
"file": "\/var\/www\/code_deploy\/cfweb\/releases\/20190306195438\/core\/classes\/phalconry\/mvc\/exceptions\/MvcException.php",
"line": 51,
"class": "Phalconry\\Mvc\\Exceptions\\MvcException",
"function": "dispatch"
}
Context
{
"Status Code": 500,
"Reason": "Internal Server Error",
"Details": "Array\n(\n [code] => server_error\n [description] => Uncaught Server Error\n [details] => Request could not be processed. Please contact Support.\n)\n",
"Log": "Error retrieving credentials from the instance profile metadata server. When you are not running inside of Amazon EC2, you must provide your AWS access key ID and secret access key in the \"key\" and \"secret\" options when creating a client or provide an instantiated Aws\\Common\\Credentials\\CredentialsInterface object. (Unable to parse response body into JSON: 4)",
"Trace": "#0 [internal function]: Phalconry\\Mvc\\MvcApplication::Phalconry\\Mvc\\{closure}(Object(Phalcon\\Events\\Event), Object(Phalcon\\Mvc\\Dispatcher), Object(Aws\\Common\\Exception\\InstanceProfileCredentialsException))\n#1 [internal function]: Phalcon\\Events\\Manager->fireQueue(Array, Object(Phalcon\\Events\\Event))\n#2 [internal function]: Phalcon\\Events\\Manager->fire('dispatch:before...', Object(Phalcon\\Mvc\\Dispatcher), Object(Aws\\Common\\Exception\\InstanceProfileCredentialsException))\n#3 [internal function]: Phalcon\\Mvc\\Dispatcher->_handleException(Object(Aws\\Common\\Exception\\InstanceProfileCredentialsException))\n#4 [internal function]: Phalcon\\Dispatcher->dispatch()\n#5 \/var\/www\/code_deploy\/cfweb\/releases\/20190306195438\/sites\/games\/lib\/phalcon.php(101): Phalcon\\Mvc\\Application->handle()\n#6 \/var\/www\/code_deploy\/cfweb\/releases\/20190306195438\/sites\/games\/index.php(4): require_once('\/var\/www\/code_d...')\n#7 {main}"
}

The metadata for instance-profile credentials is served under
http://169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance
If that is failing, it might be an issue with the hypervisor your server has been spun up on. This endpoint will give you the last time the credentials were refreshed:
http://169.254.169.254/latest/meta-data/identity-credentials/ec2/info
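For a quick check from the instance itself, you can query the metadata service directly (a sketch; the iam/security-credentials path is the one the AWS SDK reads for instance-profile credentials, and YOUR_ROLE_NAME is a placeholder):

# Is the metadata service reachable, and when were credentials last refreshed?
curl -s http://169.254.169.254/latest/meta-data/identity-credentials/ec2/info

# Which IAM role, if any, is attached, and are credentials being served for it?
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/YOUR_ROLE_NAME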
If other servers with identical AMIs and availability zones aren't having the issue, I would log a support ticket, terminate the instance, and move on.

These credentials come from the instance metadata URL (http://169.254.169.254).
The problem you are likely having (especially since you are running in an ASG) is that when you create an AMI and then launch it in another AZ, the route to the metadata URL is not updated. What you need to do is force cloud-init to run on the next boot.
The simple way to do this is to clear out the cloud-init per-instance state:
sudo rm -f /var/lib/cloud/instances/*/sem/config_scripts_user
After you run that command, shut down the machine and create an AMI from it. If you use that AMI for your ASG, cloud-init will do a full run on first boot, which will update the route to the instance metadata URL, and your IAM credentials should work.
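Roughly, the full sequence looks like this (a sketch; the instance ID and AMI name are placeholders, and newer cloud-init versions can achieve a fuller reset with sudo cloud-init clean):

# On the instance you are about to image: clear the per-instance cloud-init marker
sudo rm -f /var/lib/cloud/instances/*/sem/config_scripts_user

# Stop the machine
sudo shutdown -h now

# From a machine with the AWS CLI configured, create the AMI from the stopped instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name asg-base-image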

Related

Phillips-Labs terraform-aws-github-runner not creating ec2 instance

I am trying to set up self-hosted runners for GitHub using Terraform with the Phillips-Labs terraform-aws-github-runner module. I see the GH webhook sending/receiving messages, the SQS queue receiving messages, and those messages being retrieved. The scale-up Lambda is firing, and I see the following logs:
2023-01-31 11:50:15.879 INFO [scale-up:22b11002-76d2-5596-9451-4c51746730c2 index.js:119051 scaleUp] Received workflow_job from {my-org}/terraform-aws-github-self-hosted-runners
{}
2023-01-31 11:50:15.880 INFO [scale-up:22b11002-76d2-5596-9451-4c51746730c2 index.js:119084 scaleUp] Received event
{
"runnerType": "Org",
"runnerOwner": "my-org",
"event": "workflow_job",
"id": "11002102910"
}
2023-01-31 11:50:16.188 DEBUG [gh-auth:22b11002-76d2-5596-9451-4c51746730c2 index.js:118486 createAuth] GHES API URL: {"runnerType":"Org","runnerOwner":"my-org","event":"workflow_job","id":"11002102910"}
2023-01-31 11:50:16.193 WARN [scale-runners:22b11002-76d2-5596-9451-4c51746730c2 index.js:118529 Runtime.handler] Ignoring error: error:1E08010C:DECODER routines::unsupported
{
"runnerType": "Org",
"runnerOwner": "my-org",
"event": "workflow_job",
"id": "11002102910"
}
I do not see any EC2 instances being created. I suspect GHES API URL: should have a value after it, but I'm not certain. Also, the final log line says it is ignoring an error...
I have confirmed my private key PEM file is stored as a multi-line secret in Secrets Manager.
Any advice would be much appreciated!
It looks like not all of the permissions needed by the GitHub App are documented. I needed to add a subscription to the Workflow run event.

"Host header is specified and is not an IP address or localhost" message when using chromedp headless-shell

I'm trying to deploy chromedp/headless-shell to Cloud Run.
Here is my Dockerfile:
FROM chromedp/headless-shell
ENTRYPOINT [ "/headless-shell/headless-shell", "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222", "--disable-gpu", "--headless", "--no-sandbox" ]
The command I used to deploy to Cloud Run is:
gcloud run deploy chromedp-headless-shell --source . --port 9222
Problem
When I go to the /json/list path, I expect to see something like this:
[{
"description": "",
"devtoolsFrontendUrl": "/devtools/inspector.html?ws=localhost:9222/devtools/page/B06F36A73E5F33A515E87C6AE4E2284E",
"id": "B06F36A73E5F33A515E87C6AE4E2284E",
"title": "about:blank",
"type": "page",
"url": "about:blank",
"webSocketDebuggerUrl": "ws://localhost:9222/devtools/page/B06F36A73E5F33A515E87C6AE4E2284E"
}]
but instead, I get this error:
Host header is specified and is not an IP address or localhost.
Is there something wrong with my configuration or is Cloud Run not the ideal choice for deploying this?
This specific issue is not unique to Cloud Run. It originates from a change in the Chrome DevTools Protocol that rejects remote access with this error, apparently as a security measure against certain types of attacks (such as DNS rebinding). You can see the related Chromium pull request here.
I deployed a chromedp/headless-shell container to Cloud Run using your configuration and received the same error. There is a useful comment in a GitHub issue showing a workaround for this problem: passing a Host: localhost header. While this does work when I tested it locally, it does not work on Cloud Run (it returns a 404 error). That 404 is likely because Cloud Run itself uses the Host header to route requests to the correct service.
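For illustration, this is roughly what the workaround looks like (a sketch; the Cloud Run hostname is a placeholder):

# Locally, overriding the Host header satisfies the DevTools check
curl -H "Host: localhost" http://127.0.0.1:9222/json/list

# On Cloud Run the same override collides with the platform's own Host-based routing,
# so the request comes back as a 404 instead of reaching the container
curl -H "Host: localhost" https://chromedp-headless-shell-xxxxx-uc.a.run.app/json/list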
Unfortunately this answer is not a solution, but it sheds some light on what you are seeing and why. I would go with a different GCP service, such as GCE, which gives you plain virtual machines and is less managed.

How to connect Loopback app to google database with ssl enabled?

I'm attempting to connect a LoopBack app to a Google Cloud SQL database, and I've altered the datasources.json file to match the credentials. However, when I make a GET request in the LoopBack API explorer I get an error. I have not found any docs on how to specify the SSL credentials in datasources.json, and I think this is causing the error.
I've fruitlessly attempted to change datasources.json; below is its current state. I've changed details for privacy, but I'm 100% certain the credentials are correct, as I can make a successful connection with plain JavaScript.
{
"nameOfModel": {
"name": "db",
"connector": "mysql",
"host": "xx.xxx.x.xxx",
"port": xxxx,
"user": "user",
"password": "password",
"database": "sql_db",
"ssl": true,
"ca" : "/server-ca.pem",
"cert" : "/client-cert.pem",
"key" : "/client-key.pem"
}
}
This is the error the command line returns when I attempt a GET request in the LoopBack API explorer. The "Error: Timeout in connecting after 5000 ms" leads me to believe it's not reading the SSL credentials.
Unhandled error in GET /edd-sales?filter[offset]=0&filter[limit]=0&filter[skip]=0: 500 TypeError: Cannot read property 'name' of undefined
at EddDbDataSource.DataSource.queueInvocation.DataSource.ready (D:\WebstormProjects\EDD-Database\edd-api\node_modules\loopback-datasource-juggler\lib\datasource.js:2577:81)
(node:10176) UnhandledPromiseRejectionWarning: Error: Timeout in connecting after 5000 ms
at Timeout._onTimeout (D:\WebstormProjects\EDD-Database\edd-api\node_modules\loopback-datasource-juggler\lib\datasource.js:2572:10)
at ontimeout (timers.js:498:11)
(node:10176) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:10176) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Are you sure datasources.json allows you to specify "ssl" connections? Where did you get that information? I have checked the documentation and it doesn't show the "ssl" properties you are using; neither do the MySQL connector properties.
You have two options:
1. Create the connection without using SSL.
2. Create your own connector, or use an existing one with SSL options implemented. Keep in mind that this might cause issues with the LoopBack framework.
No matter which of the two options you decide to use, remember to whitelist your IP (the IP from which you are trying to access the database instance). You can do so in the Cloud Console, on the "Connections" tab, under "Public IP" Authorized networks. If you don't do this, it can cause the timeout error you are getting.
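The same whitelisting can also be done from the command line (a sketch, assuming the gcloud CLI is configured; the instance name and CIDR are placeholders, and note that this replaces the existing authorized-networks list):

# Authorize your client IP on the Cloud SQL instance
gcloud sql instances patch my-sql-instance --authorized-networks=203.0.113.5/32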
Try this. I am using LoopBack 3 and it works well for me. You need to create datasources.local.js in order to properly load the CA files.
datasources.local.js
const fs = require('fs');
module.exports = {
nameOfModel: {
name: 'db',
connector: 'mysql',
host: 'xx.xxx.x.xxx',
port: 'xxxx',
user: 'user',
password: 'password',
database: 'sql_db',
ssl: {
ca: fs.readFileSync(`${__dirname}/server-ca.pem`),
cert: fs.readFileSync(`${__dirname}/client-cert.pem`),
key: fs.readFileSync(`${__dirname}/client-key.pem`),
},
}
}
Notice that instead of using ssl: true, you need to use an object with those properties.

aws pinpoint update-apns-sandbox-channel command results in: missing credentials

aws --version
aws-cli/1.16.76 Python/2.7.10 Darwin/16.7.0 botocore/1.12.66
I'm trying to programmatically add an APNS_SANDBOX channel to a Pinpoint app. I'm able to do this successfully via the Pinpoint console, but not with the AWS CLI or a Lambda function, which is the end goal. Changes to our Test/Prod environments can only be made via CodePipeline, but for testing purposes I'm trying to achieve this with the AWS CLI.
I've tried both the AWS CLI (using the root credentials) and a Lambda function; both result in the following error:
An error occurred (BadRequestException) when calling the UpdateApnsSandboxChannel operation: Missing credentials
I have tried setting the Certificate field in the UpdateApnsSandboxChannel JSON object both as the path to the .p12 certificate file and as a string value retrieved from the openssl tool.
Today I worked with someone from AWS support, and they were not able to figure out the issue after trying to debug for a couple of hours. They said they would send an email to the Pinpoint team, but they did not have an ETA on when they might respond.
Thanks
I ended up getting this to work successfully. This is why it was failing:
I was originally making the CLI call with the following request object, as this is what is included in the documentation:
aws pinpoint update-apns-sandbox-channel --application-id [physicalID] --cli-input-json file:///path-to-requestObject.json
{
"APNSSandboxChannelRequest": {
"BundleId": "com.bundleId.value",
"Certificate":"P12_FILE_PATH_OR_CERT_AS_STRING",
"DefaultAuthenticationMethod": "CERTIFICATE",
"Enabled": true,
"PrivateKey":"PRIVATEKEY_FILE_PATH_OR_AS_STRING",
"TeamId": "",
"TokenKey": "",
"TokenKeyId": ""
},
"ApplicationId": "Pinpoint_PhysicalId"
}
After playing around with it some more, I got it to work by removing BundleId, TeamId, TokenKey, and TokenKeyId. I believe those fields are only needed when using a .p8 token key instead of a certificate.
{
"APNSSandboxChannelRequest": {
"Certificate":"P12_FILE_PATH_OR_CERT_AS_STRING",
"DefaultAuthenticationMethod": "CERTIFICATE",
"Enabled": true,
"PrivateKey":"PRIVATEKEY_FILE_PATH_OR_AS_STRING"
},
"ApplicationId": "Pinpoint_PhysicalId"
}
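If you want to pass the certificate and private key as strings rather than file paths, one way to produce them is to extract PEM text from the .p12 bundle with openssl, as mentioned in the question (a sketch; file names are placeholders and you will be prompted for the export password):

# Client certificate in PEM form (its contents should go in the Certificate field)
openssl pkcs12 -in apns_sandbox.p12 -clcerts -nokeys -out apns-cert.pem

# Unencrypted private key in PEM form (its contents should go in the PrivateKey field)
openssl pkcs12 -in apns_sandbox.p12 -nocerts -nodes -out apns-key.pem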

Python Requests Post request fails when connecting to a Kerberized Hadoop cluster with Livy

I'm trying to connect to a Kerberized Hadoop cluster via Livy to execute Spark code. The requests call I'm making is below.
kerberos_auth = HTTPKerberosAuth(mutual_authentication=REQUIRED, force_preemptive=True)
r = requests.post(host + '/sessions', data=json.dumps(data), headers=headers, auth=kerberos_auth)
This call fails with the following error:
GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
Any help here would be appreciated.
When running Hadoop service daemons in secure mode, Kerberos tickets are decrypted with a keytab, and the service uses the keytab to determine the credentials of the user coming into the cluster. Without a keytab in place with the right service principal inside it, you will get this error message. Please refer to the Hadoop in Secure Mode documentation for further details on setting up the keytab.
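On the client side, the same GSSException also appears when there is simply no valid Kerberos ticket in the credential cache at the moment requests-kerberos tries to authenticate. A minimal check, assuming a principal and keytab exist for your user (the names below are placeholders):

# Obtain a ticket-granting ticket, either interactively or from a keytab
kinit myuser@EXAMPLE.COM
# or: kinit -kt /etc/security/keytabs/myuser.keytab myuser@EXAMPLE.COM

# Confirm a valid ticket is present before running the requests.post call
klist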