I'm trying to get my Node.js IoT example working, but I'm not sure what configuration I need to pass to the thingShadow constructor, awsIot.thingShadow(config).
This is the sample config I get from the AWS dashboard:
{
  "host": "foo.iot.us-east-1.amazonaws.com",
  "port": 8883,
  "clientId": "bar",
  "thingName": "bar",
  "caCert": "root-CA.crt",
  "clientCert": "bar-certificate.pem.crt",
  "privateKey": "bar-private.pem.key"
}
However, this is the constructor config I set based on the SDK README:
{
  keyPath: 'bar-private.pem.key',
  certPath: 'bar-certificate.pem.crt',
  caCert: 'root-CA.crt',
  clientId: 'bar'
}
I get the error:
events.js:141
throw er; // Unhandled 'error' event
^
Error: unable to get local issuer certificate
at Error (native)
at TLSSocket.<anonymous> (_tls_wrap.js:1017:38)
at emitNone (events.js:67:13)
at TLSSocket.emit (events.js:166:7)
at TLSSocket._init.ssl.onclienthello.ssl.oncertcb.TLSSocket._finishInit (_tls_wrap.js:582:8)
at TLSWrap.ssl.onclienthello.ssl.oncertcb.ssl.onnewsession.ssl.onhandshakedone (_tls_wrap.js:424:38)
What is caCert based on? Is that a cert I should have on my local path? If so, where do I get it from? Is it a download somewhere in the dashboard? And am I passing the right certificate file for privateKey?
So the issue was the root-CA.crt file. I had taken mine from the node_modules directory of the AWS library, and that copy was not valid.
I needed to get the crt file from
https://www.symantec.com/content/en/us/enterprise/verisign/roots/VeriSign-Class%203-Public-Primary-Certification-Authority-G5.pem
as noted in this doc: http://docs.aws.amazon.com/iot/latest/developerguide/iot-device-sdk-node.html
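For reference, here is a minimal sketch of what the constructor call can look like using the path-based option names from the SDK README (keyPath/certPath/caPath). This is not my exact code; the host and file names are the placeholders from above, so adjust them to your setup:

const awsIot = require('aws-iot-device-sdk');

// Minimal sketch: path-based credentials, per the SDK README.
const thingShadows = awsIot.thingShadow({
  keyPath: 'bar-private.pem.key',      // device private key
  certPath: 'bar-certificate.pem.crt', // device certificate
  caPath: 'root-CA.crt',               // the freshly downloaded root CA (link above)
  clientId: 'bar',
  host: 'foo.iot.us-east-1.amazonaws.com'
});

thingShadows.on('connect', () => {
  console.log('connected to AWS IoT');
});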
I am trying to set up self-hosted runners for GitHub using Terraform with the Phillips-Labs terraform-aws-github-runner module. I can see the GitHub webhook sending/receiving messages, the SQS queue receiving messages, and those messages being retrieved. The scale-up lambda is firing, and I see the following logs:
2023-01-31 11:50:15.879 INFO [scale-up:22b11002-76d2-5596-9451-4c51746730c2 index.js:119051 scaleUp] Received workflow_job from {my-org}/terraform-aws-github-self-hosted-runners
{}
2023-01-31 11:50:15.880 INFO [scale-up:22b11002-76d2-5596-9451-4c51746730c2 index.js:119084 scaleUp] Received event
{
"runnerType": "Org",
"runnerOwner": "my-org",
"event": "workflow_job",
"id": "11002102910"
}
2023-01-31 11:50:16.188 DEBUG [gh-auth:22b11002-76d2-5596-9451-4c51746730c2 index.js:118486 createAuth] GHES API URL: {"runnerType":"Org","runnerOwner":"my-org","event":"workflow_job","id":"11002102910"}
2023-01-31 11:50:16.193 WARN [scale-runners:22b11002-76d2-5596-9451-4c51746730c2 index.js:118529 Runtime.handler] Ignoring error: error:1E08010C:DECODER routines::unsupported
{
"runnerType": "Org",
"runnerOwner": "my-org",
"event": "workflow_job",
"id": "11002102910"
}
I do not see any EC2 instances being created. I suspect the GHES API URL: should have a value after it, but I'm not certain. Also, the final log says it is ignoring an error...
I have confirmed my private key PEM file is stored as a multi-line secret in Secrets Manager.
Any advice would be much appreciated!
It looks like not all of the permissions needed by the GitHub App are documented. I needed to add a subscription to the Workflow run event.
The scality/s3server image is running in a Docker container. I am connecting to it from Node.js using the @aws-sdk/client-s3 API.
The S3Client setup looks like this:
const s3Client = new S3Client({
  region: undefined, // See comment below
  endpoint: 'http://127.0.0.1:8000',
  credentials: {
    accessKeyId: 'accessKey1',
    secretAccessKey: 'verySecretKey1',
  },
})
Region undefined: this answer to a similar question suggests leaving the region out, but accessing the region with await s3Client.config.region() still displays eu-central-1, which was the value I passed to the constructor in a previous version. Although I changed it to undefined, it still picks up the old configuration. Could that be connected to the issue?
It was possible to successfully create a bucket (test) and it could be listed by running a ListBucketsCommand (await s3Client.send(new ListBucketsCommand({}))).
However, as mentioned in the title, uploading content or streams to the bucket with
const bucketParams = {
  Bucket: 'test',
  Key: 'test.txt',
  Body: 'Test Content',
}
await s3Client.send(new PutObjectCommand(bucketParams))
does not work; instead, I get a DNS resolution error (which seems odd, since I manually typed the IP address, not localhost).
Anyway, here is the error message:
Error: getaddrinfo EAI_AGAIN test.127.0.0.1
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:72:26) {
errno: -3001,
code: 'EAI_AGAIN',
syscall: 'getaddrinfo',
hostname: 'test.127.0.0.1',
'$metadata': { attempts: 1, totalRetryDelay: 0 }
}
Do you have any idea
why the region is still configured, and/or
why the DNS lookup happens and then fails, but only when uploading, not when listing or creating the buckets?
For the second question, I found a workaround:
Instead of specifying the IP address directly, using endpoint: http://localhost:8000 (so using the hostname instead of the IP address) fixes the DNS lookup exception. However, there is no obvious reason why this should happen.
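A possible explanation (an assumption on my part, not something from the Scality docs): the SDK defaults to virtual-hosted-style addressing, so it prepends the bucket name to the endpoint host, turning 127.0.0.1 into the unresolvable test.127.0.0.1, whereas test.localhost usually does resolve. If that is the cause, forcing path-style addressing should also make the raw IP work; a sketch:

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

// Same client as above, but with path-style addressing so the bucket name goes
// into the URL path instead of the hostname.
const s3Client = new S3Client({
  region: 'us-east-1', // placeholder; S3-compatible servers generally ignore it
  endpoint: 'http://127.0.0.1:8000',
  forcePathStyle: true, // requests go to http://127.0.0.1:8000/test/test.txt
  credentials: {
    accessKeyId: 'accessKey1',
    secretAccessKey: 'verySecretKey1',
  },
});

await s3Client.send(new PutObjectCommand({
  Bucket: 'test',
  Key: 'test.txt',
  Body: 'Test Content',
}));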
I'm attempting to connect a LoopBack app to a Google Cloud SQL database, and I've altered the datasources.json file to match the credentials. However, when I make a GET request in the LoopBack API explorer, I get an error. I have not found any docs on how to specify the SSL credentials in datasources.json, and I think this is causing the error.
I've fruitlessly attempted to change datasources.json, and below is its current state. I've changed details for privacy, but I'm 100% certain the credentials are correct, as I can make a successful connection with plain JavaScript.
{
  "nameOfModel": {
    "name": "db",
    "connector": "mysql",
    "host": "xx.xxx.x.xxx",
    "port": xxxx,
    "user": "user",
    "password": "password",
    "database": "sql_db",
    "ssl": true,
    "ca": "/server-ca.pem",
    "cert": "/client-cert.pem",
    "key": "/client-key.pem"
  }
}
This is the error the command line returns when I attempt a GET request in the LoopBack API explorer. The "Error: Timeout in connecting after 5000 ms" leads me to believe it's not reading the SSL credentials.
Unhandled error in GET /edd-sales?filter[offset]=0&filter[limit]=0&filter[skip]=0: 500 TypeError: Cannot read property 'name' of undefined
at EddDbDataSource.DataSource.queueInvocation.DataSource.ready (D:\WebstormProjects\EDD-Database\edd-api\node_modules\loopback-datasource-juggler\lib\datasource.js:2577:81)
(node:10176) UnhandledPromiseRejectionWarning: Error: Timeout in connecting after 5000 ms
at Timeout._onTimeout (D:\WebstormProjects\EDD-Database\edd-api\node_modules\loopback-datasource-juggler\lib\datasource.js:2572:10)
at ontimeout (timers.js:498:11)
(node:10176) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:10176) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Are you sure datasources.json allows you to specify "ssl" connections? Where did you get that information? I have checked the documentation and it doesn't show the "ssl" properties that you are using. Nor are they listed among the MySQL connector properties.
You have two options:
1. Create the connection without using SSL.
2. Create your own connector, or use an existing one with SSL options implemented. Keep in mind that this might cause issues with the LoopBack framework.
No matter which of the two options you decide to use, remember to whitelist your IP (the IP from which you are trying to access the database instance). You can do so in the Cloud Console, on the "Connections" tab, under "Public IP" > Authorized networks. If you don't do this, it can cause the timeout error that you are getting.
Try this. I am using LoopBack 3 and it works well for me. You need to create datasources.local.js in order to properly load the CA files.
datasources.local.js
const fs = require('fs');

module.exports = {
  nameOfModel: {
    name: 'db',
    connector: 'mysql',
    host: 'xx.xxx.x.xxx',
    port: 'xxxx',
    user: 'user',
    password: 'password',
    database: 'sql_db',
    ssl: {
      ca: fs.readFileSync(`${__dirname}/server-ca.pem`),
      cert: fs.readFileSync(`${__dirname}/client-cert.pem`),
      key: fs.readFileSync(`${__dirname}/client-key.pem`),
    },
  },
};
Notice that instead of using ssl: true, you need to use an object with those properties.
aws --version
aws-cli/1.16.76 Python/2.7.10 Darwin/16.7.0 botocore/1.12.66
I'm trying to programmatically add an APNS_SANDBOX channel to a Pinpoint app. I'm able to do this successfully via the Pinpoint console, but not with the AWS CLI or a Lambda function, which is the end goal. Changes to our Test/Prod environments can only be made via CodePipeline, but for testing purposes I'm trying to achieve this with the AWS CLI.
I've tried both the AWS CLI (using the root credentials) and a Lambda function; both result in the following error:
An error occurred (BadRequestException) when calling the UpdateApnsSandboxChannel operation: Missing credentials
I have tried setting the Certificate field in the UpdateApnsSandboxChannel JSON object both as the path to the .p12 certificate file and as a string value extracted with the openssl tool.
Today I worked with someone from AWS Support, and they were not able to figure out the issue after a couple of hours of debugging. They said they would send an email to the Pinpoint team, but they did not have an ETA on when that team might respond.
Thanks
I ended up getting this to work successfully. This is why it was failing:
I was originally making the CLI call with the following request object, as this is what is included in the documentation:
aws pinpoint update-apns-sandbox-channel --application-id [physicalID] --cli-input-json file:///path-to-requestObject.json
{
  "APNSSandboxChannelRequest": {
    "BundleId": "com.bundleId.value",
    "Certificate": "P12_FILE_PATH_OR_CERT_AS_STRING",
    "DefaultAuthenticationMethod": "CERTIFICATE",
    "Enabled": true,
    "PrivateKey": "PRIVATEKEY_FILE_PATH_OR_AS_STRING",
    "TeamId": "",
    "TokenKey": "",
    "TokenKeyId": ""
  },
  "ApplicationId": "Pinpoint_PhysicalId"
}
After playing around with it some more, I got it to work by removing BundleId, TeamId, TokenKey, and TokenKeyId. I believe these fields are only needed when using a .p8 key.
{
  "APNSSandboxChannelRequest": {
    "Certificate": "P12_FILE_PATH_OR_CERT_AS_STRING",
    "DefaultAuthenticationMethod": "CERTIFICATE",
    "Enabled": true,
    "PrivateKey": "PRIVATEKEY_FILE_PATH_OR_AS_STRING"
  },
  "ApplicationId": "Pinpoint_PhysicalId"
}
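Since the end goal is a Lambda function, here is a rough Node.js sketch of the same call with the AWS SDK for JavaScript (v2). It is untested and assumes the certificate and private key have already been exported from the .p12 as PEM files (cert.pem / key.pem are placeholder names):

const fs = require('fs');
const AWS = require('aws-sdk');

const pinpoint = new AWS.Pinpoint({ region: 'us-east-1' }); // adjust to your region

exports.handler = async () => {
  const params = {
    ApplicationId: 'Pinpoint_PhysicalId', // the Pinpoint application id
    APNSSandboxChannelRequest: {
      Certificate: fs.readFileSync('cert.pem', 'utf8'), // PEM export of the .p12 certificate
      PrivateKey: fs.readFileSync('key.pem', 'utf8'),   // PEM export of the .p12 private key
      DefaultAuthenticationMethod: 'CERTIFICATE',
      Enabled: true,
    },
  };
  return pinpoint.updateApnsSandboxChannel(params).promise();
};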
I've been having issues deploying a domain I recently transferred from GoDaddy to AWS.
Here are the Zappa settings:
{
  "staging": {
    "app_function": "__init__.app",
    "aws_region": "ap-southeast-2",
    "profile_name": "default",
    "s3_bucket": "zappa-flowersapp",
    "domain": "minnidesign.com",
    "certificate_arn": "arn:aws:acm:us-east-1:985294012425:certificate/a8740ef0-0d99-4355-ac99-210ead89b743"
  }
}
On running zappa certify the first time, I get this error:
params[name] = orig_value.split('/')[-1]
AttributeError: 'NoneType' object has no attribute 'split'
The second time I am getting this error:
raise error_class(parsed_response, operation_name)
BadRequestException: An error occurred (BadRequestException) when calling the CreateDomainName operation: The domain name you provided already exists.
I have no idea why this is happening; I have never had this kind of issue with Zappa. (When I go to minnidesign.com, there is a "server not found" error.)
Does anyone know a solution to this problem? Many thanks in advance!
I just had to create a hosted zone, update the NS records of the registered domain with the new hosted zone's NS records, and then create a new certificate, and it worked like a charm!
Zappa sometimes fails on certify.
You can open the API Gateway dashboard in the AWS Console and manually delete the custom domain entry before trying again:
AWS Console > API Gateway > Custom domain names > Delete
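If you prefer to script that cleanup instead of clicking through the console, here is a sketch with the AWS SDK for JavaScript (v2). The domain name comes from the Zappa settings above, and the region is assumed to be the one the custom domain was created in:

const AWS = require('aws-sdk');

const apigateway = new AWS.APIGateway({ region: 'ap-southeast-2' });

// Delete the leftover custom domain so `zappa certify` can recreate it.
apigateway.deleteDomainName({ domainName: 'minnidesign.com' }, (err) => {
  if (err) console.error(err);
  else console.log('custom domain deleted');
});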