I'm getting started with AWS Lambda and created a basic Java handler like the one below.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import java.util.Map;
// Handler value: example.Handler
public class Handler implements RequestHandler<Map<String, String>, String> {
    Gson gson = new GsonBuilder().setPrettyPrinting().create();

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        LambdaLogger logger = context.getLogger();
        String response = "200 OK";
        // log execution details
        logger.log("ENVIRONMENT VARIABLES: " + gson.toJson(System.getenv()));
        logger.log("CONTEXT: " + gson.toJson(context));
        // process event
        logger.log("EVENT: " + gson.toJson(event));
        logger.log("EVENT TYPE: " + event.getClass());
        return response;
    }
}
When I test the Lambda function via the AWS console, it works fine. But when I trigger it through the function URL like below, I get a 502 Bad Gateway error.
curl --location --request POST 'https://xyz.lambda-url.region.on.aws?message=HelloWorld' \
--header 'Content-Type: application/json' \
--data-raw '{
"some-key": "some-value"
}'
An error occurred during JSON parsing: java.lang.RuntimeException
java.lang.RuntimeException: An error occurred during JSON parsing
Caused by: java.io.UncheckedIOException: com.amazonaws.lambda.thirdparty.com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of java.lang.String out of START_OBJECT token
at [Source: (ByteArrayInputStream); line: 1, column: 84] (through reference chain: java.util.LinkedHashMap["headers"])
at com.amazonaws.services.lambda.runtime.serialization.factories.JacksonFactory$InternalSerializer.fromJson(JacksonFactory.java:184)
Caused by: com.amazonaws.lambda.thirdparty.com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of java.lang.String out of START_OBJECT token
at [Source: (ByteArrayInputStream); line: 1, column: 84] (through reference chain: java.util.LinkedHashMap["headers"])
I've set the Auth type of the function URL to NONE as well. Is there anything I'm missing here? Thanks in advance.
According to the AWS docs, even though you specify AuthType as NONE, your function's resource-based policy is always in effect and must grant public access before your function URL can receive requests.
Re-verify that your function's resource-based policy actually grants public access and that the AuthType is set to NONE.
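For reference, the resource-based policy statement that grants public access to a function URL looks roughly like this (a sketch: the Sid, region, account ID, and function name below are placeholders, not values from the post; the console adds an equivalent statement when you select NONE and acknowledge public access):

```python
import json

# Hypothetical resource-based policy statement for a public function URL.
statement = {
    "Sid": "FunctionURLAllowPublicAccess",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "lambda:InvokeFunctionUrl",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
    "Condition": {"StringEquals": {"lambda:FunctionUrlAuthType": "NONE"}},
}
print(json.dumps(statement, indent=2))
```

If this statement is missing from the function's policy, requests to the URL fail even with AuthType NONE.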
I have a GET-based API Gateway set up pointing to a Lambda with Lambda proxy integration enabled.
The API uses AWS IAM as the auth method.
Locally, I have AWS auth set up with a temporary session token.
The following works without issue:
curl -s -X GET "https://<ID>.execute-api.us-west-2.amazonaws.com/dev" \
--header "x-amz-security-token: ${SESSION_TOKEN}" \
--user $ACCESS_KEY:$SECRET_KEY \
--aws-sigv4 "aws:amz:us-west-2:execute-api" | jq .
But when I add query params to the URL, it fails:
curl -s -X GET "https://<ID>.execute-api.us-west-2.amazonaws.com/dev?a=${v1}&b=${v2}" \
--header "x-amz-security-token: ${SESSION_TOKEN}" \
--user $ACCESS_KEY:$SECRET_KEY \
--aws-sigv4 "aws:amz:us-west-2:execute-api" | jq .
This is the response that I get:
{
"message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been\n'GET\n/dev\nb=def&a=abc\nhost:<ID>.execute-api.us-west-2.amazonaws.com\nx-amz-date:20230104T112344Z\n\nhost;x-amz-date\<date-token>'\n\nThe String-to-Sign should have been\n'AWS4-HMAC-SHA256\n20230104T112344Z\n20230104/us-west-2/execute-api/aws4_request\<token>'\n"
}
It looks like I need to include the query params in the signature. How do I do that? Or is there something else I'm missing?
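For background, SigV4 does include the query string in the canonical request: the client (here, curl's --aws-sigv4 implementation) is supposed to percent-encode each name and value and sort the pairs before signing. A minimal Python sketch of that canonical-query-string step, assuming simple scalar parameters:

```python
from urllib.parse import quote

def canonical_query_string(params):
    # SigV4 canonical form: URI-encode each name and value
    # (unreserved characters -_.~ stay literal) and sort the pairs.
    pairs = sorted(
        (quote(k, safe="-_.~"), quote(str(v), safe="-_.~"))
        for k, v in params.items()
    )
    return "&".join(f"{k}={v}" for k, v in pairs)

print(canonical_query_string({"b": "def", "a": "abc"}))  # a=abc&b=def
```

If curl's computed signature disagrees with the server's, it is also worth upgrading curl, since early --aws-sigv4 implementations had known issues with query strings.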
I've been frying my brain all day over a weird problem. I'm trying to send a .docx document encoded in base64. Initially I thought the AWS API Gateway was cutting my request body, but by invoking the Lambda directly via the AWS CLI I confirmed that part of the body payload is being cut. Let me show you.
Here's a snippet of my Lambda function:
import json
from base64 import b64decode as base64decode

def lambda_handler(event, context):
    if event["httpMethod"] == "GET":
        return {
            'statusCode': 200,
            'body': json.dumps('Hello from PDFTron! Please send base64 doc to use this Lambda function!')
        }
    elif event["httpMethod"] == "POST":
        try:
            output_path = "/tmp/"
            body = json.loads(str(event["body"]))  # <--- the error is thrown here
            base64str = body["file"]["data"]
            filename = body["file"]["filename"]
            base64_bytes = base64decode(base64str)
            input_filename = filename.split('.')[0] + ".docx"
You can check the full file in lambda_function.py.
OK, now I call my function via the AWS CLI with this command:
aws lambda invoke \
--profile lab \
--region us-east-1 \
--function-name pdf_sign \
--cli-binary-format raw-in-base64-out \
--log-type Tail \
--query 'LogResult' \
--output text \
--payload file://payload.json response.json | base64 -d
payload.json
And receive the error:
C/ZcP1zcsiKiso3qnIUFO0Bk9/LjB7EOzkNAA7GgFDYu2A7RzzmPege9ijNyW/K0LvQKiYYtd21rNKycfuvBIr8syxsO7wi2gebCTwnZmHG+x/9N2jid9MWX+uApnxQ19L5TCPJbetnNGoe94JNV1A5VV5seZPWJ7BMTa7WFKCvBRyBeXWiivCsFH5FY7lRQGmmC8rq6Ezzj4rP3ndEKabbyq9HBRddi8TQILtJ7wfMQQU1sQL8FgwdJFXIqvhhL9a8EHwEJC2oblN8d1U1MbLTqYEnty1Z1EQT/bRCPoNJq18okfXuc70GjC0U0P2m5l6z4riKkoS3YXgWjLLIxbCQD7nzEIGuDHeWe+ADzsBybqyRyBOeBAxk0ED5XN1SITy31hv8QW+ViBw2j1ExOruxU44+sS9d7ZQ9yvXqog7O0v6MhDfxHfPa1W6ULOY7y3Jgt/9XgbuOVptXclLf5GWQesSErNLTXaTWTQTxSI6FL+emt3UJzivnbkQ7rZfxnZXU9K+kbLulko3uYfib5CwAA//8DAFBLAwQUAAYACAAAACEA+36uyJIBAAAiAwAAEQAIAWRvY1Byb3BzL2NvcmUueG1sIKIEASigAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAfJJNa9wwEEDvhf4Ho7tX0uZjg/E6tAmhhAYCdUnoTZUmGzW2JKRJnP33HdtrbzaEgA4zmqeHNKPy/LVtsheIyXq3ZnIhWAZOe2PdZs1+11f5GcsSKmdU4x2s2RYSO6++fil1KLSPcBt9gIgWUkYmlwod1uwRMRScJ/0IrUoLIhwVH3xsFVIaNzwo/aQ2wJdCnPIWUBmFivfCPMxGtlMaPSvDc2wGgdEcGmjBYeJyIfmeRYht+vDAUHlDtha3AT5Ep+JMvyY7g13XLbqjAaX7S35/8/PX8NTcur5XGlhVGl2gxQaqku9DitLz33+gcdyeE4p1BIU+VtcqeZf9UDFaCgZsKvVNf4Jt56NJJDjICDOQdLQBaZSj/mCD6EYlvKHZPlgw37fVtya7pqGmwfSu1uMRXmz/L6rjgZjTSXUbrUMw1VJIkYtlLk9rIQshaP2ZnRNU7iYzPgZMRh0txv5Plbuji8v6iu19ZzXJ5Gr0vTu/F7a7W39uPMnFKpermnQnx4fGSTC29PBXV/8BAAD//wMAUEsDBBQABgAIAAAAIQDKYulMAAEAAOQBAAAUAAAAd29yZC93ZWJTZXR0aW5ncy54bWyU0VFLwzAQB/B3we9Q8r6mHSpS1g5EBj6rHyDNrl1YLhfuMuv89IY6J+LLfLtwuR93/Ffrd/TFG7A4Cq2qy0oVECxtXRhb9fqyWdyrQpIJW+MpQKuOIGrdXV+tpmaC/hlSyj+lyEqQBm2rdinFRmuxO0AjJUUIuTkQo0n5yaNGw/tDXFjCaJLrnXfpqJdVdadODF+i0DA4C49kDwghzfOawWeRguxclG9tukSbiLeRyYJIvgf9l4fGhTNT3/yB0FkmoSGV+ZjTRjOVx+tqrtD/ALf/A5ZnAG3zNAZi0/scQd6kyJjqcgYUk0P3ARviB6ZJgHW30r+y6T4BAAD//wMAUEsDBBQABgAIAAAAIQA1ZRhYpAAAAP4AAAATACgAY3VzdG9tWG1sL2l0ZW0xLnhtbCCiJAAooCAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAACsz00KgzAQhuF9oXcIOYCRLlyICkLd1kKgq26SOJpAfiQZQW/fIKUn6HL4Hl6YRtY8bFFBIhwsKISJ42Ghpe/+2XOzox4mgyb4cZ6NgtFb46HYk6XkhA/hMs6WkhfElGFLK0p2Z32qZUs14lozlpQGJ1IRVvB5m0N0AvMZFxbO8D2ozYFHdivLikkjrQlLFKs+vrG/pLqG/R7urpcPAAAA//8DAFBLAQItABQABgAIAAAAIQCWW+dDiQEAADwGAAATAAAAAAAAAAAAAAAAAAAAAABbQ29udGVudF9UeXBlc10ueG1sUEsBAi0AFAAGAAgAAAAhAB6RGrfvAAAATgIAAAsAAAAAAAAAAAAAAAAAwgMAAF9yZWxzLy5yZWxzUEsBAi0AFAAGAAgAAAAhAFRBdBwzAQAASwUAABwAAAAAAAAAAAAAAAAA4gYAAHdvcmQvX3JlbHMvZG9jdW1lbnQueG1sLnJlbHNQSwECLQAUAAYACAAAACEA5fxODOgMAADnMAAAEQAAAAAAAAAAAAAAAABXCQAAd29yZC9kb2N1bWVudC54bWxQSwECLQAKAAAAAAAAACEArp7hlQsEAQALBAEAFQAAAAAAAAAAAAAAAABuFgAAd29yZC9tZWRpYS9pbWFnZTMuZ2lmUEsBAi0AFAAGAAgAAAAhAJa1reLxBQAAUBsAABUAAAAAAAAAAAAAAAAArBoBAHdvcmQvdGhlbWUvdGhlbWUxLnhtbFBLAQItAAoAAAAAAAAAIQAZDo2bmmgCAJpoAgAVAAAAAAAAAAAAAAAAANAgAQB3b3JkL21lZGlhL2ltYWdlMS5wbmdQSwECLQAUAAYACAAAACEABNjgzVrSAgCcAgcAFQAAAAAAAAAAAAAAAACdiQMAd29yZC9tZWRpYS9pbWFnZTIuZW1mUEsBAi0AFAAGAAgAAAAhABb3sCgZCAAArB0AABEAAAAAAAAAAAAAAAAAKlwGAHdvcmQvc2V0dGluZ3MueG1sUEsBAi0AFAAGAAgAAAAhALqKxOGRAgAAZgkAABIAAAAAAAAAAAAAAAAAcmQGAHdvcmQvZm9udFRhYmxlLnhtbFBLAQItABQABgAIAAAAIQB0Pzl6wgAAACgBAAAeAAAAAAAAAAAAAAAAADNnBgBjdXN0b21YbWwvX3JlbHMvaXRlbTEueG1sLnJlbHNQSwECLQAUAAYACAAAACEASUab3eAAAABVAQAAGAAAAAAAAAAAAAAAAAA5aQYAY3VzdG9tWG1sL2l0ZW1Qcm9wczEueG1sUEsBAi0AFAAGAAgAAAAhAIIHldWPDAAA+ncAAA8AAAAAAAAAAAAAAAAAd2oGAHdvcmQvc3R5bGVzLnhtbFBLAQItABQABgAIAAAAIQCiDemz4QEAAOMDAAAQAAAAAAAAAAAAAAAAADN3BgBkb2NQcm9wcy9hcHAueG1sUEsBAi0AFAAGAAgAAAAhAPt+rsiSAQAAIgMAABEAAAAAAAAAAAAAAAAASnoGAGRvY1Byb3BzL2NvcmUueG1sUEsBAi0AFAAGAAgAAAAhAMpi6UwAAQAA5AEAABQAAAAAAAAAAAAAAAAAE30GAHdvcmQvd2ViU2V0dGluZ3MueG1sUEsBAi0AFAAGAAgAAAAhADVlGFikAAAA/gAAABMAAAAAAAAAAAAAAAAARX4GAGN1c3RvbVhtbC9pdGVtMS54bWxQSwUGAAAAABEAEQBdBAAAQn8GAAAA'}}}
[ERROR] Runtime.MarshalError: Unable to marshal response: Object of type KeyError is not JSON serializable
Traceback (most recent call last):
END RequestId: a26b93bd-bcf2-4072-99da-29ffcc3bc350
REPORT RequestId: a26b93bd-bcf2-4072-99da-29ffcc3bc350 Duration: 11.92 ms Billed Duration: 12 ms Memory Size: 1024 MB Max Memory Used: 93 MB
When I add a debug print to my code, I can see that the payload is cut, and the error log contains the final part of my payload (e.g. RX4GAGN1c3RvbVhtbC9pdGVtMS54bWxQSwUGAAAAABEAEQBdBAAAQn8GAAAA'}}}).
Reading the AWS docs about Lambda quotas and limits, the invocation payload limit is 6 MB, far larger than my payload size (232 KB):
Resource: Invocation payload (request and response)
Quota: 6 MB each for request and response (synchronous); 256 KB (asynchronous)
Source: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
I really think I'm doing everything by the rules. Can someone tell me where I'm going wrong or what I'm misunderstanding?
I expect the payload of my request to arrive intact in my Python function.
How did you log the event in your Lambda function?
It seems like the problem is this line:
body = json.loads(str(event["body"]))
You are using the CLI, and based on the docs the event will be delivered as a string. So you need to JSON-parse the event first (and the nested body may itself be a JSON string):
output_path = "/tmp/"
e = json.loads(event) if isinstance(event, str) else event
body = json.loads(e["body"]) if isinstance(e["body"], str) else e["body"]
base64str = body["file"]["data"]
filename = body["file"]["filename"]
base64_bytes = base64decode(base64str)
input_filename = filename.split('.')[0] + ".docx"
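To make the double decoding concrete, here's a self-contained sketch (the event shape here is an assumption for illustration, not copied from the actual CLI output):

```python
import json

# Simulated CLI-style event: the event arrives as a JSON string,
# with "body" itself a JSON-encoded string.
raw_event = json.dumps({
    "httpMethod": "POST",
    "body": json.dumps({"file": {"filename": "doc.docx", "data": "QUJD"}}),
})

e = json.loads(raw_event)        # first parse: event string -> dict
body = json.loads(e["body"])     # second parse: body string -> dict
print(body["file"]["filename"])  # doc.docx
```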
I've deployed a TensorFlow multi-label classification model behind a SageMaker endpoint as follows:
predictor = sagemaker_model.deploy(initial_instance_count=1, instance_type="ml.m5.2xlarge", endpoint_name='testing-2')
It gets deployed and works fine when I invoke it from the SageMaker Jupyter instance:
sample = ['this movie was extremely good']
output=predictor.predict(sample)
output:
{'predictions': [[0.00370046496,
4.32942124e-06,
0.00080883503,
9.25126587e-05,
0.00023958087,
0.000130862]]}
However, I am unable to send a request to the deployed endpoint from other notebooks or SageMaker Studio, and I'm unsure of the request format.
I've tried several variations of the input format and they all failed. The error message is below.
Request:
{
"body": {
"text": "Testing model's prediction on this text"
},
"contentType": "application/json",
"endpointName": "testing-2",
"customURL": "",
"customHeaders": [
{
"Key": "sm_endpoint_name",
"Value": "testing-2"
}
]
}
Error:
Error invoking endpoint: Received client error (400) from primary with message "{ "error": "Failed to process element:
0 key: text of 'instances' list. Error: INVALID_ARGUMENT: JSON object: does not have named input: text" }".
See https://us-west-2.console.aws.amazon.com/cloudwatch/home?region=us-west-2#logEventViewer:group=/aws/sagemaker/Endpoints/testing-2
in account 793433463428 for more information.
Is there any way to find out exactly what request format the model expects?
Earlier I had the same model on my local system and the way I tested it was using this curl request:
curl -s -H 'Content-Type: application/json' -d '{"text": "what ugly posts"}' http://localhost:7070/sentiment
And it worked fine without any issues.
I've tried different formats and replaced the "text" key inside body with other keys like "input" and "body", with no luck.
Based on your description above, I assume you are deploying the TensorFlow model using the SageMaker TensorFlow container.
If you want to see what your model expects as input, you can use the saved_model_cli. Given a model directory layout like this:
1
├── keras_metadata.pb
├── saved_model.pb
└── variables
├── variables.data-00000-of-00001
└── variables.index
!saved_model_cli show --all --dir {"1"}
After you have confirmed the input name above you can invoke the endpoint as follows:
import json
import boto3
client = boto3.client('runtime.sagemaker')
data = {"instances": ['this movie was extremely good']}
response = client.invoke_endpoint(EndpointName=<EndpointName>,
                                  ContentType='application/json',
                                  Body=json.dumps(data))
response_body = response['Body']
print(response_body.read())
The same payload can then also be used in Studio when invoking the endpoint.
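For context on the 400 error above: TensorFlow Serving's REST predict API only accepts a couple of JSON shapes, which is why a bare {"text": ...} body is rejected with "does not have named input: text". A sketch of the two shapes, using the sample sentence from the question:

```python
import json

# Row format: a plain list of instances (works for single unnamed inputs).
row_format = {"instances": ["this movie was extremely good"]}

# Named-input row format: only valid if the SavedModel signature actually
# defines an input named "text" (check with saved_model_cli as shown above).
named_format = {"instances": [{"text": "this movie was extremely good"}]}

print(json.dumps(row_format))
```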
I have the following class written in Java using Eclipse on my Amazon EC2 instance.
import java.nio.ByteBuffer;
import com.amazonaws.auth.*;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.secretsmanager.*;
import com.amazonaws.services.secretsmanager.model.*;
public class SMtest {
    public static void main(String[] args) {
        String x = getSecret();
        System.out.println(x);
    }

    public SMtest()
    {
    }

    public static String getSecret() {
        String secretName = "mysecret";
        String endpoint = "secretsmanager.us-east-1.amazonaws.com";
        String region = "us-east-1";
        String result = "";

        //BasicAWSCredentials awsCreds = new BasicAWSCredentials("mypublickey", "mysecretkey");
        AwsClientBuilder.EndpointConfiguration config = new AwsClientBuilder.EndpointConfiguration(endpoint, region);
        AWSSecretsManagerClientBuilder clientBuilder = AWSSecretsManagerClientBuilder.standard()
                .withCredentials(new InstanceProfileCredentialsProvider(false));
                //.withCredentials(new AWSStaticCredentialsProvider(awsCreds));
        clientBuilder.setEndpointConfiguration(config);
        AWSSecretsManager client = clientBuilder.build();

        String secret;
        ByteBuffer binarySecretData;
        GetSecretValueRequest getSecretValueRequest = new GetSecretValueRequest()
                .withSecretId(secretName).withVersionStage("AWSCURRENT");
        GetSecretValueResult getSecretValueResult = null;

        try {
            getSecretValueResult = client.getSecretValue(getSecretValueRequest);
        } catch (ResourceNotFoundException e) {
            System.out.println("The requested secret " + secretName + " was not found");
        } catch (InvalidRequestException e) {
            System.out.println("The request was invalid due to: " + e.getMessage());
        } catch (InvalidParameterException e) {
            System.out.println("The request had invalid params: " + e.getMessage());
        }

        // Return early so a null result is not dereferenced below
        if (getSecretValueResult == null) {
            return result;
        }

        // Depending on whether the secret was a string or binary, one of these fields will be populated
        if (getSecretValueResult.getSecretString() != null) {
            secret = getSecretValueResult.getSecretString();
            result = secret;
            //System.out.println(secret);
        } else {
            binarySecretData = getSecretValueResult.getSecretBinary();
            result = binarySecretData.toString();
        }
        return result;
    }
}
When I execute it from within Eclipse, it works just fine. But when I compile the class to use it from ColdFusion (2021) on the same EC2 instance with the following code:
<cfscript>
obj = CreateObject("java","SMtest");
obj.init();
result = obj.getSecret();
</cfscript>
<cfoutput>#result#</cfoutput>
I get a "Failed to connect to service endpoint" error. I believe I have all the IAM credentials set up properly, since it works in plain Java. However, when I change the credentials to use basic credentials with my AWS public and secret key (shown commented out in the code above), it works in both Java and ColdFusion.
I created an AWS Policy that manages the Secrets Manager permissions. I also added AmazonEC2FullAccess. I also tried to create a VPC Endpoint, but this had no effect.
Why would it be working in Java and not in ColdFusion (which is based on Java) ? What roles/policies would I have to add to get it to work in ColdFusion when it is already working in Java?
STACK TRACE ADDED:
com.amazonaws.SdkClientException: Failed to connect to service endpoint:
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:100)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.getToken(InstanceMetadataServiceResourceFetcher.java:91)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:69)
at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46)
at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112)
at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68)
at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:165)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1266)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:842)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:792)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:779)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:753)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:713)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:695)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:559)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:539)
at com.amazonaws.services.secretsmanager.AWSSecretsManagerClient.doInvoke(AWSSecretsManagerClient.java:2454)
at com.amazonaws.services.secretsmanager.AWSSecretsManagerClient.invoke(AWSSecretsManagerClient.java:2421)
at com.amazonaws.services.secretsmanager.AWSSecretsManagerClient.invoke(AWSSecretsManagerClient.java:2410)
at com.amazonaws.services.secretsmanager.AWSSecretsManagerClient.executeGetSecretValue(AWSSecretsManagerClient.java:943)
at com.amazonaws.services.secretsmanager.AWSSecretsManagerClient.getSecretValue(AWSSecretsManagerClient.java:912)
at SMtest.getSecret(SMtest.java:52)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at coldfusion.runtime.java.JavaProxy.invoke(JavaProxy.java:106)
at coldfusion.runtime.CfJspPage._invoke(CfJspPage.java:4254)
at coldfusion.runtime.CfJspPage._invoke(CfJspPage.java:4217)
at cfsmtest2ecfm1508275519.runPage(D:\ColdFusion2021\cfusion\wwwroot\smtest.cfm:4)
at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:257)
at coldfusion.tagext.lang.IncludeTag.handlePageInvoke(IncludeTag.java:749)
at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:578)
at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:573)
at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:43)
at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
at coldfusion.filter.PathFilter.invoke(PathFilter.java:162)
at coldfusion.filter.IpFilter.invoke(IpFilter.java:45)
at coldfusion.filter.LicenseFilter.invoke(LicenseFilter.java:30)
at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:97)
at coldfusion.filter.BrowserDebugFilter.invoke(BrowserDebugFilter.java:81)
at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:60)
at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
at coldfusion.filter.CachingFilter.invoke(CachingFilter.java:62)
at coldfusion.CfmServlet.service(CfmServlet.java:231)
at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:311)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:228)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:163)
at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:46)
at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:47)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:190)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:163)
at coldfusion.inspect.weinre.MobileDeviceDomInspectionFilter.doFilter(MobileDeviceDomInspectionFilter.java:57)
at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:47)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:190)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:163)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:190)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:163)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:373)
at org.apache.coyote.ajp.AjpProcessor.service(AjpProcessor.java:462)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1723)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.net.MalformedURLException: Unknown protocol: http
at java.base/java.net.URL.<init>(URL.java:708)
at java.base/java.net.URL.fromURI(URL.java:748)
at java.base/java.net.URI.toURL(URI.java:1139)
at com.amazonaws.internal.ConnectionUtils.connectToEndpoint(ConnectionUtils.java:83)
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:80)
... 80 more
Caused by: java.lang.IllegalStateException: Unknown protocol: http
at org.apache.felix.framework.URLHandlersStreamHandlerProxy.parseURL(URLHandlersStreamHandlerProxy.java:373)
at java.base/java.net.URL.<init>(URL.java:703)
... 84 more
After noticing that the bottom end of the Stack Trace says "unknown protocol: http" and "Illegal State Exception: Unknown Protocol", I changed the endpoint in my class above to:
String endpoint = "https://secretsmanager.us-east-1.amazonaws.com";
and I am running the website via HTTPS now. It still works with no problem when run from Eclipse, and I am still getting the same error when using ColdFusion (i.e., when using a browser).
Since the basic credentials work and the error is not "access denied", there is likely no issue with your IAM setup. Instead, your process probably failed to fetch the EC2 instance metadata (for example, a timeout when calling the metadata endpoint), hence the "Failed to connect to service endpoint" error.
One way around this is to retrieve the access key and secret key manually by calling the instance metadata API yourself, for example using cfhttp to populate the variables.
The example curl commands from the docs: retrieve a token from /api/token, then use it to retrieve /meta-data/iam/security-credentials/<rolename>:
[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
Output:
{
"Code" : "Success",
"LastUpdated" : "2012-04-26T16:39:16Z",
"Type" : "AWS-HMAC",
"AccessKeyId" : "ASIAIOSFODNN7EXAMPLE",
"SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"Token" : "token",
"Expiration" : "2017-05-17T15:09:54Z"
}
From here, if you still face errors, it should at least be clearer which step failed and why.
Just a note: AWS recommends caching the credentials until near expiry, instead of querying on every transaction, to avoid throttling.
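The caching advice can be sketched like this (generic Python rather than ColdFusion; fetch() stands in for the instance-metadata call and is an assumption of this sketch, not AWS SDK code):

```python
from datetime import datetime, timedelta, timezone

class CachedCredentials:
    """Cache credentials, refreshing only when close to expiry."""

    def __init__(self, fetch, margin=timedelta(minutes=5)):
        self._fetch = fetch      # callable that hits the metadata endpoint
        self._margin = margin    # refresh this long before expiry
        self._creds = None

    def get(self):
        now = datetime.now(timezone.utc)
        if self._creds is None or self._creds["Expiration"] - now < self._margin:
            self._creds = self._fetch()  # only call the API when needed
        return self._creds

# Demonstration with a fake fetcher (placeholder key values):
calls = []
def fake_fetch():
    calls.append(1)
    return {"AccessKeyId": "AKIA-PLACEHOLDER",
            "SecretAccessKey": "placeholder",
            "Expiration": datetime.now(timezone.utc) + timedelta(hours=6)}

provider = CachedCredentials(fake_fetch)
provider.get()
provider.get()
print(len(calls))  # 1: the second get() was served from the cache
```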
I am using lacinia-pedestal on the server and re-graph for the client-side ClojureScript.
My client code looks like
(re-frame/dispatch [::re-graph/init {:http-url
"http://localhost:8888/graphql"
:ws-url nil
:http-parameters {:headers {"Content-Type" "application/graphql"}
}}])
(re-frame/dispatch [::re-graph/query
"{current_user(token: ss) {id}}"
nil some-func])
However, when I connect to my server, I see the following error in the console log:
"message":"Failed to parse GraphQL query.","extensions":{"errors":[{"locations":[{"line":1,"column":null}],"message":"mismatched input '\"query\"' expecting {'query', 'mutation', 'subscription', '...', NameId}"}
The following curl request works:
curl http://localhost:8888/graphql
-H 'Content-Type: application/graphql'
-d 'query {
current_user(token: "foo"){
id
}}'
Any help will be appreciated. I am attaching my network console log.
re-graph sends the query and variables as a JSON payload with Content-Type application/json by default. By overriding the content type to application/graphql, you make the server try to parse the JSON body (which begins with {"query": ...) as a raw GraphQL document, which is exactly what the mismatched input '"query"' error is telling you. Remove the content-type override from your init call.
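To see why the error mentions '"query"', compare what the two content types imply for the request body (a Python sketch of the bodies; the exact field set re-graph sends may differ slightly):

```python
import json

# What a GraphQL-over-JSON client POSTs (Content-Type: application/json):
json_body = json.dumps({"query": '{current_user(token: "ss") {id}}',
                        "variables": None})

# What an application/graphql request body is: the raw query text itself.
graphql_body = '{current_user(token: "ss") {id}}'

# Sending json_body under application/graphql makes the server parse the
# literal characters {"query": ...} as a GraphQL document, so the GraphQL
# parser fails on the token "query" instead of a query/mutation keyword.
print(json_body)
print(graphql_body)
```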