How to check if AWS X-Ray has been configured

We have an AWS Lambda function written in Go which, upon initialisation, runs the following to initialise AWS X-Ray:
err := xray.Configure(xray.Config{
    LogLevel:       "info",
    ServiceVersion: "1.2.3",
})
In a separate utils repository we expose an HTTP library for our internal services; it is imported as a git submodule into all our other Lambdas. The code is as follows:
ctx, subseg := xray.BeginSubsegment(incomingContext, "Outbound HTTP call")
client := xray.Client(&http.Client{Transport: tr})
// further down
client.Do(req)
// finally
subseg.Close(resp)
This works as expected when deployed on AWS, producing a nice graph.
The problem is running unit tests on the utils repository. In the context of that repository alone, X-Ray has not been configured, so on the BeginSubsegment call I get a panic:
panic: failed to begin subsegment named 'Outbound HTTP call': segment cannot be found.
I want to gracefully handle the case when X-Ray has not been configured, log it, and carry on execution regardless.
How can I properly handle errors from the BeginSubsegment call when it does not return an error object?

In the case of Lambda, this code executes without any panic because Lambda creates a facade segment and your code then creates subsegments under it. In a non-Lambda environment you have to create a segment first before creating a subsegment; if you don't, the SDK panics. If you want to log this error and continue executing your unit tests, I would recommend setting the AWS_XRAY_CONTEXT_MISSING environment variable to LOG_ERROR. It will log the error instead of panicking and let your unit tests carry on.
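For example, in the utils repository you could set the variable once for the whole test binary in a TestMain, so it is in place before the first BeginSubsegment call; a minimal sketch (the package name utils is a placeholder):

package utils

import (
    "os"
    "testing"
)

// TestMain runs before any test in this package.
func TestMain(m *testing.M) {
    // LOG_ERROR tells the X-Ray SDK to log the missing-segment
    // error instead of panicking.
    os.Setenv("AWS_XRAY_CONTEXT_MISSING", "LOG_ERROR")
    os.Exit(m.Run())
}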

Related

How to run Python backend test cases (using a REST API) in an AWS pipeline. Tests run successfully on my local machine

I am trying to run my test suite, which includes a test case that makes a POST request, using an AWS pipeline.
During build execution it fails with a timeout: "Max retries exceeded with url".
We do VPC configuration to run the front-end web test cases using Device Farm.
In the same way, do we need any VPC configuration to run the backend cases as well?
If yes, what configuration, and where do I apply it?
Please share any information you have on this.
Thank you

Scheduled Cloud Build trigger throws 404 NOT_FOUND error

I recently created a scheduled trigger by following this Google page: . But when I did a test run from the Scheduler's interface, the result was a NOT_FOUND error:
{
@type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
jobName: "projects/myproject/locations/australia-southeast1/jobs/trigger-schedule"
status: "NOT_FOUND"
targetType: "HTTP"
url: "https://cloudbuild.googleapis.com/v1/projects/myproject/triggers/ca55b01d-f4e6-4b8b-b92b-b2e4f380788c:run"
}
I was worried about the location, which is App Engine related; even though there are no instances, the location shows as australia-southeast1, which is correct.
What could be the cause of the error? Or rather, what was not found: the job definition or the target?
After running gcloud beta builds triggers run TRIGGER, which is what the scheduled job runs, I found that cloudbuild.yaml does not exist in the targeted branch.
First, I wish the error in the scheduler had been more meaningful and included some details.
Second, triggers all have conditions that control how they fire. Maybe the POST HTTP call to the trigger should allow an empty body to use the default condition. In my case, the condition defined in the trigger was branch = test, while my scheduled job definition had branch = master. This mismatch caused the problem.
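For reference, a sketch of the Scheduler job's HTTP body that would have matched this trigger's condition (branchName is a field of the RepoSource body the :run endpoint accepts; the value must be the branch the trigger filters on, test in this case):
{
  "branchName": "test"
}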
Hope this could help others to debug scheduled triggers.

Unable to start amplify mock InternalFailure: The request processing has failed because of an unknown error, exception or failure

I have just tried to start my amplify mock service and received the following error:
InternalFailure: The request processing has failed because of an unknown error, exception or failure.
This previously worked a few hours ago, with no resets or other changes.
I did have some success fixing it by removing amplify completely and doing amplify init & amplify add api, but that means I lose my local data each time, and the failure has recurred randomly multiple times in the last few hours.
Here is the full log from when the error occurs:
hutber#hutber:/var/www/unsal.co.uk$ amplify mock
GraphQL schema compiled successfully.
Edit your schema at /var/www/unsal.co.uk/amplify/backend/api/unsalcouk/schema.graphql or place .graphql files in a directory at /var/www/unsal.co.uk/amplify/backend/api/unsalcouk/schema
Failed to start API Mock endpoint InternalFailure
The problem probably comes from the SQLite file used for the mock (a stale lock, I guess). Delete the .db file in the mock-data/dynamodb directory and execute amplify mock api again. The file recreates itself correctly. This avoids resetting the whole amplify project.
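A rough sketch of that cleanup, run from wherever the mock-data directory lives (often under amplify/; the .db file name varies per project, so check the directory first):
rm mock-data/dynamodb/*.db
amplify mock api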

S3Client and Quarkus Native App Issue with Running

I am trying to create a Lambda S3 listener leveraging Lambda as a native image. The point is to get the S3 event and then do some work by pulling the file, etc. To get the file I am using the AWS 2.x S3 client, as below:
S3Client.builder().build();
This code results in
2020-03-12 19:45:06,205 ERROR [io.qua.ama.lam.run.AmazonLambdaRecorder] (Lambda Thread) Failed to run lambda: software.amazon.awssdk.core.exception.SdkClientException: Unable to load an HTTP implementation from any provider in the chain. You must declare a dependency on an appropriate HTTP implementation or pass in an SdkHttpClient explicitly to the client builder.
To resolve this I added the AWS Apache client and updated the code to the following:
SdkHttpClient httpClient = ApacheHttpClient.builder()
        .maxConnections(50)
        .build();
S3Client.builder().httpClient(httpClient).build();
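For context, the AWS Apache client added above is the AWS SDK v2 Apache HTTP client; a sketch of the Maven dependency (version omitted here, as it is usually managed by the SDK BOM):
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>apache-client</artifactId>
</dependency>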
I also had to add the following to the native-image proxy configuration:
[
["org.apache.http.conn.HttpClientConnectionManager",
"org.apache.http.pool.ConnPoolControl","software.amazon.awssdk.http.apache.internal.conn.Wrapped"]
]
After this I am now getting the following stack trace:
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at java.security.cert.PKIXParameters.setTrustAnchors(PKIXParameters.java:200)
at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:120)
at java.security.cert.PKIXBuilderParameters.<init>(PKIXBuilderParameters.java:104)
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:86)
... 76 more
I am running version 1.2.0 of Quarkus on GraalVM 19.3.1. I am building this via Maven and the provided Docker container for Quarkus. I thought the trust store was added by default (in the build command it looks to be accurate), but am I missing something? Is there another way to get this to run without setting the HTTP client on the S3 client?
There is a PR, under review at the moment, that introduces an AWS S3 extension, both JVM and native. The AWS clients are fully "Quarkified", meaning they are configured via application.properties and enabled for dependency injection. So stay tuned, as it will most probably be available in Quarkus 1.5.0.

Unable to launch a task from a Spring Cloud Data Flow stream

I registered my task app in Spring Cloud Data Flow and created a definition for it, but the status shows 'unknown'. I created the stream, and when I try to launch the task through task-sink I get an error:
java.lang.IllegalStateException: failed to resolve MavenResource:
How do I launch a task from the task-sink? Am I missing something? Any help is appreciated. Another question I have: how do I access the payload sent via TaskLaunchRequest in my task?
S1 http | step1: transformer-rabbit | log
S2 :S1.step1 > filter --expression=payload.contains('CUSTADDRMODRQ_V15') | task-processor | task-sink
task-sink is launching the task provided by the URI in the TaskLaunchRequest. It is looking for the resource, as shown in the log:
OUT Using manager EnhancedLocalRepositoryManager with priority 10.0 for /home/vcap/.m2/repository
OUT Using transporter HttpTransporter with priority 5.0 for https://repo.spring.io/libs-snapshot and finally failing.
The task is deployed in our repository and, as mentioned, I registered it and created the definition for it as well.
This is in a CF environment and I am using SCDF server 1.0.0.M4.
In the application.properties for the task-sink I am providing maven.remote.repositories.snapshots.url=**
task create fis-ifx-event-task --definition "fis-event-task"
My goal is launching the task from the stream.
Thanks for the information. I am in fact using the BUILD-SNAPSHOT, as I was unable to enable tasks in the 1.0.0.M4 version. The one I am using is spring-cloud-dataflow-server-cloudfoundry-1.0.0.BUILD-20160808.144306-116. I am able to register and create task definitions, but the status of the task definition shows as 'unknown' even when I use the sample task module provided by your team. And when I initiate the flow of the stream and task-sink tries to launch the task, it is unable to find the Maven resource.
When I create the task definition, does the task module get deployed? I don't see any app in Pivotal Apps Manager. As mentioned earlier, I provided maven.remote.repositories.snapshot.url in the application.properties file for the task-sink application.
Another thing I observed: when I launch the task manually from the dataflow shell it gives an error, CF-UnprocessableEntity(10008): The request is semantically invalid: Unknown field(s): 'staging_disk_in_mb', 'staging_memory_in_mb', and also a message saying 'Source is empty'. Presently the task is only supposed to print the timestamp and is not dependent on any input.
TaskProcessor code:
@EnableBinding(Processor.class)
@EnableConfigurationProperties(TaskProcessorProperties.class)
public class TaskProcessor {

    @Autowired
    private TaskProcessorProperties processorProperties;

    public TaskProcessor() {
    }

    @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
    @ELI(level = "info", eventType = ELIEventType.INBOUND)
    public Object setupRequest(String message) {
        Map<String, String> properties = new HashMap<String, String>();
        properties.put("payload", message);
        TaskLaunchRequest request = new TaskLaunchRequest(processorProperties.getUri(), null, properties, null);
        return new GenericMessage<>(request);
    }
}
TaskSink code:
@SpringBootApplication
@EnableTaskLauncher
@EnableBinding(Sink.class)
@EnableConfigurationProperties(TaskSinkProperties.class)
public class FisIfxEventTaskSinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(FisIfxEventTaskSinkApplication.class, args);
    }
}
I provided the stream I am using earlier in the post. The sink is receiving the TaskLaunchRequest with the URI and payload, as you can see here, but is unable to launch the task.
OUT registering [40, java.io.File] with serializer org.springframework.integration.codec.kryo.FileSerializer
2016-08-10T16:08:55.02-0600 [APP/0]
OUT Launching Task for the following resource TaskLaunchRequest{uri='maven://com.xxx:fis.ifx.event-task:jar:1.0-SNAPSHOT', commandlineArguments=[], environmentProperties={payload={"statusCode":0,"fisTopic":"CustomerDataUpdated","payloadId":"CUSTADDRMODRQ_V15","customerIds":[1597304]}}, deploymentProperties={}}
Before I begin, you have a number of questions here. In the future, it's better to break them up into multiple questions so that they are easier to find by other users and easier to answer. That being said:
A little context on the current state of things
In order to understand how things will work, it's important to understand the current state of things. The current releases of the software involved are:
Pivotal Cloud Foundry (PCF) - 1.7.12. This version is required for any task support.
Spring Cloud Task (SCT) - 1.0.2.RELEASE
Spring Cloud Data Flow CF (SCDF) - 1.0.0.BUILD-SNAPSHOT (current as of the date of this post).
Currently PCF 1.7.12+ has all the capabilities to run tasks. You can create v3 applications (the type of application used to launch a task), run it as a task, etc. However, the tooling around that functionality is not currently complete. There is no support for v3 applications in Apps Manager or the CLI. There is a plugin for the CLI that is more of a dev tool that can be used to help with some functions (it will show you logs, etc), but it is not fully functional and requires a specific version of the CLI to work [1]. This is one of the reasons that the task functionality within PCF is still considered experimental.
Spring Cloud Task is currently GA and supports all the functionality needed to effectively run tasks on CF. However, it's important to note that SCT doesn't handle orchestration so the actual launching of tasks on CF is the responsibility of either the user, or Spring Cloud Data Flow (the easier route).
Spring Cloud Data Flow's Cloud Foundry server implementation currently has functionality to launch tasks on PCF in the latest snapshots. We have validated this against 1.7.12 as well as the development branch of 1.8.
The task workflow within SCDF
Tasks are fundamentally different from stream applications within the context of SCDF. When you create a stream definition, you are given the option to deploy it. What this does is actually download the Spring Boot über-jars and deploy them to PCF as long-running processes. If they go down, PCF will relaunch them as expected, etc.
Tasks, on the other hand, are not deployed; they are launched. The difference is that while you create a task definition, nothing is deployed until you click launch, and when the task completes, the software is shut down and cleaned up. So while a stream definition may have states, it's really a one-to-one relationship between the definition and the deployed software, whereas with a task you can launch a task definition as many times as you want.
Your issues
Reading through your post, I see a few things that you are struggling with. Let me see if I can help:
Task definitions within SCDF and launching them via a stream - When launching a task from a stream, the task registry within SCDF is not used. The sink expects the URL for the resource to be within the TaskLaunchRequest.
Apps Manager and tasks - As mentioned above, there is no support for v3 applications in Apps Manager yet so you won't be able to see your tasks there.
Viewing the logs - In order to debug what's going wrong with launching your task on CF, you're going to want to view the logs. To do so, use the v3 CLI plugin mentioned above to view them. It's important to note that you can only tail live logs with the plugin, not view logs that have previously been rendered. Because of that, when testing, you'll want to tail the logs as soon as the app is created, before it's launched.
Error in SCDF Shell - The error you received from the SCDF shell (CF-UnprocessableEntity(10008):...) leads me to wonder whether you have both the correct version of PCF (1.7.12+) and the correct versions of the following other libraries:
spring-cloud-deployer-cloudfoundry - The latest snapshots
cf-java-client - 2.0.0.M10+
reactor-core - 3.0.0.RC1+
I hope this helps!
[1] https://github.com/cloudfoundry/v3-cli-plugin
Task support is not available in the 1.0.0.M4 release of SCDF's CF-server. In that release, the task commands/REST APIs were disabled - see here. For that reason, you won't see any docs related to tasks in the 1.0.0.M4 reference guide.
That said, task support is available and enabled in the BUILD-SNAPSHOT release. If you build the CF-server locally and push it to CF, you can take advantage of the task commands in the shell to create and launch task definitions.
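For example, against a BUILD-SNAPSHOT server the shell flow looks roughly like this (the definition name is a placeholder; timestamp is the sample task module mentioned earlier in the thread):
dataflow:> task create my-timestamp --definition "timestamp"
dataflow:> task launch my-timestamp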