I'm trying to create a Cognito user pool locally using LocalStack, but when I run:
awslocal cognito-idp create-user-pool --pool-name test
as mentioned in the docs, I get the following error:
2022-11-01T19:21:56.136 ERROR --- [ asgi_gw_0] l.aws.handlers.logging : exception during call chain:
2022-11-01T19:21:56.136 INFO --- [ asgi_gw_0] l.aws.handlers.service : API action 'CreateUserPool' for service 'cognito-idp' not yet implemented or pro feature - check https://docs.localstack.cloud/aws/feature-coverage for further information
2022-11-01T19:21:56.137 INFO --- [ asgi_gw_0] localstack.request.aws : AWS cognito-idp.CreateUserPool => 501 (InternalFailure)
Has anyone faced this issue?
As documented on the LocalStack getting started page, certain features are limited to paying members of LocalStack Pro.
The Pro version of LocalStack supports additional APIs and advanced features. You can find a comprehensive list of supported APIs on our ⭐ Feature Coverage page.
Following the link (which is the same link as in the error message you posted), Cognito is a paid feature of LocalStack. You have to pay for LocalStack Pro (or use the Pro trial) to get access to paid features.
Cognito Identity Provider (IdP) (Pro)
There is a guide on how to get started with LocalStack Pro here.
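If you have a Pro license (or trial) key, activation is just a matter of exporting it before starting LocalStack. A minimal sketch, assuming the LOCALSTACK_API_KEY flow from the 2022-era getting-started guide (your key comes from the LocalStack web app):

# Sketch: activate LocalStack Pro, then retry the original command
export LOCALSTACK_API_KEY=<your-api-key>
localstack start -d
awslocal cognito-idp create-user-pool --pool-name test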
Specs:
The serverless Amazon MSK that's in preview.
t2.xlarge EC2 instance with Amazon Linux 2
Installed Kafka from https://dlcdn.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz
openjdk version "11.0.13" 2021-10-19 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.13+8-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.13+8-LTS, mixed mode, sharing)
Gradle 7.3.3
Built https://github.com/aws/aws-msk-iam-auth successfully.
I also tried adding IAM authentication information, as recommended by the Amazon MSK Library for AWS Identity and Access Management. It says to add the following in config/client.properties:
# Sets up TLS for encryption and SASL for authN.
security.protocol = SASL_SSL
# Identifies the SASL mechanism to use.
sasl.mechanism = AWS_MSK_IAM
# Binds SASL client implementation.
# sasl.jaas.config = software.amazon.msk.auth.iam.IAMLoginModule required;
# Encapsulates constructing a SigV4 signature based on extracted credentials.
# The SASL client bound by "sasl.jaas.config" invokes this class.
sasl.client.callback.handler.class = software.amazon.msk.auth.iam.IAMClientCallbackHandler
# Binds SASL client implementation. Uses the specified profile name to look for credentials.
sasl.jaas.config = software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="kafka-client";
And kafka-client is the IAM role attached to the EC2 instance as an instance profile.
Networking: I used VPC Reachability Analyzer to confirm that the security groups are configured correctly and the EC2 instance I'm using as a Producer can reach the serverless MSK cluster.
What I'm trying to do: create a topic.
How I'm trying: bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --topic quickstart-events --bootstrap-server boot-zclcyva3.c2.kafka-serverless.us-east-2.amazonaws.com:9098
Result:
Error while executing topic command : Timed out waiting for a node assignment. Call: createTopics
[2022-01-17 01:46:59,753] ERROR org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: createTopics
(kafka.admin.TopicCommand$)
I also tried the plaintext port 9092 (9098 is the IAM-authentication port in MSK, and serverless MSK uses IAM authentication by default).
All the other posts I found on SO about this node assignment error didn't include MSK. I tried suggestions like uncommenting the listener setting in server.properties, but that didn't change anything.
Installing kcat for troubleshooting didn't work for me either, since there's no out-of-the-box installation for the yum package manager (which Amazon Linux 2 uses), and building it from source failed for me at "checking for libcurl (by compile)... failed (fail)".
The Question: Any other tips on solving this "node assignment" error?
The documentation has been updated recently; I was able to follow it end to end without any issue (the IAM policy is now correct):
https://docs.aws.amazon.com/msk/latest/developerguide/serverless-getting-started.html
The created properties file is not automatically used; your command needs to include --command-config client.properties, where the contents of that properties file are documented in the MSK docs on the linked IAM page (a full example command follows the extract below).
Extract...
ssl.truststore.location=<PATH_TO_TRUST_STORE_FILE>
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
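Putting it together, the create-topic call from your question would look something like this (a sketch; adjust the path to your client.properties, and note the aws-msk-iam-auth jar you built must be on the client classpath — the jar name below is assumed):

# Assumed paths; point these at your actual jar and properties file
export CLASSPATH=/path/to/aws-msk-iam-auth-all.jar
bin/kafka-topics.sh --create \
  --topic quickstart-events \
  --partitions 1 \
  --replication-factor 1 \
  --bootstrap-server boot-zclcyva3.c2.kafka-serverless.us-east-2.amazonaws.com:9098 \
  --command-config client.properties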
Alternatively, if the plaintext port didn't work either, then you have other networking issues.
Beyond these steps, I suggest reaching out to MSK support and asking them to update the "Create a Topic" page to no longer use ZooKeeper, keeping in mind that Kafka 3.0 is not (yet) supported.
I'm following this tutorial, yet I get stuck at the very end when trying to deploy the app on App Engine.
I get the following error message:
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [13] Flex operation projects/responder-289707/regions/europe-west6/operations/a0e5f3f4-29a7-49d8-98b5-4a52b7bf04ca error [INTERNAL]: An internal error occurred while processing task /app-engine-flex/insert_flex_deployment/flex_create_resources>2020-09-21T20:32:48.366Z12808.hy.0: Deployment Manager operation responder-289707/operation-1600720369987-5afd8c109adf5-6a4ad9a9-e71b9336 errors: [code: "RESOURCE_ERROR"
location: "/deployments/aef-default-20200921t223056/resources/aef-default-20200921t223056"
message: "{\"ResourceType\":\"compute.beta.regionAutoscaler\",\"ResourceErrorCode\":\"403\",\"ResourceErrorMessage\":{\"code\":403,\"message\":\"The caller does not have permission\",\"status\":\"PERMISSION_DENIED\",\"statusMessage\":\"Forbidden\",\"requestPath\":\"https://compute.googleapis.com/compute/beta/projects/responder-289707/regions/europe-west6/autoscalers\",\"httpMethod\":\"POST\"}}"
I don't really understand why, though. I have authenticated my gcloud, made sure my account has App Engine Admin/Deployment rights, and have everything in place.
Any hints would be much appreciated.
You apparently do not have the rights for autoscaling resources. This could be due to a free account, or you may need different rights (beyond App Engine Admin/Deployment) to deploy an autoscaling service.
Seeing as you're doing the tutorial, you could define a static resource amount instead; this is safer for your wallet as well.
app.yaml
# add this
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
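Alternatively, if you want a truly static footprint, the flexible environment also accepts manual scaling. A sketch (the instance count and resource sizes here are illustrative, not prescribed by the tutorial):

# app.yaml -- pin the instance count instead of autoscaling
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 2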
As in my last post, 403 Forbidden error for Gremlin to AWS Neptune, I was able to connect to my Neptune cluster DB via the TinkerPop Gremlin Console v3.4.3 installed on my EC2 instance, since v3.4.1, the version suggested at https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-connecting-gremlin-console.html, didn't work for me.
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
plugin activated: tinkerpop.tinkergraph
gremlin> :remote connect tinkerpop.server conf/neptune-remote.yaml
==>Configured <my neptune>.cluster-cm<cluster id>.ap-southeast-2.neptune.amazonaws.com/<private ip>:8182
gremlin> :remote console
==>All scripts will now be sent to Gremlin Server - [<my neptune>.cluster-cm<cluster id>.ap-southeast-2.neptune.amazonaws.com/<private ip>:8182] - type ':remote console' to return to local mode
However, I'm getting a NoSuchMethodError for every Gremlin command (g.*) that I run on the console.
e.g.:
g.V()
gremlin> g.V()
org.apache.tinkerpop.gremlin.driver.RequestOptions$Builder.userAgent(Ljava/lang/String;)Lorg/apache/tinkerpop/gremlin/driver/RequestOptions$Builder;
Type ':help' or ':h' for help.
Display stack trace? [yN]Y
java.lang.NoSuchMethodError: org.apache.tinkerpop.gremlin.driver.RequestOptions$Builder.userAgent(Ljava/lang/String;)Lorg/apache/tinkerpop/gremlin/driver/RequestOptions$Builder;
at org.apache.tinkerpop.gremlin.console.jsr223.DriverRemoteAcceptor.send(DriverRemoteAcceptor.java:214)
at org.apache.tinkerpop.gremlin.console.jsr223.DriverRemoteAcceptor.submit(DriverRemoteAcceptor.java:168)
at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.execute(GremlinGroovysh.groovy:110)
...
g.addV('person').property('name', 'justin')
gremlin> g.addV('person').property('name', 'justin')
org.apache.tinkerpop.gremlin.driver.RequestOptions$Builder.userAgent(Ljava/lang/String;)Lorg/apache/tinkerpop/gremlin/driver/RequestOptions$Builder;
Type ':help' or ':h' for help.
Display stack trace? [yN]Y
java.lang.NoSuchMethodError: org.apache.tinkerpop.gremlin.driver.RequestOptions$Builder.userAgent(Ljava/lang/String;)Lorg/apache/tinkerpop/gremlin/driver/RequestOptions$Builder;
at org.apache.tinkerpop.gremlin.console.jsr223.DriverRemoteAcceptor.send(DriverRemoteAcceptor.java:214)
at org.apache.tinkerpop.gremlin.console.jsr223.DriverRemoteAcceptor.submit(DriverRemoteAcceptor.java:168)
at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.execute(GremlinGroovysh.groovy:110)
....
I have also tried the latest Apache TinkerPop Gremlin Console, 3.4.6, and got the same error...
Thanks
I think the step you're missing is taking the temporary credentials provided by your EC2 instance's assigned IAM role and pushing those into the Default Credential Provider chain in order for them to be seen by the SigV4Channelizer used by the Gremlin Console. A high level overview of that process can be seen here: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html
A more prescriptive way of handling this for Neptune can be found here: https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-temporary-credentials.html See the section titled, "Setting Up Amazon EC2 for Neptune IAM Authentication".
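As a rough sketch of what that setup amounts to, assuming the classic EC2 instance-metadata flow (IMDSv1 shown for brevity; the jq dependency is just for illustration):

# Fetch the temporary credentials issued to the instance's IAM role
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
CREDS=$(curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE}")

# Export them where the Default Credential Provider chain (and therefore the
# console's SigV4 signing) will find them
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Token)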
I just tried Gremlin Console 3.4.1 and it's working as expected... I think it was an incompatible-version issue; I was using Gremlin Console 3.4.6.
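If you need to drop back to a matching console version, the Apache archive keeps every release, for example:

# Download and run Gremlin Console 3.4.1 to match the server-side TinkerPop version
wget https://archive.apache.org/dist/tinkerpop/3.4.1/apache-tinkerpop-gremlin-console-3.4.1-bin.zip
unzip apache-tinkerpop-gremlin-console-3.4.1-bin.zip
cd apache-tinkerpop-gremlin-console-3.4.1
bin/gremlin.sh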
I am trying to create a Lambda S3 listener leveraging Lambda as a native image. The point is to get the S3 event and then do some work by pulling the file, etc. To get the file I am using the AWS SDK 2.x S3 client as below:
S3Client.builder().build();
This code results in
2020-03-12 19:45:06,205 ERROR [io.qua.ama.lam.run.AmazonLambdaRecorder] (Lambda Thread) Failed to run lambda: software.amazon.awssdk.core.exception.SdkClientException: Unable to load an HTTP implementation from any provider in the chain. You must declare a dependency on an appropriate HTTP implementation or pass in an SdkHttpClient explicitly to the client builder.
To resolve this I added the AWS Apache HTTP client dependency and updated the code to the following:
SdkHttpClient httpClient = ApacheHttpClient.builder()
        .maxConnections(50)
        .build();
S3Client.builder().httpClient(httpClient).build();
I also had to add the following dynamic proxy configuration for the native image:
[
  ["org.apache.http.conn.HttpClientConnectionManager",
   "org.apache.http.pool.ConnPoolControl",
   "software.amazon.awssdk.http.apache.internal.conn.Wrapped"]
]
After this I am now getting the following stack trace:
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at java.security.cert.PKIXParameters.setTrustAnchors(PKIXParameters.java:200)
at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:120)
at java.security.cert.PKIXBuilderParameters.<init>(PKIXBuilderParameters.java:104)
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:86)
... 76 more
I am running Quarkus 1.2.0 on GraalVM 19.3.1. I am building this via Maven and the provided Docker container for Quarkus. I thought the trust store was added by default (in the build command it looks to be included), but am I missing something? Is there another way to get this to run without explicitly setting the HTTP client on the S3 client?
There is a PR, under review at the moment, that introduces an AWS S3 extension, both JVM and native. The AWS clients are fully "Quarkified", meaning configured via application.properties and enabled for dependency injection. So stay tuned, as it will most probably be available in Quarkus 1.5.0.
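Once that lands, usage should reduce to injecting a pre-configured client. A sketch of what that could look like (the injection behavior is assumed from the PR description, not a final API; class and parameter names are illustrative):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

@ApplicationScoped
public class ObjectFetcher {

    @Inject
    S3Client s3; // built and configured by the extension via application.properties

    byte[] fetch(String bucket, String key) {
        // getObjectAsBytes pulls the whole object into memory -- fine for small files
        ResponseBytes<GetObjectResponse> bytes = s3.getObjectAsBytes(
                GetObjectRequest.builder().bucket(bucket).key(key).build());
        return bytes.asByteArray();
    }
}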
I have integrated my Cloud Foundry account with CloudBees as mentioned in this URL:
http://docs.cloudfoundry.com/docs/dotcom/integration/cloudbees/
and am trying to deploy a few sample applications from GitHub.
The build was successful every time, but when I went for app deployment using this plugin, it gave one exception (the same exception for the 2-3 applications I have tried).
[INFO] Deployment done in 1.2 sec
[cloudbees-deployer] Deploying as (jenkins) to the svcnvghi293 account
[cloudbees-deployer] Deploying null
com.cloudbees.plugins.deployer.exceptions.DeployException: Could not create DeployEvent
at com.cloudbees.plugins.deployer.impl.run.RunEngineImpl.createEvent(RunEngineImpl.java:132)
at com.cloudbees.plugins.deployer.impl.run.RunEngineImpl.createEvent(RunEngineImpl.java:51)
at com.cloudbees.plugins.deployer.engines.Engine.perform(Engine.java:82)
at com.cloudbees.plugins.deployer.DeployPublisher.perform(DeployPublisher.java:95)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:728)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:703)
at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.post2(MavenModuleSetBuild.java:994)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:650)
at hudson.model.Run.execute(Run.java:1530)
at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:477)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:237)
Caused by: java.lang.NullPointerException
at com.cloudbees.plugins.deployer.impl.run.RunEngineImpl$EventImpl.<init>(RunEngineImpl.java:208)
at com.cloudbees.plugins.deployer.impl.run.RunEngineImpl.createEvent(RunEngineImpl.java:124)
... 12 more
Build step 'Deploy applications' marked build as failure
Finished: FAILURE
Does anyone have any idea about this?
Thanks in advance.
After a bit of digging I figured out which account you have.
The issue is that you had left the CloudBees RUN@cloud host service in the list of host services to deploy to, but you had not provided a complete configuration for it; e.g., see the "Application Id cannot be empty" red error text in this screenshot.
I have removed this host section and saved your hellospring job. Build 8 shows a successful deployment.