For 3.0.0-M6, I installed as per
https://docs.wso2.com/display/AM300/Installation+Guide
and then published the Pet Store API described at:
https://docs.wso2.com/display/AM300/Create+and+Publish+an+API
Then, when trying to start the gateway, this message is received:
ballerina: no bal files in the package: org/wso2/carbon/apimgt/gateway
I've seen an older post at
unable to start ballerina as gateway
where some developer suggested adding an environment variable to have the publisher copy data directly into the file structure of the gateway, but it doesn't describe how to set the environment variable.
Is this still a viable solution? Is there any point in installing and running the five processes locally and expecting deployment of APIs to work locally? It seems to me the product is still a few milestones away from proper testing on localhost.
The docs on this are still a bit sparse...
The documentation has been updated now on how to start the gateway [1].
[1] https://docs.wso2.com/display/AM300/Installation+Guide
Related
We have APIs visible in QA after we added re_indexing = 2, but the same thing won't work in our STG or PROD environments. I am not sure how to debug this issue or where the actual problem is, since it is the same setup in all our environments and only verified and tested changes are deployed to STG and then to PROD.
Any idea why it's working in QA but not in the higher environments?
We have checked the logs, and all the APIs are correctly being added to the Synapse configuration in STG and PROD.
Also, when trying to create another API with the same name, we get a "DUPLICATE API already exists" error.
We are using an EKS-based implementation and deployment pattern 2.
You can query the Solr data as mentioned in this blog post to narrow down the issue: https://medium.com/@sewmijayaweera/how-to-query-solar-data-16a7e051d5cf
If the data is not shown, the only remaining option is to reindex the data in the upper environments as well.
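To illustrate the suggestion above: if your deployment exposes the Solr index over HTTP, a query like the following can show whether the API metadata was actually indexed. This is only a sketch; the host, port, core name (`registry-indexing`), and field name are assumptions, and the linked post covers the WSO2-specific details.

```shell
# Hypothetical Solr query to check whether API metadata was indexed.
# Host, port, core name, and query field are assumptions; adjust them
# to match your deployment.
curl "http://localhost:8983/solr/registry-indexing/select?q=overview_name:*&wt=json&rows=10"
```

If the query returns no documents for APIs you know exist, the index is stale in that environment and reindexing is the way forward.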
aws-cli/2.1.21 Python/3.7.4 Darwin/19.6.0
amplify CLI 4.41.2
macOS 10.15.6
Question:
The Amplify fetching process takes too long.
Please help.
Here is what I tried:
amplify init
amplify add api -> REST API
amplify push
Fetching started during the last command.
The console shows:
'Fetching updates to backend environment: dev from the cloud. '
I waited an hour, but the process did not complete.
Please tell me what I should check.
Other notes:
Previously, I manually deleted the Amplify-related resources
(e.g. CloudFormation stacks, S3 buckets, and more).
This may have been a mistake.
What is strange about your situation is that you are pushing information, but I'm not certain you mentioned having a stable cloud version. Meaning this: if there is an unstable version in the cloud and a heavily modified local version, it can cause issues (as I have seen when working with it). My advice would be to ensure you have a cleanly designed cloud version with Amplify Studio; https://docs.amplify.aws/cli/ can help with general commands, though "amplify --help" is another option.
Aside from that, "amplify pull" would overwrite your local system, but assuming the cloud version is clean, you could then push it. I have actually saved files off to my desktop and then copied them back in, and that has worked for me. Fundamentally, the issue is that it is a cloud system you are relying on: major local modifications will often be ignored or overwritten. Wish I could help more.
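The recovery flow suggested above might look like this (a sketch using standard Amplify CLI commands; whether `amplify pull` succeeds depends on the cloud stack still being intact, which may not hold if resources were deleted manually):

```shell
# Compare the local backend categories against what is deployed in the cloud.
amplify status

# Overwrite the local backend definition with the cloud version.
amplify pull

# Once local and cloud agree, push changes again.
amplify push
```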
I am trying to use a development endpoint to interactively run and edit ETL scripts, but there seem to be some issues with the development endpoint just after creating it: I am getting errors in the Scala/Python REPL and am also unable to open an SSH tunnel to the remote interpreter.
Let me explain what I did exactly. I created a development endpoint in the AWS console with all the default configurations. While creating the development endpoint, I only provided three things: the development endpoint name, an IAM role, and my public SSH key. This is how it looks after creation.
Right after creating the endpoint, I connected to the Spark/Python REPLs. I am able to connect to them successfully, but within a couple of minutes of connecting, the REPL starts throwing errors without my writing a single line of code. This happens in all the REPLs on the development endpoint.
Also, when I try to open an SSH tunnel to the remote interpreter to connect my local Zeppelin notebook, it throws "bind: Cannot assign requested address".
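For context, the tunnel command in the AWS Glue documentation has roughly the following shape (the key path and endpoint hostname below are placeholders). Binding the local side explicitly to 127.0.0.1 is one thing worth trying when hitting "bind: Cannot assign requested address", since that error usually means the default local bind address could not be used:

```shell
# SSH tunnel from local port 9007 to the dev endpoint's remote interpreter.
# Key file and endpoint hostname are placeholders; 169.254.76.1:9007 is the
# remote interpreter address used in the AWS Glue documentation.
ssh -i ~/.ssh/glue-dev-key.pem -vnNT \
    -L 127.0.0.1:9007:169.254.76.1:9007 \
    glue@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
```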
A couple of things are working, though:
I am able to SSH to the endpoint.
I created a SageMaker notebook in AWS Glue attached to this development endpoint, and the notebook seems to be working fine, although it surely adds additional cost and I don't want to continue using it.
Can anyone please help with what I am doing wrong? Am I missing any important steps that need to be done on the machine right after creating the development endpoint?
Thanks in advance!
I'm not sure about this particular error, but if you are working with smaller datasets, you may want to use the Docker implementation instead: it does not add any additional cost, and you can carry on with your development.
You can refer to this blog post on how to set it up:
https://towardsdatascience.com/develop-glue-jobs-locally-using-docker-containers-bffc9d95bd1
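As a sketch of what the linked post describes (the image tag and start-script path are assumptions based on the Glue 1.0 era of the `amazon/aws-glue-libs` image; verify them against the post and Docker Hub):

```shell
# Pull the AWS Glue 1.0 local-development image.
docker pull amazon/aws-glue-libs:glue_libs_1.0.0_image_01

# Run it with Jupyter exposed on port 8888 and your AWS credentials mounted
# read-only so local jobs can reach S3 and the Glue catalog.
docker run -itd -p 8888:8888 -p 4040:4040 \
    -v ~/.aws:/root/.aws:ro \
    --name glue_jupyter \
    amazon/aws-glue-libs:glue_libs_1.0.0_image_01 \
    /home/jupyter/jupyter_start.sh
```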
I have pushed my Textract code to the staging server, and now I am receiving an error.
It works on the development system; I can't understand why this is happening.
I am using .NET Core 3.0.
I am following the code sample provided here: [https://github.com/aws-samples/amazon-textract-code-samples/tree/master/src-csharp]
I have a doubt regarding the IAM credentials. I installed the AWS SDK tools for Windows and the AWS CLI on the staging server, and after that ran the configuration commands (mentioned here: [https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#cli-quick-configuration]) from the command prompt. I thought the IAM credentials might be saved into the environment, but no success.
The code that uploads a file to the S3 bucket works, but the request to the Textract service crashes with:
System.Net.Http.HttpRequestException: Response status code does not indicate success: 400 (Bad Request)
I can't understand what the issue is.
On development, it works.
Any help?
Finally, we found the solution. It was a very weird issue; we never thought it would come up.
First thing:
The message thrown by the API was not clear, so we hosted the application on another server with an upgraded Windows OS. There we learned the problem was related to the keys generated during the creation of the IAM user.
Second thing:
We realized that our application (Amazon's Textract DLL) was not able to read the keys we had configured from here.
When you configure credentials through the CLI, it creates two files for saving them, and the SDK reads them from there. Refer to the screenshot below.
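For reference, the two files the CLI writes look roughly like this (paths shown for Windows; the key values below are the standard AWS documentation placeholders, not real credentials). Note that both files live under the user profile directory, which is why the IIS Load User Profile setting mattered here.

```ini
; %USERPROFILE%\.aws\credentials  (~/.aws/credentials on Linux/macOS)
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

; %USERPROFILE%\.aws\config  (~/.aws/config on Linux/macOS)
[default]
region = us-east-1
```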
The files were there, but the application was still not able to locate them on the staging server. After searching for 4-5 days and talking to AWS support, nothing happened.
Finally, we dove into IIS, made a few changes, and discovered it was happening at the IIS level. In IIS there is a setting on the application pool of the instance called Load User Profile. By default it is false, but when we set it to true, it created a user profile, just like the one created for a system log-in.
Refer to the screenshots below for changing this.
It creates a user like this.
Hope it helps someone.
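For anyone who prefers the command line over the IIS Manager UI, the same setting can be flipped with appcmd (the pool name below is a placeholder; appcmd.exe ships with IIS):

```shell
# Enable "Load User Profile" for an application pool so the worker process
# gets a user profile (and thus a resolvable %USERPROFILE%\.aws directory).
%windir%\system32\inetsrv\appcmd.exe set apppool /apppool.name:"DefaultAppPool" /processModel.loadUserProfile:true
```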
As per the documentation https://docs.wso2.com/display/AM250/Deployment+Patterns#DeploymentPatterns-Pattern2, I am trying to set up WSO2 similar to Pattern 2, but from the documentation it is not quite clear what steps need to be followed.
Question:
1. How can I start the Store, Publisher, and Traffic Manager on the same server?
Do I have to start them with a single startup script, or start them independently? When I try starting them independently, I see there are conflicts. How do I resolve those conflicts?
Regards,
Deepak
Question: 1. How can I start Store, Publisher and Traffic Manager on same server? Do I have to start them in single startup script or start them independently?
If you start the default WSO2 API Manager (without any profile parameter), it starts in "all-in-one" mode. This mode contains all features (store, publisher, key manager, gateway, traffic manager, ...). You don't have to do anything extra.
For the key manager and gateway, you should specify the profile parameter, which disables non-essential features for the specified profile.
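As a sketch, starting nodes with profiles looks like this (the profile names follow the WSO2 API-M 2.x documentation; verify them against your version's docs):

```shell
# All-in-one node (store, publisher, traffic manager, ...): no profile needed.
sh bin/wso2server.sh

# Dedicated gateway worker node.
sh bin/wso2server.sh -Dprofile=gateway-worker

# Dedicated key manager node.
sh bin/wso2server.sh -Dprofile=api-key-manager
```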