Exporting WSO2 API

While exporting an API, I am getting the error below. Please suggest a fix.
G:\WSO2\apimcli>apimcli export-api -n PizzaShackAPI -v 1.0.0 -r admin -e dev -k
apimcli: Error while exporting Reason: Get https://localhost:9443/carbon/admin/login.jsp: Auto redirect is disabled
Exit status 1
G:\WSO2\apimcli>apimcli export-api -n PizzaShackAPI -v 1.0.0 -r admin -e dev
apimcli: Error while exporting Reason: Get https://localhost:9443/api-import-export-2.6.0-v0/export-api?name=PizzaShackAPI&preserveStatus=true&provider=admin&version=1.0.0: x509: certificate signed by unknown authority
Exit status 1

Make sure you have deployed the same version of the api-import-export WAR that you configured in the add-env command [1].
apimcli add-env -n production \
--registration https://localhost:9443/client-registration/v0.14/register \
--apim https://localhost:9443 \
--token https://localhost:8243/token \
--import-export https://localhost:9443/api-import-export-2.6.0-v10 \
--admin https://localhost:9443/api/am/admin/v0.14 \
--api_list https://localhost:9443/api/am/publisher/v0.14/apis \
--app_list https://localhost:9443/api/am/store/v0.14/applications
In the above case, it is api-import-export-2.6.0-v10.
[1] https://docs.wso2.com/display/AM260/Migrating+the+APIs+and+Applications+to+a+Different+Environment#Example-AddEnv
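To double-check which WAR is actually deployed (and therefore which path to put in --import-export), you can list the hot-deployment directory; this is only a sketch assuming the default install location from this question:
G:\WSO2\apimcli>dir G:\WSO2\wso2am-2.6.0\repository\deployment\server\webapps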

For the x509 "certificate signed by unknown authority" error, you should create a self-signed certificate and add it to the client-truststore.jks file at G:\WSO2\wso2am-2.6.0\repository\resources\security\client-truststore.jks. That worked for me.
This is how to create self-signed certificates: http://niranjankaru.blogspot.com/2016/01/create-your-own-ssl-certificate-for.html
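For reference, a minimal sketch of importing the certificate into that truststore with keytool (the alias and certificate file names below are placeholders; "wso2carbon" is the stock client-truststore password in a default API-M pack, so adjust it if yours differs):
G:\WSO2\apimcli>keytool -import -trustcacerts -alias apimcli-local -file mycert.pem -keystore G:\WSO2\wso2am-2.6.0\repository\resources\security\client-truststore.jks -storepass wso2carbon
[alias "apimcli-local" and "mycert.pem" are placeholders for your own values; re-run the export command afterwards to verify]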

In my case, the issue turned out to be version compatibility among apimcli, the import/export WAR file, and the WSO2 API-M server.
The issue occurred because the version listed as compatible by WSO2 (api-import-export-2.6.0-v10) did not work properly with our API-M server; after lowering the WAR version, it worked properly.
WSO2 API-M version: 2.6.0
Import/Export tool version: APIMCLI v2.0.1
[The zip file downloaded for apimcli was ready to use; no additional configuration was needed in my case]
Import/Export WAR file version: api-import-export-2.5.0-v1
[The WAR file was hot-deployed to wso2am/2.6.0/repository/deployment/server/webapps/; see the sketch below]
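For clarity, the hot-deploy step above is just a matter of copying the WAR into the server's webapps directory, roughly as follows (paths as in this answer; the server should pick the WAR up automatically while hot deployment is enabled):
$ cp api-import-export-2.5.0-v1.war wso2am/2.6.0/repository/deployment/server/webapps/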
The commands executed are below.
Exported an already-created API from the DEV environment:
$ ./apimcli export-api -n ProfileManagementNJ -v v1.0.0 -r admin -e dev -k
Successfully exported API!
Find the exported API at /home/stwso2/.wso2apimcli/exported/apis/dev/ProfileManagementNJ_v1.0.0.zip
Imported the above exported API to the ST environment:
$ ./apimcli import-api -k -f /home/stwso2/.wso2apimcli/exported/apis/dev/ProfileManagementNJ_v1.0.0.zip -e st --preserve-provider false
Successfully imported API
The actual error message details, captured from the console log, are shown below:
$ ./apimcli export-api -n ProfileManagementNJ -v 1.0.0 -r admin -e st -k --verbose
Executed ImportExportCLI (apimcli) on Wed, 30 Oct 2019 13:41:52 UTC
[INFO]: Insecure: true
[INFO]: export-api called
[INFO]: ExportAPI: URL: https://172.26.41.4:9443/api-import-export-2.6.0-v10/export-api?name=ProfileManagementNJ&version=1.0.0&provider=admin&preserveStatus=true
apimcli: Error while exporting Reason: Get https://172.26.41.4:9443/carbon/admin/login.jsp: Auto redirect is disabled
Exit status 1
source: https://docs.wso2.com/display/AM260/Migrating+the+APIs+to+a+Different+Environment#Example-exportAPI

Related

Cannot login to an environment using WSO2 apictl tool

I'm using WSO2 API Controller 3.1.4 and API Manager 3.1.0.
First I added an environment using the below command and it was successfully added.
rocky@ProBook-450-G5:/data/wso2-products/apictl-3.1.4-linux-x64/apictl$ ./apictl add-env -e test \
> --apim https://localhost:9443 \
> --registration https://localhost:9443/client-registration/v0.16/register \
> --token https://localhost:8243/token
Successfully added environment 'test'
Then I tried to log in to the above-created test environment using the following command.
rocky@ProBook-450-G5:/data/wso2-products/apictl-3.1.4-linux-x64/apictl$ ./apictl login test -u admin -p admin -k --verbose
For the above command, I received the error response below.
Executed ImportExportCLI (apictl) on Wed, 19 Aug 2020 09:49:15 +0530
[INFO]: Insecure: true
Successfully added environment 'test'
Warning: Using --password in CLI is not secure. Use --password-stdin
Getting ClientID, ClientSecret: Status - 404
Error: %!s(<nil>)
Body:
Error occurred while login : Request didn't respond 200 OK: 404
When I tried the client-registration REST API directly, I got a successful 200 response.
Is there any issue in the command that I used to create the environment?
You should add the environment as below according to the documentation [1].
./apictl add-env -e test \
--apim https://localhost:9443 \
--registration https://localhost:9443 \
--token https://localhost:8243/token
You should not specify the registration endpoint as https://localhost:9443/client-registration/v0.16/register. That format is for the older versions (APIM 3.0.0 + APICTL 3.0.x [2]).
[1] https://apim.docs.wso2.com/en/latest/learn/api-controller/getting-started-with-wso2-api-controller/#add-an-environment
[2] https://apim.docs.wso2.com/en/3.0.0/learn/api-controller/getting-started-with-wso2-api-controller/#add-an-environment
Refer here for a demo of the above correct use case. (Please make sure to remove the environment using "./apictl remove env test" before adding the environment again; the full sequence is sketched below.)
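Putting that together, a minimal sketch of the corrected sequence for APIM 3.1.0 with apictl 3.1.x (endpoints as in this answer; the password is prompted for rather than passed with -p):
$ ./apictl remove env test
$ ./apictl add-env -e test \
--apim https://localhost:9443 \
--registration https://localhost:9443 \
--token https://localhost:8243/token
$ ./apictl login test -u admin -k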

Django hosting with Azure Web App Server Error 500

I followed this tutorial, as well as two others, trying to host my project on Azure: https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-python-postgresql-app?tabs=bash#clone-the-sample-app
I managed to host the sample web app used in the tutorial, but I could not host my own project.
I keep getting "Server Error 500". I've spent around 36 hours trying to fix the problem.
I checked the application logs - nothing
I checked the kudu/scm logs - nothing
I looked under "App Service logs" and checked the ftp logs - nothing
I checked that all the files had been uploaded to "<>.scm.azurewebsites.net/wwwroot/". The static files uploaded successfully.
I went to "Web SSH" and installed all the dependencies with "pip install -r requirements.txt",
then ran "python manage.py runserver" with no errors, but it would not connect on "127.0.0.1:8000" or "localhost:8000".
I spent around 6 hours searching for answers; I tried everything and nothing worked.
I set WEBSITES_PORT to 8000 (tried different ports and removed this setting after no luck).
I changed DEBUG between False and True - that didn't work.
I did set all the necessary environment variables (e.g., DB_HOST, DB_PASSWORD, ...); a CLI sketch of this step is shown after this list.
The App Service plan is F1 (free)
I went to all the pages on my web app and got Server Error 500 on every page except when logging into admin; after logging into admin I got the error again.
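For what it's worth, those application settings can also be set from the Azure CLI; a minimal sketch with placeholder resource group, app name, and values (this assumes the az CLI is installed and logged in):
$ az webapp config appsettings set --resource-group <my-rg> --name <my-app> \
--settings DB_HOST=<db-host> DB_PASSWORD=<db-password> WEBSITES_PORT=8000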
Possible solutions I thought might work:
I might be missing an important "Application setting"?
One of the dependencies might be causing the problem, but I highly doubt it.
I don't know; please help.
This is roughly what the logs kept saying:
2020-06-24T08:28:13.331Z INFO - Starting container for site
2020-06-24T08:28:13.331Z INFO - docker run -d -p 5480:8000 --name forexflowcom_0_136ed024 -e WEBSITE_SITE_NAME=forexflowcom -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=forexflowcom.azurewebsites.net -e WEBSITE_INSTANCE_ID=9072c805cf2bc663ced034398777a5d5f6115a51e64a73b6fc69b73f64c8660e -e HTTP_LOGGING_ENABLED=1 appsvc/python:3.7_20200101.1
2020-06-24T08:28:16.751Z INFO - Initiating warmup request to container forexflowcom_0_136ed024 for site forexflowcom
2020-06-24T08:28:28.970Z INFO - Container forexflowcom_0_136ed024 for site forexflowcom initialized successfully and is ready to serve requests.
2020-06-24T09:34:28.003Z INFO - Starting container for site
2020-06-24T09:34:28.010Z INFO - docker run -d -p 5757:8000 --name forexflowcom_1_86357e3d -e WEBSITE_SITE_NAME=forexflowcom -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=forexflowcom.azurewebsites.net -e WEBSITE_INSTANCE_ID=9072c805cf2bc663ced034398777a5d5f6115a51e64a73b6fc69b73f64c8660e -e HTTP_LOGGING_ENABLED=1 appsvc/python:3.7_20200101.1
2020-06-24T09:34:31.507Z INFO - Initiating warmup request to container forexflowcom_1_86357e3d for site forexflowcom
2020-06-24T09:34:49.002Z INFO - Container forexflowcom_1_86357e3d for site forexflowcom initialized successfully and is ready to serve requests.
2020-06-24T09:38:04.238Z INFO - Starting container for site
2020-06-24T09:38:04.240Z INFO - docker run -d -p 7958:8000 --name forexflowcom_2_79f5bea0 -e WEBSITE_SITE_NAME=forexflowcom -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=forexflowcom.azurewebsites.net -e WEBSITE_INSTANCE_ID=9072c805cf2bc663ced034398777a5d5f6115a51e64a73b6fc69b73f64c8660e -e HTTP_LOGGING_ENABLED=1 appsvc/python:3.7_20200101.1
2020-06-24T09:38:08.317Z INFO - Initiating warmup request to container forexflowcom_2_79f5bea0 for site forexflowcom
2020-06-24T09:38:23.838Z INFO - Waiting for response to warmup request for container forexflowcom_2_79f5bea0. Elapsed time = 15.5210597 sec
2020-06-24T09:38:41.054Z INFO - Container forexflowcom_2_79f5bea0 for site forexflowcom initialized successfully and is ready to serve requests.
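As a side note, the application log stream can also be tailed from the Azure CLI, which will usually surface the Django traceback behind a 500; a sketch with placeholder names, assuming the az CLI:
$ az webapp log config --resource-group <my-rg> --name <my-app> --docker-container-logging filesystem
$ az webapp log tail --resource-group <my-rg> --name <my-app>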
EDIT:
I found the solution.
In settings.py I had:
try:
    from .local_settings import *
except ImportError:
    print("No local file, your in production")
After removing this block, it worked.

Something wrong when deploying chaincode for Hyperledger v1.0

I have tried to use Docker Toolbox to set up Hyperledger Fabric v1.0 on my local machine.
I followed this document:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
But when I tried to deploy chaincode:
$ node deploy.js
I got this error message:
info: Returning a new winston logger with default configurations
info: [Chain.js]: Constructed Chain instance: name - fabric-client1, securityEnabled: true, TCert download batch size: 10, network mode: true
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric COP service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a KeyValueStore to save keys, no store was passed in, using the default store C:\Users\daniel\.hfc-key-store
[2017-04-15 22:14:29.268] [ERROR] Helper - Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:8054]
at ClientRequest.<anonymous> (C:\Users\daniel\node_modules\fabric-ca-client\lib\FabricCAClientImpl.js:304:12)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at Socket.socketErrorListener (_http_client.js:310:9)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1278:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[2017-04-15 22:14:29.273] [ERROR] DEPLOY - Error: Failed to obtain an enrolled user
at ca_client.enroll.then.then.then.catch (C:\Users\daniel\helper.js:59:12)
at process._tickCallback (internal/process/next_tick.js:103:7)
events.js:160
throw er; // Unhandled 'error' event
^
Error: Connect Failed
at ClientDuplexStream._emitStatusIfDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:201:19)
at ClientDuplexStream._readsDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:169:8)
at readCallback (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:229:12)
Is this an issue with being unable to connect to the CA, or is there another cause?
Edit:
Environment:
OS: Windows 10 Professional Edition
Docker Toolbox: 17.04.0-ce
Go: 1.7.5
Node.js: 6.10.0
My steps:
1. Open the Docker Quickstart Terminal and enter the following commands:
$curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null; tar -xvf sfhackfest.tar.gz
$docker-compose -f docker-compose-gettingstarted.yml build
$docker-compose -f docker-compose-gettingstarted.yml up -d
$docker ps
I confirmed that six containers had been started.
2. Download the examples and install the modules:
$curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
// This link didn't work, so I downloaded the required files from the fabric-sdk-node GitHub repository
$npm install --global windows-build-tools
$npm install
3. Try to deploy the chaincode:
$node deploy.js
There were several problems, not the least of which is that the documentation you followed is outdated and was written for a preview release of Hyperledger Fabric. Those docs are actually in the process of being removed, as we need to update our examples/samples.
You mentioned Docker Toolbox - so are you trying to run all of this on Windows or Mac?
UPDATE:
So one of the issues with Docker Toolbox or Docker for Windows is that you cannot use localhost / 127.0.0.1 as the address when trying to communicate from apps on the host (even in the QuickStart Terminal) to the endpoints of the Docker containers. When the QuickStart Terminal first launches Docker, you'll see that it outputs the IP address of the endpoint you should use when communicating with exposed ports.
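For reference, assuming the Toolbox VM still has the default machine name, that IP can also be printed at any time with:
$ docker-machine ip default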
I was having the same issue while following the latest "Writing Your First Application" tutorial (http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html). I had installed all the pre-requisites and the fabric-samples and started the local network.
When I got to the step of enrolling the Admin user, $ node enrollAdmin.js, I was getting the same error message as above, Error: connect ECONNREFUSED, followed by the localhost domain.
As the first answer suggests, the root cause is that I'm running Docker Toolbox. I'm developing on an older Mac, OSX v10.9.5, so I couldn't use Docker for Mac.
To fix the issue, I replaced 'localhost' in the enrollAdmin.js code with the IP from Docker Toolbox.
Here are the steps I took:
Started Docker with Applications > Docker Quickstart Terminal
Copied the IP from this sentence: docker is configured to use the default machine with IP...
Opened the copy of enrollAdmin.js from the fabric-samples/fabcar directory
Found this code:
// be sure to change the http to https when the CA is running TLS enabled
fabric_ca_client = new Fabric_CA_Client('http://localhost:7054', tlsOptions , 'ca.example.com', crypto_suite); // <-- This is the line to change
Replaced 'localhost' with the Docker IP, leaving the port :7054 as is (a scripted version of this substitution is sketched after these steps)
Saved
Re-ran the command, $ node enrollAdmin.js
The script connected to the CA and successfully completed the Admin enrollment.
On to the next step!
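The same substitution can also be scripted rather than edited by hand; a rough sketch, assuming the fabric-samples/fabcar layout and the default Toolbox machine name:
$ cd fabric-samples/fabcar
$ DOCKER_IP=$(docker-machine ip default)   # IP of the Docker Toolbox VM
$ sed -i.bak "s/localhost:7054/${DOCKER_IP}:7054/" enrollAdmin.js   # keeps enrollAdmin.js.bak as a backup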

Run an ESP locally for development

When I try to run a local ESP, I get this error:
ERROR:Fetching service config failed(status code 403, reason Forbidden, url ***)
I have a newly created service account; this account works fine with the gcloud CLI.
System: macOS Sierra with Docker for Mac
This is the command that I use to start the container:
docker run -d --name="esp" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s 2017-02-07r5 -v echo.endpoints.****.cloud.goog -p 8082 -a localhost:9000 -k /esp/serviceaccount.json
UPDATE:
I found the mistake: I had set the version as the service name and the service name as the version.
Now I get no error, but it still does not work. This is the console output from the container. From my point of view everything is fine, but it does not work; I can't call the proxy at localhost:8082/***
INFO:Constructing an access token with scope https://www.googleapis.com/auth/service.management.readonly
INFO:Service account email: aplha-api@****.iam.gserviceaccount.com
INFO:Refreshing access_token
INFO:Fetching the service configuration from the service management service
nginx: [warn] Using trusted CA certificates file: /etc/nginx/trusted-ca-certificates.crt
This is the corrected command:
docker run -d --name="esp-user-api" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s echo.endpoints.***.cloud.goog -v 2017-02-07r5 -p 8082 -a localhost:9000 -k /esp/serviceaccount.json
Aron, I assume:
(1) you are following this user guide: https://cloud.google.com/endpoints/docs/running-esp-localdev
(2) And you do have a backend running on localhost:9000
Have you issued a curl request to localhost:8082/*** as suggested in that user guide? Does the curl command get stuck, or does it return any error message?
If you don't have a local backend running yet, I would recommend following the user guide above to run one. Note that the guide instructs you to run it on port 8080, so you'll need to change your docker run command from "-a localhost:9000" to "-a localhost:8080" as well.
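For clarity, with the guide's local backend on port 8080, the docker run command from the question would change only in the -a flag, roughly:
docker run -d --name="esp-user-api" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s echo.endpoints.***.cloud.goog -v 2017-02-07r5 -p 8082 -a localhost:8080 -k /esp/serviceaccount.json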
Also, please note this user guide is for a Linux environment. We haven't tried this setup in a Mac environment yet. We did notice one user got this working with Docker on Windows with extra work, by setting the backend to the IP of the Docker NIC. Note that "-a" is short for "--backend".
see https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/google-cloud-endpoints/4sRaSkigPiU/KY8g46NSBgAJ

Change Domain Problems

I have been trying to change the domain and have been running into issues. I hope someone can help me out here; I have documented the steps I went through below.
I requested the DNS resource from our DNS admin team.
.mike-cf.company.com canonical name = mike-cf.company.com.
Name: mike-cf.company.com
Address: 10.52.88.123
I then installed with the -D switch:
bash < <(curl -s -k -B http://raw.github.com/cloudfoundry/vcap/master/dev_setup/bin/vcap_dev_setup -D mike-cf.company.com)
I noticed that not all the config files in ~/cloudfoundry/.deployments/devbox/config changed, so I changed those using the sed command:
$ cd ~/cloudfoundry/.deployments/devbox/config
$ sed -i 's/.vcap.me/.newdomain.com/g' *.yml
I restarted and things were looking good; I was able to run vmc target with no problem, but I was not able to register a user.
$ vmc target http://api.mike-cf.company.com/
Successfully targeted to [http://api.mike-cf.company.com]
$ vmc register --email mike@company.com --passwd password
Creating New User: Error 100: Bad request
There were no entries in uaa.log, only this in cloud_controller.log:
[2012-09-25 09:06:46.712110] cc - pid=20400 tid=8ee9 fid=4757 DEBUG -- ---> async\nrequest: post http://uaa.mike-cf.company.com/oauth/token\nheaders: {"content-type"=>"application/x-www-form-urlencoded", "accept"=>"application/json", "authorization"=>"Basic Y2xvdWRfY29udHJvbGxlcjpjbG91ZGNvbnRyb2xsZXJzZWNyZXQ="}\nbody: grant_type=client_credentials
[2012-09-25 09:06:46.718338] cc - pid=20400 tid=8ee9 fid=4757 DEBUG -- <---\nresponse: 404\nheaders: {"SERVER"=>"nginx", "DATE"=>"Tue, 25 Sep 2012 16:06:46 GMT", "CONTENT_TYPE"=>"text/html", "CONTENT_LENGTH"=>"162", "CONNECTION"=>"close"}\nbody: \r\n404 Not Found http://uaa.mike-cf.company.com: 404 trace ["/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/bundler/gems/uaa-dad29c9030f4/gem/lib/uaa/http.rb:56:in json_parse_reply'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/bundler/gems/uaa-dad29c9030f4/gem/lib/uaa/token_issuer.rb:157:in request_token'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/bundler/gems/uaa-dad29c9030f4/gem/lib/uaa/token_issuer.rb:128:in client_credentials_grant'", "/home/mike/cloudfoundry/cloud_controller/cloud_controller/app/models/uaa_token.rb:80:in access_token'", "/home/mike/cloudfoundry/cloud_controller/cloud_controller/app/models/uaa_token.rb:96:in user_account_instance'", "/home/mike/cloudfoundry/cloud_controller/cloud_controller/app/controllers/users_controller.rb:13:in create'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/action_controller/metal/implicit_render.rb:4:in send_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/abstract_controller/base.rb:150:in process_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/action_controller/metal/rendering.rb:11:in process_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/actionpack-3.0.14/lib/abstract_controller/callbacks.rb:18:in block in process_action'", "/home/mike/cloudfoundry/.deployments/devbox/deploy/rubies/ruby-1.9.2-p180/lib/ruby/gems/1.9.1/gems/activesupport-3.0.14/lib/active_support/callbacks.rb:446:in `run_3844132275556875466__process_action_2824786929479189233_callbacks'"]
[2012-09-25 09:06:46.896386] cc_events - pid=20400 tid=8ee9 fid=4757 INFO -- [2012-09-25 09:06:46 -0700, :USER, "N/A", "POST:/users", "mike@company.com", :FAILED, "Bad request"]
I have found the problem: there seemed to be an issue with the vmc version I was using. Once I downgraded vmc, I was able to connect.
gem uninstall vmc
gem install --version '= 0.3.18' vmc
Here is the thread that led me to the answer:
https://groups.google.com/a/cloudfoundry.org/forum/?fromgroups=#!topic/vcap-dev/enY2qKnSJWI
Is it possible to see the content of the UAA config file? Make sure it has the correct IP address specified for the NATS message bus; the line should look something like this:
mbus: nats://nats:nats@192.168.1.10:4222/
If that IP address is incorrect, it needs to be changed. I take it the server it is installed on has a static IP address? Was it assigned before you installed VCAP?
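A quick way to check what address is currently configured is to grep the config directory used earlier in this question (the path is the dev_setup default from above):
$ grep "mbus:" ~/cloudfoundry/.deployments/devbox/config/*.yml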