Currently, I am exploring Postman for API automation and I am stuck on the continuous integration process.
Is it possible to do continuous integration with Postman using GoCD?
I can only find documentation for CI with Jenkins and Postman.
You can use GoCD to add just about any command-line task:
https://www.go.cd/documentation/user/current/configuration/admin_add_task.html
If you are using Postman collections and newman to run these API tests, you can configure GoCD to run the same command used with Jenkins (as shown on the Postman blog):
http://blog.getpostman.com/2015/09/03/how-to-write-powerful-automated-api-tests-with-postman-newman-and-jenkins/
which is
newman -c jenkins_demo.postman_collection --exitCode 1
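Note that the -c flag belongs to the older newman 2.x CLI. On newman 3.x and later (assuming the collection has been exported or converted to the Collection v2 format), the equivalent command is
newman run jenkins_demo.postman_collection.json
and newer newman versions already exit with a non-zero code when a test fails, so GoCD will mark the task as failed without any extra flag.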
So, I am a beginner (as in, I literally just started) at unit and integration testing using Mocha and Chai for the backend. I want to test a feature where we create a new user, and the new user is created by signing in with Google. How do I test the Google authentication?
Earlier I thought of triggering the /google/callback endpoint directly, but it turns out it is only supposed to receive the response from the Google authentication server.
So, how can I test Google authentication using Mocha and Chai?
Please help!
Thanks in advance!
I am trying to deploy an already built image to Cloud Run using .NET Cloud Client Libraries.
I need exactly the same behavior as gcloud run deploy hello --image=us-docker.pkg.dev/cloudrun/container/hello but with .NET Cloud Client Libraries.
Unfortunately, I cannot find an API that does that in https://cloud.google.com/dotnet/docs/reference.
I also tried downloading Cloud SDK from https://cloud.google.com/sdk/docs/install and inspecting the code with PyCharm.
The API is called the Cloud Run Admin API:
Cloud Run Admin API
There is an SDK for .NET.
Cloud Run Admin API Client Library for .NET
Namespace Google.Apis.CloudRun.v1
Creating a Cloud Run service is fairly complicated. I recommend that you study the REST API first so that you understand the request body. The .NET library models the REST API.
Method: namespaces.services.create
The key item is the service resource:
Resource: Service
There is a quick way to learn the API request body. Create a simple Cloud Run example and then add the command line option --log-http. Save the output to a file and then study the HTTP request parameters and request body to decipher the very large data structures that are required to create a service.
gcloud run deploy --log-http
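For orientation, a minimal request body for namespaces.services.create, using the image from the question, looks roughly like the sketch below. The real payloads captured by --log-http are much larger, so check the field names against the current Resource: Service reference (here the namespace is the project ID):

{
  "apiVersion": "serving.knative.dev/v1",
  "kind": "Service",
  "metadata": {
    "name": "hello",
    "namespace": "YOUR_PROJECT_ID"
  },
  "spec": {
    "template": {
      "spec": {
        "containers": [
          { "image": "us-docker.pkg.dev/cloudrun/container/hello" }
        ]
      }
    }
  }
}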
I wrote two articles on the Cloud Run Admin API:
Google Cloud Run Deep Dive – Understanding the APIs – Part 1
Google Cloud Run Deep Dive – Understanding the APIs – Part 2
Note: I wrote those articles two years ago. Cloud Run has advanced a lot since then. However, these articles will help you understand the low-level details of the service that were not published elsewhere at the time.
I want to build a CI/CD pipeline GitHub app. What CI tool can I leverage to build this?
I want my application to handle the GitHub OAuth, so that as far as the user is concerned they only connect to their GitHub, but behind the scenes I run pipelines through Jenkins, CircleCI, AWS CodePipeline, or something similar.
These all require the user to authorise the apps via their own OAuth, but I'm hoping for a solution where I can pass an existing access token, or clone the repo and send the CI tool the code to execute the pipeline on.
Does anybody know of any CI/CD tools that work by being given a GitHub access token or a clone of the code from GitHub, or would I have to look at rolling my own CI tool for something like this?
If I understood your question correctly, you just need to add a GitHub personal API token to the Jenkins credentials as 'username and password'. You can then use it in your pipelines, as in the sketch below.
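For example, a minimal declarative pipeline sketch; the credentials ID github-token and the repository URL are placeholders you would replace with your own:

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // uses the stored token ("username and password" credentials) to clone the repo
                git url: 'https://github.com/your-org/your-repo.git', credentialsId: 'github-token'
            }
        }
        stage('Call GitHub API') {
            steps {
                // expose the same token to shell steps that talk to the GitHub API
                withCredentials([usernamePassword(credentialsId: 'github-token',
                                                  usernameVariable: 'GH_USER',
                                                  passwordVariable: 'GH_TOKEN')]) {
                    sh 'curl -s -u "$GH_USER:$GH_TOKEN" https://api.github.com/user'
                }
            }
        }
    }
}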
I'll also leave some useful links for you:
GitHub Permissions and API token Scopes for Jenkins
GitHub Branch Source plugin - this will automate job creation on the Jenkins side and create webhooks on the GitHub side.
CloudBees YouTube channel - this video is about configuring the GitHub Branch Source plugin.
I have written several webservices in Python and Ruby and would like to integrate them with WSO2 Integration Studio. I tried following the instructions in the docs about sending messages to services here, but they cover Java microservices only. Am I supposed to deploy my services elsewhere and only use HTTP endpoints to integrate them? Thank you.
You can only copy JAR files to the /wso2/msf4j/deployment/microservices folder and deploy them in the MSF4J profile of WSO2 EI.
Am I supposed to deploy my services elsewhere and only use HTTP endpoints to integrate them?
Yes, this is the way to achieve it.
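For example, if one of your Python or Ruby services is already running somewhere and exposes a REST endpoint, a rough Synapse API configuration created in Integration Studio could forward calls to it; the API name, context, and backend URL below are placeholders:

<api xmlns="http://ws.apache.org/ns/synapse" name="UsersAPI" context="/users">
    <resource methods="GET" uri-template="/list">
        <inSequence>
            <!-- call the externally deployed Python/Ruby service over HTTP -->
            <call>
                <endpoint>
                    <http method="get" uri-template="http://python-service.example.com:5000/users"/>
                </endpoint>
            </call>
            <!-- return the backend response to the client -->
            <respond/>
        </inSequence>
    </resource>
</api>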
I configured a Parse Server on AWS Elastic Beanstalk using this guide, and I've tested it and it all works fine.
Now I can't find a way to deploy Parse Dashboard on my server.
I did deploy Parse Dashboard on my localhost and connected it to the application on the server, but this way I cannot manage (add and remove) my apps.
Another problem is that Parse Dashboard is missing cloud code by default. I found this on GitHub, but I can't understand where to add the requested endpoints. Is it something like adding app.use('/scripts', express.static(path.join(__dirname, '/scripts'))); to the index.js file?
In order to deploy parse-dashboard to your EC2 instance, you need to follow the Deploying Parse Dashboard section on the parse-dashboard GitHub page:
parse-dashboard GitHub page
Please make sure that when you deploy parse-dashboard you are using HTTPS and also basic authentication (this is also part of the guide).
Now regarding the cloud code: the ability to deploy cloud code via the Parse CLI and to view the Node.js code in the dashboard are parse.com features and are not available in parse-server. Cloud code in parse-server is handled by modifying the main.js file, which exists under the cloud folder, and deployment must be done manually by you. The big advantage of parse-server cloud code is that you can use any Node.js module you want from there; you are not restricted to the modules that were used by parse.com.
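For example, a minimal cloud/main.js could look like this (hello is just an illustrative function name, written in the callback style used by parse-server 2.x):

// cloud/main.js
Parse.Cloud.define('hello', function(request, response) {
    // any Node.js module can be required and used here
    response.success('Hello from parse-server cloud code');
});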
Another point about the dashboard: what you can do is create an Express application, add parse-server and parse-dashboard as middleware to it, and deploy the whole application to AWS. Then you can enjoy both parse-server (which will be available under the /parse path, unless you changed it to something else) and parse-dashboard (which will be available under the /dashboard path).
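A minimal index.js sketch of that setup; the app ID, master key, URLs, and dashboard user are placeholders:

// index.js
var express = require('express');
var ParseServer = require('parse-server').ParseServer;
var ParseDashboard = require('parse-dashboard');

// parse-server mounted as Express middleware
var api = new ParseServer({
    databaseURI: 'mongodb://localhost:27017/dev',
    cloud: __dirname + '/cloud/main.js',
    appId: 'myAppId',
    masterKey: 'myMasterKey',
    serverURL: 'https://example.com/parse'
});

// parse-dashboard mounted as Express middleware, pointing at the same app
var dashboard = new ParseDashboard({
    apps: [{
        serverURL: 'https://example.com/parse',
        appId: 'myAppId',
        masterKey: 'myMasterKey',
        appName: 'My App'
    }],
    users: [{ user: 'admin', pass: 'somePassword' }]
}, { allowInsecureHTTP: false });

var app = express();
app.use('/parse', api);           // parse-server under /parse
app.use('/dashboard', dashboard); // parse-dashboard under /dashboard
app.listen(1337);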
Enjoy :)