I have implemented Authorize.net Automatic Recurring Billing (ARB) using XML (in test mode). I need to test my work, but I don't know how.
Does anyone know how to do this?
To test it, simply sign up for a developer account and then use their development server to process ARB subscriptions. It works just like the live server does, so you can fully test all of the functionality before going live and be confident that it will all work.
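For reference, the sandbox accepts the same ARBCreateSubscriptionRequest XML as production; only the URL and credentials differ. A minimal sketch in Python (the login ID, transaction key, and card values below are placeholders, and 4111111111111111 is the standard Visa test number):

```python
# Sketch: create a test ARB subscription against the Authorize.net sandbox.
# Credentials and card data are placeholders, not real values.
import urllib.request

SANDBOX_URL = "https://apitest.authorize.net/xml/v1/request.api"

def build_arb_request(api_login_id: str, transaction_key: str, amount: str) -> str:
    """Build a minimal ARBCreateSubscriptionRequest XML payload."""
    return f"""<?xml version="1.0" encoding="utf-8"?>
<ARBCreateSubscriptionRequest xmlns="AnetApi/xml/v1/schema/AnetApiSchema.xsd">
  <merchantAuthentication>
    <name>{api_login_id}</name>
    <transactionKey>{transaction_key}</transactionKey>
  </merchantAuthentication>
  <subscription>
    <name>Test subscription</name>
    <paymentSchedule>
      <interval><length>1</length><unit>months</unit></interval>
      <startDate>2030-01-01</startDate>
      <totalOccurrences>12</totalOccurrences>
    </paymentSchedule>
    <amount>{amount}</amount>
    <payment>
      <creditCard>
        <cardNumber>4111111111111111</cardNumber>
        <expirationDate>2030-12</expirationDate>
      </creditCard>
    </payment>
  </subscription>
</ARBCreateSubscriptionRequest>"""

def send(xml: str) -> bytes:
    """POST the XML to the sandbox and return the raw response body."""
    req = urllib.request.Request(SANDBOX_URL, data=xml.encode("utf-8"),
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The response XML will contain a result code and, on success, a subscription ID you can then look up in the sandbox merchant interface.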
I am using Cloud Firestore + Cloud Functions + Firebase Auth to support my game.
I developed the main part of the app with unit tests in the app, plus TypeScript tests for the Cloud Functions. Now I want to add security rules to secure the data.
When I do so, requiring the calls to be authenticated, all my unit tests in Unity (naturally) fail, as I do not authenticate a user but mock them as a data representation of the user in the db.
I want to keep using my unit tests in Unity while still requiring the real db to demand authentication.
I have tried looking around for a mock auth or an auth test environment, but found nothing except the rules-unit-testing library.
Seeing its specialized logic for mocking users makes me think that I am approaching this the wrong way by trying to do it in Unity. My question is: how can I continue to run game tests in Unity, which require interacting with the Firestore server, while keeping security rules?
I am answering my own question after more time.
My analysis was that I ran into issues because I had coupled my code too tightly: server logic lived on the client side and broke when I introduced security rules. I decided to move the logic to Cloud Functions and keep only simple client-side calls (sets, gets, updates, HTTP functions).
So if someone runs into similar problems (the architecture hampering the use of best practices), I suggest rethinking the architecture. It feels obvious when written down...
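With the server logic in Cloud Functions, the client-facing rules can be as simple as "only an authenticated user may touch their own document". A minimal sketch (the `players` collection name is hypothetical):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Hypothetical per-user document: only its owner may read or write it.
    match /players/{uid} {
      allow read, write: if request.auth != null && request.auth.uid == uid;
    }
  }
}
```

Anything more complex (cross-player writes, leaderboards, etc.) then lives behind Cloud Functions, which run with admin privileges and bypass these rules.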
Have fun coding all ^_^
I am currently planning to do some web application vulnerability testing on an EC2 server with OWASP ZAP.
From my very quick Google search, I found that AWS states that penetration testing is allowed without prior approval (https://aws.amazon.com/security/penetration-testing/).
However, to double-check, I am wondering if anyone in the community has done this without issue.
Thanks!!!
Yes, I frequently ran ZAP scans in AWS while I was at Mozilla. They were, of course, all against apps that I was permitted to test.
You should be fine unless someone complains. If they do, Amazon is likely to send you a warning and then disable your account if you don't reply with a good explanation, or if it keeps happening.
My goal is to build an application that can dynamically monitor my stock portfolio (stock options, actually). So I am building my business logic with a TDD approach using C# on .NET Core. I haven't thought much about the interface because the following is true:
1) My broker is ETrade, so I will have to authenticate and use their API for my position information
2) I need this application to run from 9:30 AM - 4:00 PM EST Monday - Friday
As I near completion of the business logic for my first MVP, I am starting to think about where I will deploy the final solution, and hence I am seeking the community's feedback.
I have heard of, but not worked much with, microservices (AWS, Azure, etc.), so I'm not sure if that is the direction I want to look. (Also, I have a tight timeline and don't want to have to learn too much to get this thing deployed, but I am open to any solution.) Excluding microservices and the cloud, I have considered the following:
a) "I could run the program from a Console application"?
(answer) I would have to either:
(a) get a dedicated server to do so, or
(b) try to ensure that I can leave a laptop running at home or something, blah, blah
(conclusion) Both are plausible decisions.
b) "I could run the program as a Windows Service"
(answer) I would have to either
(a) (same as above)
(b) (same as above)
(conclusion) Both are plausible decisions.
c) "I could run the program as a Web Site"
(answer) I would have to either
(a) (same as above)
(b) (same as above)
(conclusion) Both are plausible decisions.
d) "I could investigate the Cloud (Microservices)"
(answer) ???
(conclusion)
So, in closing: given the up-time requirement between those hours, and that I would like to be able to access the app from any internet browser, I have logic that needs to ping various endpoints roughly every minute during market hours. I am not sure how I would handle this with a web application, because if (by chance) the browser is closed, the web application stops running and thus would defeat my needs! Does the cloud help here? Maybe I should just use a Windows Service and make my logs accessible on the web. Or I deploy the TraderBot as a Windows Service and also build a web application to receive real-time intel from the TraderBot service, its logs, and/or the DB? Not sure, but I appreciate any knee-jerk responses you all have!
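Whichever host ends up running the bot, the 9:30–4:00 weekday window is easy to gate in code rather than in the scheduler. A minimal sketch (Python for illustration; it deliberately ignores market holidays and half days):

```python
# Sketch: decide whether the polling loop should do work right now.
# Naive about holidays/half-days; only checks weekday and time of day.
from datetime import datetime, time
from zoneinfo import ZoneInfo

EASTERN = ZoneInfo("America/New_York")
OPEN, CLOSE = time(9, 30), time(16, 0)

def is_market_open(now: datetime) -> bool:
    """True on weekdays between 9:30 and 16:00 US Eastern."""
    local = now.astimezone(EASTERN)
    return local.weekday() < 5 and OPEN <= local.time() < CLOSE
```

The long-running service (console app, Windows Service, or a scheduled cloud job) then just sleeps or skips its tick whenever this returns false.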
I really like to connect pieces of tech to solve complex problems. Though it's not that complex.
Solution 1: Cloud-based, specifically on AWS
Use AWS Lambda (serverless compute) to hit the API for prices or whatever info you are seeking, then store the results in DynamoDB (a NoSQL DB). Use CloudWatch Rules (a serverless cron job) to invoke your Lambda periodically.
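A hedged sketch of that Lambda in Python with boto3 (the `Quotes` table name, the symbol, and the `fetch_quote` helper are assumptions, not a prescribed design):

```python
# Sketch of the scheduled Lambda: fetch a quote, store it in DynamoDB.
# Table name "Quotes" and the fetch_quote helper are hypothetical.
import time

def build_item(symbol: str, price: str) -> dict:
    """Shape one DynamoDB item; price kept as a string to avoid float issues."""
    return {"symbol": symbol, "ts": int(time.time()), "price": price}

def handler(event, context):
    """Entry point invoked by the CloudWatch rule."""
    import boto3  # provided by the Lambda runtime
    table = boto3.resource("dynamodb").Table("Quotes")
    price = fetch_quote("AAPL")  # hypothetical call to the broker/quotes API
    table.put_item(Item=build_item("AAPL", price))
```

Keeping the item-shaping logic in its own function (`build_item`) makes the part worth unit-testing testable without AWS credentials.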
Then build an SPA (single-page application) to view the values stored in DynamoDB. It can also be a static website hosted on S3.
Or
A mobile app can also serve the purpose of viewing the data from DynamoDb.
Solution 2: Mobile-Only
Why not build the app purely for mobile, i.e. iOS or Android? Check here: I've coded an app just to track the prices of different alt-coins on different exchanges.
With a mobile-only app, the app fetches the prices periodically (using the alarms API in the case of Android), stores them in its local database (SQLite in the case of Android), and then you can open the app at any time to see the latest values.
More solutions can be thought of, but I think the above are good approaches to this problem, rather than buying a VPS or running your laptop 24x7. #ThinkCloud
PS: These are initial thoughts only; ask more to refine the solution... :)
I'm building a website to allow people to donate to a local charity quickly and easily. The charity accepts direct donations, but its primary function is "per mile" style pledges, only with pull-ups. In the past, they have collected the pledges ("I'll pay $1 per pull-up"), then manually contacted people for payment after the event. This isn't very slick and is very time-consuming.
What I'd like to do is collect a pledge and payment information, then charge people automatically after the event. From what I've seen, I should put a hold/authorization on their account, then capture it for the appropriate amount after the event. But reauthorizing only allows up to 115% of the original amount, and I can't very well just authorize a large amount and let it sit for two months before reauthorizing and capturing it.
I know this can be done, but I haven't messed with this side of things before, and the REST API from paypal doesn't have an obvious solution. Is there something I'm missing? Should I be going about this a different way?
You can use reference transactions. I would recommend sticking with the Classic API for now, though. REST isn't as mature yet and doesn't have all the same functionality quite yet.
So in the classic API you would use Express Checkout and/or Payments Pro. You can process an original authorization and then simply void it, or use the card verification process with Payments Pro.
You won't need to capture an original amount, so you won't need to worry about the 115% cap on the capture.
Instead, you'll use the DoReferenceTransaction API to process any amount you need at any time from that user's account.
With Express Checkout you have to be sure to include a billing agreement in the setup. This guide outlines that whole process.
With Payments Pro you just do the original card verification / auth and then pass that auth ID into the DoReferenceTransaction API.
In either case, if you're working with PHP this PayPal PHP SDK will make all of the API calls very quick and easy for you.
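For a feel of what the call itself looks like: Classic API requests are flat name-value-pair (NVP) payloads POSTed to the endpoint. A sketch in Python rather than PHP, with placeholder credentials and billing agreement ID:

```python
# Sketch: charge a stored billing agreement via DoReferenceTransaction
# (PayPal Classic NVP API). All credential values are placeholders.
from urllib.parse import urlencode
import urllib.request

SANDBOX_NVP = "https://api-3t.sandbox.paypal.com/nvp"

def build_reference_txn(user: str, pwd: str, signature: str,
                        reference_id: str, amount: str) -> str:
    """Build the NVP payload charging `amount` against a billing agreement."""
    return urlencode({
        "METHOD": "DoReferenceTransaction",
        "VERSION": "124.0",
        "USER": user, "PWD": pwd, "SIGNATURE": signature,
        "REFERENCEID": reference_id,   # billing agreement or original auth ID
        "PAYMENTACTION": "Sale",
        "AMT": amount,
        "CURRENCYCODE": "USD",
    })

def send(payload: str) -> bytes:
    """POST the payload to the sandbox endpoint; response is also NVP."""
    req = urllib.request.Request(SANDBOX_NVP, data=payload.encode())
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

So after the event, the site would loop over stored agreement IDs, compute each pledge total, and fire one of these calls per donor.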
The company where I work is a software vendor with a suite of applications. There are also a number of web services, and of course they have to be kept stable even if the applications change. We haven't always succeeded at this, and sometimes a customer finds that a service does not behave as before after upgrading.
We now want to handle this better. In general, web services shouldn't change, and if they have to, at least we will know about it and document the change.
But how do we ensure this? One idea is to compare the WSDL files against the previous versions at every release. That will make sure the interfaces don't change, but it won't detect behavioral changes, for example if a bug is introduced in some common library.
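The WSDL comparison itself can be a few lines of scripting in a release check; a sketch in Python (the file names are placeholders, and as noted this only catches contract drift, not behavioral regressions):

```python
# Sketch: detect interface drift by diffing the current WSDL against a
# committed snapshot. An empty diff means the contract is unchanged.
import difflib

def wsdl_diff(snapshot: str, current: str) -> list:
    """Return a unified diff of the two WSDL texts (empty list = no change)."""
    return list(difflib.unified_diff(
        snapshot.splitlines(), current.splitlines(),
        fromfile="snapshot.wsdl", tofile="current.wsdl", lineterm=""))
```

A release job would fetch the live `?wsdl`, run this against the version-controlled snapshot, and fail the build (or at least flag it for documentation) on any non-empty diff.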
Another idea is to build up a suite of service tests, for example using soapUI. But then we'll never know if we have covered enough cases.
What are some best practices regarding this?
I think you can definitely be confident in the stability of the services if you keep your service tests updated with the latest changes in the service; I think this is one of the best practices people follow before deploying.
Also, in general, what probably matters most is how well the unit testing is done by the developers writing the components (libraries) used by the services. Are those unit tests being updated with the changes in the components used by the service?
There are two kinds of changes for a web service: breaking changes and non-breaking changes. A breaking change is something like changing the signature of a web method or changing a datacontract schema. A non-breaking change is something like adding a new web method or adding an optional member to a datacontract. In general, your client should continue to work across a non-breaking change. I don't know which technology you are using, but use versioning in the service namespace and datacontract namespace, following W3C recommendations. You can even continue to host different versions at different endpoints. This way your clients will break if they try to use a new version of your service without regenerating the proxy from the new version of the WSDL, or they can continue to use the old version.
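For the side-by-side hosting mentioned above, a hypothetical WCF configuration fragment (the service and contract names are made up for illustration):

```xml
<!-- Hypothetical: host v1 and v2 of the same service at different endpoints,
     so old clients keep the old contract while new ones opt in to v2. -->
<service name="Orders.OrderService">
  <endpoint address="v1" binding="basicHttpBinding"
            contract="Orders.IOrderServiceV1" />
  <endpoint address="v2" binding="basicHttpBinding"
            contract="Orders.IOrderServiceV2" />
</service>
```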
Some WCF specific links are
http://msdn.microsoft.com/en-us/library/ms731060.aspx
http://msdn.microsoft.com/en-us/library/ms733832.aspx
I wouldn't consider a behaviour change to be a change in the SOA sense; that is more like fixing defects.
IMO, aside from monitoring the WSDL for changes (which is really only necessary if you have a willy-nilly "promote-to-production" deployment strategy), the only way to really ensure that everything is operational and stable is to perform continuous, automated, periodic functional testing with a test suite that provides complete coverage of both the WSDL and the underlying application functionality, including edge cases. The test cases should be version-controlled just like the app and WSDL, and should be developed in parallel with new versions of the app (not afterward, as a reaction).
This can all be automated with SoapUI. Ideally, log the results somewhere they can be accumulated and reported on a dashboard, so that if something breaks, you know when it broke and can hopefully correlate that to an event such as an application update, or something more benign such as a service pack being pushed, electrical work being performed, etc.
However... do as I say, not as I do. I have been unsuccessful in pushing this strategy at work. Your votes will tell me whether I should push harder or do something else!