Say I have two environments (test and production) with two different URLs. I also have two services (serviceA and serviceB) that need different header values. I could handle this with four environments in Postman:
testServiceA: url for test, header value for serviceA
testServiceB: url for test, header value for serviceB
productionServiceA: url for production, header value for serviceA
productionServiceB: url for production, header value for serviceB
Here I have duplication of both the URLs and the headers. As I add another URL (staging), I need six environments in total:
testServiceA: url for test, header value for serviceA
testServiceB: url for test, header value for serviceB
productionServiceA: url for production, header value for serviceA
productionServiceB: url for production, header value for serviceB
stagingServiceA: url for staging, header value for serviceA
stagingServiceB: url for staging, header value for serviceB
And as I add another service that requires a different header value, I need three more:
testServiceA: url for test, header value for serviceA
testServiceB: url for test, header value for serviceB
productionServiceA: url for production, header value for serviceA
productionServiceB: url for production, header value for serviceB
stagingServiceA: url for staging, header value for serviceA
stagingServiceB: url for staging, header value for serviceB
testServiceC: url for test, header value for serviceC
productionServiceC: url for production, header value for serviceC
stagingServiceC: url for staging, header value for serviceC
How can I avoid this? It would be great if I could choose multiple environments as active. Then I could place a checkmark next to "staging" and "serviceC" for example.
For a solution specific to Paw:
Paw has the concept of environment domains, which allows for easier control over your environment values. Basically, an environment domain can contain multiple environments, each providing its own value for the same underlying variables.
In your case, you could have 3 environment domains (serviceA, serviceB, serviceC), each with 3 environments (test, staging, production).
In general, this allows for a lot of flexibility, as multiple environment domains can be used together in a single request. For instance, one could imagine a Server environment domain with different environments (us-east-1, us-west, ...) combined with, say, a Version environment domain (v1.0, v1.1, v2.0, etc.) in a single request, to check whether version 2.0 works on us-east-1, and so on.
For a solution specific to Postman:
You can use some {{}} intricacies to supercharge some environments.
Environment variables can refer to each other:
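For instance, the environment could define variables along these lines (an illustrative sketch; only mode and the test value -1 come from the explanation below, the other values are placeholders):

    mode: test
    test-some-important-header: -1
    production-some-important-header: <production value>
    staging-some-important-header: <staging value>
    some-important-header: {{{{mode}}-some-important-header}}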
Now, when you refer to the environment variable {{some-important-header}} somewhere, it will actually resolve to {{{{mode}}-some-important-header}}, which in this case is {{test-some-important-header}}, or -1. Every time you want to change mode, you change the mode environment variable to the appropriate value, like production or staging.
It's not the cleanest solution, but it avoids creating a bunch of environments due to coupling.
I'm trying to configure the Google Cloud load balancer to do the following:
I have a website running WordPress on a VM instance, which I want users to reach when they enter outairnet.com.
And I have a separate HTML file that I want users to get when they access outairnet.com/map.
WP is running on a Compute Engine VM, connected to an instance group and a backend service. The separate HTML file is in a storage bucket, connected to a backend bucket.
I've tried to configure a very simple path forwarding rule, which made sense to me. But it ends up with anyone accessing outairnet.com/* reaching WP (which is fine),
but accessing outairnet.com/map doesn't point to the storage bucket with the HTML file; accessing outairnet.com/index.html, however, does point to the separate HTML file.
I think I'm on to the problem but still can't solve it.
It looks like the Google console adds a /* rule even when I try to delete it.
So it's a /* path rule that catches everything, despite there being a more specific rule like /mypath/* in addition.
But after removing it, it just gets re-added automatically for some reason. Why?
It's possible - there are a few steps involved, such as creating a bucket with your static page, adding it as a backend bucket in your load balancer, and creating a new path rule to direct the requests.
And now the details:
Create a new bucket - pick a name you like (outairnet-static or something else that will be meaningful to you, so you don't delete it by accident). You can ignore all the tutorials saying that it has to have the exact name of your domain - since it will only be hosting a file accessible under outairnet.com/mylink/, it will work regardless of the name used. I tested it.
Create a directory in your bucket named exactly as the path under which you want it to be served. If you want outairnet.com/mylink/, then the directory's name has to be mylink. Upload your files into that directory. Name your main index file index.html, unless you want to provide the full file path.
Make the bucket available to everyone.
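If you prefer the command line, making the bucket world-readable can also be done with gsutil (using the example bucket name from the first step):

    gsutil iam ch allUsers:objectViewer gs://outairnet-static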
Go to your LB configuration and edit backend services; add a new backend bucket.
Go to your Host and Path Rules and configure a new path: enter the name of your site and the path (remember that the path value must be /mylink/*), and select the bucket you've just created from the dropdown list.
No changes necessary for the frontend. Save the changes and in a few moments it should be working.
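For reference, the resulting URL map (as shown by gcloud compute url-maps export) would look roughly like this sketch, where PROJECT, wordpress-backend and outairnet-static are placeholder names:

    defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT/global/backendServices/wordpress-backend
    hostRules:
    - hosts:
      - outairnet.com
      - www.outairnet.com
      pathMatcher: main
    pathMatchers:
    - name: main
      defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT/global/backendServices/wordpress-backend
      pathRules:
      - paths:
        - /mylink
        - /mylink/*
        service: https://www.googleapis.com/compute/v1/projects/PROJECT/global/backendBuckets/outairnet-static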
I just added another path rule with just "/" directing to the VM, and that seemed to do it. The only glitch now is that www.outairnet.com/map is fine, but outairnet.com/map without www directs to the VM and not the bucket.
I have an application where I want to serve static files to my customers.
To protect these files I'm using AWS CloudFront, and I've set up my distribution to require a signed URL to access the files.
However, there is one file on my CDN that I want to make public to everyone, with no restricted access required.
I know I could make a second CloudFront distribution without the restriction and serve the file through that one. However, this would make the client resolve two separate (sub)domains.
So ideally I would like all of this to work from a single CloudFront (sub)domain, but I don't know if it's possible.
I looked into signing a URL that lasts forever, but it looks like there are too many things that can "invalidate" the URL before its expiry time, such as the tokens expiring.
You can have a look under the "Behaviors" tab of your CloudFront distribution. There you can specify different actions based on the path that is requested.
So if you want the public path to be at /public, you can add that as a new behavior and, in that same window, set Restrict Viewer Access (Use Signed URLs or Signed Cookies) to No.
There should already be a Default (*) behavior. When the new behavior is added, it should automatically be given higher precedence than the default behavior.
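For reference, in the distribution config JSON (e.g. from aws cloudfront get-distribution-config) the added behavior corresponds to a fragment roughly like this, trimmed to the fields that matter here (/public/* and my-origin are placeholders):

    "CacheBehaviors": {
      "Quantity": 1,
      "Items": [{
        "PathPattern": "/public/*",
        "TargetOriginId": "my-origin",
        "ViewerProtocolPolicy": "allow-all",
        "TrustedSigners": { "Enabled": false, "Quantity": 0 }
      }]
    }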
I have a legacy site and a new site, and I'm using CloudFront to route traffic to the two different server groups based on URL path (e.g. /new goes to the new servers, everything else to the old ones).
In AWS CloudFront, my Default (*) path pattern captures all traffic not caught by the other rules and routes these requests to the legacy site. This site explicitly prevents caching with its headers, and I want to override this.
From the AWS Console, though, it looks like I can't do this.
All the other cache behaviours (e.g. for the /new path pattern) allow me to set these options. Does this mean that CloudFront doesn't allow customisation of TTLs for the default path pattern? If so, the only way to fix these cache headers will be to change them manually at the origin, which I'd prefer to avoid.
Is there a way I can change these settings for my default route?
I have some static files served from nginx.
And I have an nginx conf that directs
http://xx.yy.com/style.css => /web/style/xx.css
Now I want to use AWS CloudFront to serve these static CSS/JS files.
How can I do this in CloudFront?
At the end of the day, I want to be able to dynamically direct requests to different files or folders according to the subdomain.
For example:
http://xx.yy.com/style.css => /web/style/xx.css
http://zz.yy.com/style.css => /web/style/zz.css
http://xx.yy.com/api.js => /web/api/xx.js
http://zz.yy.com/api.js => /web/api/zz.js
CloudFront 'origin' decisions are largely based on two things - URL and path.
Within a single CloudFront distribution you can have multiple 'behaviour' rules. Each rule can have its own origin, so you could say:
For request path:
/foo/
Use origin:
http://foo.origin.com/
Each distribution can have multiple 'alternate domain names', but you cannot say, within a single distribution, 'for this hostname, use this origin'; that can only be specified on a path basis.
At the end of the day, I want to be able to dynamically direct requests to different files or folders according to the subdomain.
http://xx.yy.com/style.css => /web/style/xx.css
http://zz.yy.com/style.css => /web/style/zz.css
Nonetheless, you do have an option here. CloudFront can be configured to whitelist headers, including the Host header. Whitelisted headers become part of the cache key. So if you configure:
Configure 1 CloudFront distribution
Set 2 alternate domain names (xx.yy.com and zz.yy.com)
Whitelist 'Host' as a header to be sent to the origin
On the origin side, configure name-based virtual hosting with the appropriate rewrites (e.g. if xx.yy.com/style.css, then serve xx.css; see the Nginx sketch below)
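In the distribution config JSON (the classic, pre-cache-policy format), whitelisting the Host header corresponds to a fragment roughly like this, trimmed to the relevant fields:

    "ForwardedValues": {
      "QueryString": false,
      "Cookies": { "Forward": "none" },
      "Headers": { "Quantity": 1, "Items": ["Host"] }
    }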
This should result in a working configuration for what you've described, although it does involve more logic in the Nginx layer than would be preferable.
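A minimal sketch of that name-based virtual hosting in Nginx, assuming the file layout from the question (the listen port and paths are illustrative):

    server {
        listen 80;
        server_name xx.yy.com;
        # Serve the subdomain-specific files under the shared paths
        location = /style.css { alias /web/style/xx.css; }
        location = /api.js    { alias /web/api/xx.js; }
    }
    server {
        listen 80;
        server_name zz.yy.com;
        location = /style.css { alias /web/style/zz.css; }
        location = /api.js    { alias /web/api/zz.js; }
    }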
Please note that if you wanted xx.yy.com and zz.yy.com to go to different origins (HTTP servers) for the same directory, you'd need to use different CloudFront distributions. Furthermore, CloudFront doesn't allow you to rewrite the file part of URLs (e.g. /foo.css -> origin/bar.css), so doing this on a per-file basis would be tricky.
Can I have the soap:address location in a WSDL relative to the WSDL location, or at least relative to the server?
For instance I want to write:
<soap:address location="https://exampleserver.com/axis2/services/ExampleService" />
as:
<soap:address location="/axis2/services/ExampleService" />
This would enable faster deployment to multiple servers, like test servers. Also, in the case of axis2c, if I want my service to be usable over both HTTP and HTTPS, life becomes harder for developers using my service, as they can't simply import the WSDL from its default location ("?wsdl").
The WSDL describes to clients the message formats, types, parameters, etc. needed to interact with the web service. This is then used by tools like WSDL2C to generate the code needed for the interaction.
But whether you expose your service over HTTP or HTTPS, the client stub code will be the same. You don't regenerate your client stubs for each endpoint address. The client stays the same; it's the access point that changes.
This address should not be hardcoded in the generated client code; it must be a configurable URL inside the client application.
Sure, you have a URL specified inside the WSDL, and it's a nuisance when you deploy your web service to the dev server, then to staging, and next into production. The endpoints will be different in each environment (maybe multiplied by 2 for HTTP + HTTPS), but at this point your developers are not affected because you don't regenerate the code.
When it comes to accessing the web service, you would still have different addresses (for the dev, staging and prod servers) whether the address is relative to something or absolute. So I don't see how a relative address inside the WSDL helps, since you still have to manage the access points in the client configuration.
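For instance, with an Axis2-generated stub the endpoint can be passed in at construction time, so only a configuration entry changes between environments (a sketch; ExampleServiceStub stands for the stub generated for the service in the question, and the property name is illustrative):

    // Endpoint comes from configuration, not from the WSDL's soap:address
    String endpoint = System.getProperty("example.service.endpoint",
            "https://exampleserver.com/axis2/services/ExampleService");
    ExampleServiceStub stub = new ExampleServiceStub(endpoint);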
There are two ways of getting the WSDL.
One where a hard-coded WSDL is served, for example:
https://hostname/contextname/services/myAPIService/myAPI.wsdl
and another one where a generated WSDL is served, for example:
https://hostname/contextname/services/myAPIService?wsdl
If you use the dynamic option it will use this code:
req.getRequestURL().toString();
to get the URL that will be used in the generated WSDL. This code is in the class ListingAgent (in the package org.apache.axis2.transport.http).
From what you mentioned in your question, you want a relative location because you use the WSDL on multiple servers, so you would need to use the dynamic option.
One problem I found with the dynamic option is that if the location in the original WSDL uses HTTP, the generated one will still use HTTP even if you accessed it over HTTPS. (This happens in version 1.5, which is the one my project is using.)
Another problem is if you are using a load balancer, because the generated WSDL will contain the location of the final server instead of the balancer. An option for this would be to extend the classes AxisServlet and ListingAgent to replace the code mentioned above.
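Rather than patching ListingAgent itself, a lighter-weight sketch (using only the standard servlet API Axis2 runs on, not the Axis2 internals) is a filter that wraps the request so getRequestURL() reports the balancer's public address; balancer.example.com is a placeholder:

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletRequestWrapper;

    public class WsdlLocationFilter implements Filter {
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequestWrapper wrapped =
                    new HttpServletRequestWrapper((HttpServletRequest) req) {
                @Override
                public StringBuffer getRequestURL() {
                    // Advertise the load balancer's address instead of this server's
                    return new StringBuffer("https://balancer.example.com" + getRequestURI());
                }
            };
            chain.doFilter(wrapped, res);
        }
        public void init(FilterConfig config) {}
        public void destroy() {}
    }

Mapping this filter to the Axis2 servlet in web.xml would then make every generated WSDL advertise the balancer's address.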
After a long search, I'm almost sure that soap:address's location attribute has to be an absolute URL. This makes things more complicated if you work with different environments, such as development, test and production.
Maybe a workaround would be to read, on the client side, the first part of the URL from a config file (e.g. https://exampleserver.com) and the final part from the WSDL (e.g. /axis2/services/ExampleService), and combine them to build an absolute path. The former allows you to switch among environments.
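A minimal sketch of that combination on the client side (the file and property names are illustrative):

    // client.properties holds the per-environment base URL,
    // e.g. service.baseUrl=https://exampleserver.com
    java.util.Properties config = new java.util.Properties();
    config.load(new java.io.FileInputStream("client.properties"));
    String base = config.getProperty("service.baseUrl");
    String path = "/axis2/services/ExampleService"; // the relative part from the WSDL
    java.net.URL endpoint = new java.net.URL(base + path);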