Refresh DNS in ColdFusion 9

Does anyone know of a way to flush the cached DNS entries in the JVM in ColdFusion 9 without restarting the service?

You might try setting a custom value in the java.security file to change the TTL for DNS lookups, but I don't believe there is a programmatic way to clear the cache.
http://tjordahl.blogspot.com/2004/10/cfmx-and-dns-caching.html
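For reference, the relevant knobs live in the JVM's java.security file (a sketch; the exact path depends on which JRE your ColdFusion instance uses, and a JVM restart is required for changes to take effect):

```properties
# {jre}/lib/security/java.security
# How long successful lookups are cached, in seconds
# (-1 means cache forever, 0 disables caching)
networkaddress.cache.ttl=30
# How long failed lookups are cached, in seconds
networkaddress.cache.negative.ttl=10
```

With a short TTL like this, a changed DNS record is picked up within seconds instead of living in the JVM cache until the service restarts.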

Related

CouchDB with Clouseau plugin is taking more storage than expected

I've been using an AWS instance running CouchDB as a backup (via replication) for my application's IBM Cloudant database.
Everything seems to work fine, but I've noticed the volume size on the AWS instance growing steadily (it keeps filling up, with the annoying problem of having to grow a volume once the partition is out of space).
Actual use of storage
The data in the screenshot is using almost 250 GB. I would like to know the possible reason for this; my guess is that the Clouseau plugin is using extra space to enable the search index queries.
As I'm not an expert with this database, could anyone explain why this is happening and how I could mitigate the issue?
Best regards!
If you are only backing up a Cloudant database to a CouchDB instance via replication, you should not need Clouseau enabled.
Clouseau is only required for search indexes; if you are not running search queries against your backup database, you can disable Clouseau there. The indexes are not copied by replication.
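For context, the backup replication described here is just a call to CouchDB's _replicate endpoint; in this sketch the URLs and database names are placeholders:

```shell
# Continuous one-way replication from Cloudant into the local CouchDB
curl -X POST http://localhost:5984/_replicate \
  -H 'Content-Type: application/json' \
  -d '{"source": "https://ACCOUNT.cloudant.com/mydb",
       "target": "mydb",
       "continuous": true}'
```

Nothing in this path touches Clouseau; search indexes are only built on the target if you define and query them there.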

AWS Amplify Connecting to GoDaddy - Documentation Unclear - Redirects Too Many Times

I am trying to connect my Amplify app to a GoDaddy domain, and the AWS instructions are not clear on how to do this.
Following these instructions I created a CNAME record to point to my Amplify app.
(Image from the documentation)
I have a "master.xxxxxxxx.amplifyapp.com" and a "feature.xxxxxxxx.amplifyapp.com"; am I supposed to use one of these, or just "xxxxxxxx.amplifyapp.com"?
It seems from the docs that these records can take up to 2 days to update, and I do not want to waste 4 days attempting this by trial and error.
Edit
Following @Rodrigo M's answer I used the 'master.xxxxxxxx.amplifyapp.com' address for the CNAME record, but when I go to the page all I see is the error:
This page isn’t working xxxxx.domain.com redirected you too many times.
And then when I look in the Network tab I see that the page did a bunch of 302 redirects where the name and the initiator were "Index.html".
Does anyone have any ideas of what is going wrong?
Each of the AWS Amplify domains that you reference refers to a branch of your app, e.g. master or feature. Use the full domain name, e.g. master.xxxxxxxx.amplifyapp.com, as the target of your CNAME record for the branch you want to expose on your custom domain.
The standard DNS propagation warnings all say to allow 24 to 48 hours, but in practice it's usually much quicker, so don't worry too much about waiting two days.
I can see your DNS TTL is set to 1 hour. This value is how long resolvers will cache your DNS records, which means a change can take up to an hour to be picked up throughout the internet. You could drop it to 5 minutes or less if you want to do trial-and-error testing or make quick switches to a different branch.
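If you want to check this yourself, dig shows the remaining TTL (the second column, in seconds) for each record, so you can confirm what resolvers are caching; the host name here is a placeholder:

```shell
# Query the CNAME record and show only the answer section
dig +noall +answer www.example.com CNAME
# The answer line has the form: <name>  <ttl>  IN  CNAME  <target>
```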
GoDaddy doesn't support ANAME/ALIAS records, so you can't point the apex domain at Amplify directly. However, you can forward the domain without www:
Scroll down to the Forwarding section of the GoDaddy DNS page and set up a temporary (302) HTTP forward from yourdomain.com to www.yourdomain.com.
It took about 30 minutes for this to take effect for me.

Best practices while deploying a web app in S3, cloudFront and Route53 [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have a static web app (SPA) deployed in S3, served through CloudFront, with the domain routed by Route53. Now I want Route53 and CloudFront to have the maximum TTL in their respective caches. There is a similar question, but it is outdated.
My questions:
Is setting the CloudFront cache TTL to one year (365 days) a good idea, given that whenever the content in S3 is updated we can invalidate the cache using the API or console?
Assuming that the alias record does not change often, is setting the Route53 record TTL to 2 days (48 hours) correct? If we do have to change it, do we need to be cautious and wait 2 days for the change to take effect?
I believe that setting the Route53 and CloudFront caches to the maximum will give users the best experience (lowest latency). Please correct me if I am wrong.
Q1: If you are pretty sure your objects do live that long, it might be OK to cache them in CloudFront for 1 year. You can always invalidate your objects using the web console or with a script like this:
#!/bin/sh
# Older AWS CLI versions needed: aws configure set preview.cloudfront true
# Use the epoch timestamp as the caller reference; the original date +"%S"
# only returns the current second (0-59), so repeated runs could collide.
INVALIDATION_ID=$(date +%s)
INVALIDATION_JSON="{
  \"DistributionId\": \"<YOUR_DISTRIBUTION_ID>\",
  \"InvalidationBatch\": {
    \"Paths\": {
      \"Quantity\": 1,
      \"Items\": [\"/*\"]
    },
    \"CallerReference\": \"$INVALIDATION_ID\"
  }
}"
aws cloudfront create-invalidation --cli-input-json "$INVALIDATION_JSON"
Please note: if you need to invalidate, you CANNOT invalidate the users' browser caches. So I would only choose a high setting like that for files of which I am absolutely certain they won't change (for example, videos).
I found it useful to choose my cache times according to what Google recommends. You'll find some input here.
However, I would not cache an SPA that aggressively: I assume you will have changes there pretty often.
Q2: I think it is general best practice to set the Route 53 TTL to a higher number. Just remember that you then cannot switch DNS quickly. Before a planned DNS switch, lower the TTL a few days in advance. As you are using AWS, with alias records this is not much of a problem; those switches happen without hassle.
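As a concrete sketch of that TTL-lowering step with the AWS CLI (the hosted-zone ID, record name, and target below are placeholders):

```shell
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "d111111abcdef8.cloudfront.net."}]
      }
    }]
  }'
```

Alias records are the exception: they carry no TTL of their own, which is part of why switching between AWS resources is painless.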
Generally speaking, I agree with your approach. You sacrifice some flexibility, but it's usually worth it.

What's an easily scalable way to set a cookie on my domain?

I am trying to do some simple cookie tracking and need an easily scalable way to set a cookie. The setup only needs to set a cookie; no server-side logic, uniqueness, or token is required. Something as simple as "HAS_VISITED=true;" is all I really need. Is there some cloud service that does this? It needs to be on my own domain, so I can't have another domain do it. I've looked into Varnish to set cookies, but that means setting up a server that will scale. The scale could be very large (> 4k requests/sec), so I don't really trust myself to set up a load-balancer/EC2 configuration with any real confidence.
I am really hoping that someone has already solved this. If there isn't a service to do this what would the cheapest setup (CPU/resource wise) be?
You can do this with Varnish alone. Note that setting req.http.Cookie only changes the request header sent to your backend; to set a cookie in the visitor's browser, add a Set-Cookie header to the response, like this:
set resp.http.Set-Cookie = "HAS_VISITED=true; Path=/"; # inside sub vcl_deliver

File uploading with ColdFusion, too big of file timing out?

A client has the admin ability to upload a PDF to their respective directory and have it listed on their website. All of this works dandy until a PDF reaches a certain file size that makes the server time out. This causes an error and the file uploaded will not succeed.
As mentioned in the title, we are using ColdFusion with a <cffile> upload command. Are there any Java/jQuery/Flash modules or applications that could resolve this issue?
Edit: For clarification, this is the web server timing out and not ColdFusion.
You can change the setting in ColdFusion Administrator > Settings > Request Size Limits.
On the action page, you can use CFSETTING to extend the timeout, allowing the page to run longer than it otherwise is allowed:
<cfsetting requesttimeout="{seconds}">
Obviously, replace {seconds} with the number of seconds you want to allow.
For clarification, this only helps if it is CF timing out, and not the web server or the client (browser).
Most web servers also have a file size limit for uploads. Make sure it is set to a reasonable size.
You might want to consider using the cffileupload tag. It's a Flash-based uploader that might give you a better result.
Alternatively, you might be able to find some way using a flash upload tool to break up the upload into chunks and put it back together again somehow to meet the hard limits of your web server.
If you figure it out please be sure to share it here, this is an interesting problem!
You will need to pay attention to the IIS requestLimits > maxAllowedContentLength
setting, as well as the request timeout setting in the ColdFusion Administrator.
If an uploaded file exceeds 30 MB (the IIS default limit), IIS will throw a 404 error.
I suggest you increase the setting (I changed mine to 300 MB) to the maximum you might expect, then change the timeout setting in ColdFusion to suit the file size and the bandwidth available to your web hosting site, in concert with the bandwidth available to your clients (worst case).
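For reference, that IIS limit is the maxAllowedContentLength attribute under request filtering in web.config; the value is in bytes (300 MB ≈ 314572800):

```xml
<system.webServer>
  <security>
    <requestFiltering>
      <!-- Default is 30000000 bytes (~30 MB) -->
      <requestLimits maxAllowedContentLength="314572800" />
    </requestFiltering>
  </security>
</system.webServer>
```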
You ought to test the upload with an appropriately sized file to make sure it all works, but make sure that the site you test from has bandwidth equivalent to your clients'. E.g. test from a site that uses ADSL.