Show memory usage of a ColdFusion page - coldfusion

ColdFusion has a server monitor that shows "Requests by Memory Usage". Is there a way to show a request's memory usage on the page itself?

You can take a before and after value and subtract the two. You can find the memory currently in use like this:
runtime = CreateObject("java", "java.lang.Runtime").getRuntime();
totalMemory = runtime.totalMemory() / 1024 / 1024; // total heap allocated to the JVM, in MB
freeMemory = runtime.freeMemory() / 1024 / 1024;   // free heap, in MB
usedMemory = totalMemory - freeMemory;             // currently in use, in MB

Related

What determines AWS Redis' usable memory? (OOM issue)

I am using AWS Redis for a project and ran into an Out of Memory (OOM) issue. In investigating the issue, I discovered a couple parameters that affect the amount of usable memory, but the math doesn't seem to work out for my case. Am I missing any variables?
I'm using:
3 shards, 3 nodes per shard
cache.t2.micro instance type
default.redis4.0.cluster.on cache parameter group
The ElastiCache website says cache.t2.micro has 0.555 GiB = 0.555 * 2^30 B = 595,926,712 B memory.
default.redis4.0.cluster.on parameter group has maxmemory = 581,959,680 (just under the instance memory) and reserved-memory-percent = 25%. 581,959,680 B * 0.75 = 436,469,760 B available.
Now, looking at the BytesUsedForCache metric in CloudWatch when I ran out of memory, I see nodes around 457M, 437M, 397M, 393M bytes. It shouldn't be possible for a node to be above the 436M bytes calculated above!
What am I missing; Is there something else that determines how much memory is usable?
I remember reading it somewhere but I cannot find it right now: I believe BytesUsedForCache is the sum of RAM and swap used by Redis to store data and buffers.
ElastiCache's docs suggest that swap should not go higher than 300 MB.
I would suggest checking the SwapUsage metric at that time.
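A quick sanity check of the arithmetic above, using the question's numbers (the 300 MB figure is the swap allowance the answer mentions; 457M is the largest node value observed in CloudWatch):

```python
maxmemory = 581_959_680            # from default.redis4.0.cluster.on
reserved_percent = 25              # reserved-memory-percent
usable = maxmemory * (1 - reserved_percent / 100)
print(int(usable))                 # 436469760 bytes available for cache

observed = 457_000_000             # largest BytesUsedForCache seen
overshoot = observed - usable      # roughly 20.5 MB above the computed limit
swap_allowance = 300 * 1024**2     # the ~300 MB of swap the docs allow
print(overshoot < swap_allowance)  # True: the overshoot fits within swap
```

So if BytesUsedForCache does include swap, the observed values are consistent with the computed 436M limit.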

List VM sizes in Microsoft Azure Compute based on Type or Category

We are trying to list all available sizes for a particular location using the API "GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/vmSizes?api-version=2017-12-01". It returns nearly 22,400 sizes. Does a region really contain this many sizes? Is there an elegant way to get VM sizes based on type?
For Example:
1. Get VM sizes based on General purpose, Memory optimized, Storage optimized etc.
2. Get VM Sizes based on RAM size, CPU count etc.
I used the sample posted by Laurent (link below) and it returned all available VM sizes' names, cores, disks, memory, etc. in the region (use parm location=region). If you put some code around it you should be able to do example 2.
Get Virtual Machine sizes list in json format using azure-sdk-for-python
def list_available_vm_sizes(compute_client, region = 'EastUS2', minimum_cores = 1, minimum_memory_MB = 768):
    vm_sizes_list = compute_client.virtual_machine_sizes.list(location=region)
    for vm_size in vm_sizes_list:
        if vm_size.number_of_cores >= int(minimum_cores) and vm_size.memory_in_mb >= int(minimum_memory_MB):
            print('Name:{0}, Cores:{1}, OSDiskMB:{2}, RSDiskMB:{3}, MemoryMB:{4}, MaxDataDisk:{5}'.format(
                vm_size.name,
                vm_size.number_of_cores,
                vm_size.os_disk_size_in_mb,
                vm_size.resource_disk_size_in_mb,
                vm_size.memory_in_mb,
                vm_size.max_data_disk_count
            ))
list_available_vm_sizes(compute_client, region = 'EastUS', minimum_cores = 2, minimum_memory_MB = 8192)
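For example 1 (grouping by category), the size objects carry no category field, but a rough sketch is possible if you assume the usual Azure naming convention, where the family letter after `Standard_` encodes the series (D = general purpose, E/M = memory optimized, L = storage optimized, F = compute optimized). The mapping below is an assumption for illustration, not part of the API:

```python
import re

# Assumed mapping from Azure size-family letter to category
FAMILY_CATEGORY = {
    'A': 'general purpose', 'B': 'general purpose', 'D': 'general purpose',
    'E': 'memory optimized', 'M': 'memory optimized',
    'L': 'storage optimized', 'F': 'compute optimized',
}

def category_of(size_name):
    """Classify a VM size name such as 'Standard_D2s_v3' by its family letter."""
    m = re.match(r'Standard_([A-Z])', size_name)
    return FAMILY_CATEGORY.get(m.group(1), 'other') if m else 'other'
```

You could then filter the list returned by `virtual_machine_sizes.list` with `category_of(vm_size.name)`.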

Tweepy iterating over tweepy.Cursor(api.friends).items()

I'm trying to get the friends of a user and append them to a list given a condition:
for friend in tweepy.Cursor(api.friends).items():
    if friend not in visited:
        screen_names.append(friend.screen_name)
        visited.append(friend.screen_name)
However I obtain an error:
raise RateLimitError(error_msg, resp)
tweepy.error.RateLimitError: [{u'message': u'Rate limit exceeded', u'code': 88}]
Could you give me any hint on solving this problem? Thanks a lot
By default, the friends method of the API class returns only 20 users per call, and the Twitter API limits you to 15 calls per 15-minute window. Thus you can fetch only 20 x 15 = 300 friends within 15 minutes.
Cursor in tweepy is a way of getting results without managing the cursor value yourself on each call to the Twitter API.
You can increase the number of results fetched per call by including an extra parameter, count.
tweepy.Cursor(api.friends, count = 200)
The maximum value of count is 200. If you have more than 200 x 15 = 3,000 friends, then you need to use the normal api.friends method, maintaining the cursor value yourself and using sleep to space out the calls. See the GET friends/list page for detailed info.
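The rate-limit arithmetic above, spelled out (the numbers are the defaults and limits quoted in this answer):

```python
calls_per_window = 15   # Twitter API limit per 15-minute window
default_page = 20       # users returned per call by default
max_page = 200          # users per call with count=200

# Friends fetchable in one window, without and with count=200
print(default_page * calls_per_window)  # 300
print(max_page * calls_per_window)      # 3000
```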
Since tweepy 3.2+ you can instruct the tweepy library to wait for rate limits. This way you don't have to do that in your code.
To use this feature you would initialize your api handle as follows:
self.api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
The documentation for the new variables is below.
wait_on_rate_limit – Whether or not to automatically wait for rate limits to replenish
wait_on_rate_limit_notify – Whether or not to print a notification when Tweepy is waiting for rate limits to replenish
Per Twitter's API documentation, you have reached your query limit. The rate limits apply to every 15-minute window of querying, so try again in 30 minutes or use a different IP address to hit the API. If you scroll down Twitter's documentation, you will see error code 88.

How to define REST API with pagination

I am defining a simple REST API to run a query and get the results with pagination. I would like to make both client and server stateless if possible.
The client sends the query, offset, and length; the server returns the results. For example, if there are 1000 results and the client sends offset = 10 and length = 20, the server returns the 20 results from #10 through #29 if the total number of results is >= 30, or all results from #10 onward if the total is < 30.
The client also needs to know the total number of results. Since I would like to keep both client and server stateless, the server will always return the total with the results.
So the protocol looks like following:
Client: query, offset, length ----------->
<----------- Server: total, results
The offset and length can be defined optional. If offset is missing the server assumes it is 0. If length is missing the server returns all the results.
Does it make sense? How would you suggest defining such a protocol?
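The server side of the protocol described above can be sketched as a pure function, with list slicing standing in for the real query:

```python
def paginate(all_results, offset=0, length=None):
    """Return the response body: the total plus the requested window.

    A missing offset defaults to 0; a missing length returns all
    results from the offset onward, matching the protocol above.
    """
    total = len(all_results)
    if length is None:
        window = all_results[offset:]
    else:
        window = all_results[offset:offset + length]
    return {"total": total, "results": window}

# 1000 results, offset=10, length=20 -> results #10 through #29
resp = paginate(list(range(1000)), offset=10, length=20)
print(resp["total"], resp["results"][0], resp["results"][-1])  # 1000 10 29
```

Note that slicing already handles the "total < 30" case: the window is simply truncated at the end of the list.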
There is no standard in REST API design.
Since it's a query, not a retrieval of a resource by its id, the search criteria go into query string parameters, followed by the optional offset and length parameters.
GET /resource?criteria=value&offset=10&length=3
Assuming you'd like to use JSON as the response representation, the result can look like this:
{
    "total": 100,
    "results": [
        {
            "index": 10,
            "id": 123,
            "name": "Alice"
        },
        {
            "index": 11,
            "id": 423,
            "name": "Bob"
        },
        {
            "index": 12,
            "id": 986,
            "name": "David"
        }
    ]
}
My way to implement pagination uses implicit information.
The client can only get "pages"; no offset or limit is given by the client.
GET /users/messages/1
The server gives the first page with a predefined number of elements, e.g., 10. The offset is calculated from the page number, so the client doesn't have to worry about the total number of elements. That total can be provided in a header. To retrieve all elements (an exceptional case), the client has to write a loop and increment the page count.
Advantages: cleaner URIs; true pagination; offset, limit, and length are clearly defined.
Disadvantages: getting all elements is hard; flexibility is lost.
Don't overload URIs with meta information; URIs are for resource identification.
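The page-to-offset mapping this answer relies on is simple arithmetic; a sketch, assuming a server-side page size of 10 and 1-based page numbers:

```python
PAGE_SIZE = 10  # predefined by the server, not by the client

def page_window(page):
    """Map a 1-based page number to (offset, end) slice bounds."""
    offset = (page - 1) * PAGE_SIZE
    return offset, offset + PAGE_SIZE

# GET /users/messages/1 -> elements 0..9; page 3 -> elements 20..29
print(page_window(1))  # (0, 10)
print(page_window(3))  # (20, 30)
```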

Log Slow Pages Taking Longer Than [n] Seconds In ColdFusion with Details

(ACF9)
Unless there's an option I'm missing, the "Log Slow Pages Taking Longer Than [n] Seconds" setting isn't useful for front-controller based sites (e.g., Model-Glue, FW/1, Fusebox, Mach-II, etc.).
For instance, in a Mura/Framework-One site, I just end up with:
"Warning","jrpp-186","04/25/13","15:26:36",,"Thread: jrpp-186, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 11 seconds, exceeding the 10 second warning limit"
"Warning","jrpp-196","04/25/13","15:27:11",,"Thread: jrpp-196, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 59 seconds, exceeding the 10 second warning limit"
"Warning","jrpp-214","04/25/13","15:28:56",,"Thread: jrpp-214, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 32 seconds, exceeding the 10 second warning limit"
"Warning","jrpp-134","04/25/13","15:31:53",,"Thread: jrpp-134, processing template: /home/mysite/public_html_cms/wwwroot/index.cfm, completed in 11 seconds, exceeding the 10 second warning limit"
Is there some way to get query string or post details in there, or is there another way to get what I'm after?
You can easily add some logging to your application for any requests that take longer than 10 seconds.
In onRequestStart():
request.startTime = getTickCount();
In onRequestEnd():
request.endTime = getTickCount();
if (request.endTime - request.startTime > 10000) {
    writeLog(cgi.QUERY_STRING);
}
If you're writing a Mach-II, FW/1 or ColdBox application, it's trivial to write a "plugin" that runs on every request, captures the URL or FORM variables passed in the request, and stores them in a simple database table or log file. (You can even capture session.userID, IP address, or whatever else you may need.) If you're capturing to a database table, you'll probably want to skip indexes to optimize insert performance, and you'll need to rotate that table so you're not doing high-speed inserts on a table with tens of millions of rows.
In Mach-II, you'd write a plugin.
In FW/1, you'd put a call to a controller which handles this into setupRequest() in your application.cfc.
In ColdBox, you'd write an interceptor.
The idea is that the log just tells you which pages are consistently slow so you can do your own performance tuning.
For a start, turn on debugging for further details.