Google Compute Engine memory Utilization - google-cloud-platform

I got a "recommendation" to add more memory to my 1 vCPU, 1.75 GB Google Compute Engine instance. I added a GB, and all is quiet.
However it has increased my overall cost about 50% (if I am reading it right - a task in and of itself), and I'd like to know what my memory utilization is.
I see it tracking CPU, Disk, and network, but not memory. I looked at the monitoring options and don't see memory as an option for GCE.
How do I monitor memory over time? I want to make sure I am running efficiently AND cheaply.
(See this related question, which never got an answer: Memory usage metric identifier Google Compute Engine.)

There are a couple of methods you could use to monitor the memory usage of a Compute Engine instance.
The first involves the use of the Stackdriver Monitoring Agent. This can be installed on the instance, and provides additional metrics including memory usage. For more information on this please see here.
Alternatively you could use a more 'Linux-esque' approach. For example, you could use the watch command to track used/free memory at intervals and output this to a file. The following command would allow you to do this:
watch -n 2 'free -m | tee -a memory.log'
This appends your memory usage to an output file ('memory.log') at 2-second intervals (to change the interval, change the number 2 to however many seconds you require).
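If you'd rather have timestamped samples without keeping `watch` in the foreground, a plain loop works too. A sketch (the log file name, the 2-second interval, and the 3-sample count are arbitrary choices):

```shell
# Append a timestamped memory snapshot to memory.log every 2 seconds,
# here for 3 samples; raise the count (or use `while true`) for
# continuous logging.
for i in 1 2 3; do
    free -m | awk -v ts="$(date '+%F %T')" \
        '/^Mem:/ {print ts, "total:"$2"MB", "used:"$3"MB", "free:"$4"MB"}' >> memory.log
    sleep 2
done
```

Run it under `nohup` or in a `tmux` session if you want it to survive your SSH session ending.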

Related

Querying table with >1000 columns fails

I can create and ingest data into a table with 1100 columns, but when I try to run any kind of query on it, like get all vals:
select * from iot_agg;
It looks like I cannot read it with the following error
io.questdb.cairo.CairoException: [24] Cannot open file: /root/.questdb/db/table/iot_agg.d
at io.questdb.std.ThreadLocal.initialValue(ThreadLocal.java:36)
at java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:180)
at java.lang.ThreadLocal.get(ThreadLocal.java:170)
at io.questdb.cairo.CairoException.instance(CairoException.java:38)
at io.questdb.cairo.ReadOnlyMemory.of(ReadOnlyMemory.java:135)
at io.questdb.cairo.ReadOnlyMemory.<init>(ReadOnlyMemory.java:44)
at io.questdb.cairo.TableReader.reloadColumnAt(TableReader.java:1031)
at io.questdb.cairo.TableReader.openPartitionColumns(TableReader.java:862)
at io.questdb.cairo.TableReader.openPartition0(TableReader.java:841)
at io.questdb.cairo.TableReader.openPartition(TableReader.java:806)
...
Ouroborus might be right in suggesting that the schema could be revisited, but regarding the actual error from Cairo:
24: OS error, too many open files
This is dependent on the OS that the instance is running on, and is tied to system-wide or user settings, which can be increased if necessary.
It is relatively common to hit limits like this with database engines that handle large numbers of files; the maximum number of open files is typically configured via kernel variables and per-user limits. Checking the current limit for open files can be done on Linux and macOS with
ulimit -n
You can also use ulimit to set this to a value you need. If you need to set it to 10,000, for example, you can do this with:
ulimit -n 10000
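Note that `ulimit -n` only raises the soft limit for the current shell session; to make the change permanent on Linux you would typically edit `/etc/security/limits.conf`. To see how close a running process actually is to the limit, you can count its open descriptors via `/proc`. A sketch (`$$` is just the current shell's own pid; substitute the real QuestDB pid):

```shell
# Soft limit on open files for this shell session
ulimit -n

# Number of file descriptors currently open by a process (Linux only).
# $$ is this shell's pid; replace it with the pid you care about.
ls /proc/$$/fd | wc -l
```

Comparing the two numbers tells you how much headroom the process has before it starts failing with "too many open files".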
edit: There is official documentation for capacity planning when deploying QuestDB, which takes factors such as CPU, memory, and network capacity into consideration. For more information, see the capacity planning guide

Could not allocate a new page for database ‘TEMPDB’ because of insufficient disk space in filegroup ‘DEFAULT’

Our ETL developer reports that they have been running our weekly and daily processes on ADW consistently. While for the most part they execute without exception, I am now getting this error:
“Could not allocate a new page for database ‘TEMPDB’ because of insufficient disk space in filegroup ‘DEFAULT’. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.”
Is there a limit on TEMPDB space associated with the DWU setting?
The database is limited to 100TB (per the portal) and not full.
Azure SQL Data Warehouse does allocate space for a tempdb, at around 399 GB per 100 DWU. Reference here.
What DWU are you using at the moment? Consider temporarily raising your DWU (aka service objective), then lowering it when your batch process is finished, or refactoring your job to be less dependent on tempdb.
It might also be worth checking your workload for anything like cartesian products, excessive sorting, over-dependency on temp tables etc to see if any optimisation can be done.
Have a look at the Explain Plans for your code and see whether you have a lot more data movement going on than you expect. If you find that one query moves a lot more data into Q tables, you can probably tune it to avoid the data movement (which may mean redesigning tables to distribute on a different key).
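As a rough sanity check, the ~399 GB per 100 DWU figure scales linearly, so you can estimate your tempdb headroom at a given service objective. A sketch (the helper name is made up; the per-100-DWU figure is the approximate one from the docs):

```shell
# Approximate tempdb allocation in GB at a given DWU,
# assuming ~399 GB per 100 DWU (approximate figure from the Azure docs).
tempdb_gb() { echo $(( $1 * 399 / 100 )); }

tempdb_gb 100    # DW100  -> 399
tempdb_gb 1000   # DW1000 -> 3990
```

If your batch regularly blows past that number, raising the DWU buys you more tempdb along with more compute.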

How Hadoop calculate physical memory and virtual memory during a job execution

I have a few queries related to the counters used in Hadoop to display memory usage.
A map reduce job executed on a cluster gives me the counter values mentioned below. The input file used is just a few KB, but these counters show 35 GB and 420 GB of usage.
PHYSICAL_MEMORY_BYTES=35110662144
VIRTUAL_MEMORY_BYTES=420121841664
For another job on the same input file it shows 309 MB (physical) and 3 GB (virtual) usage
PHYSICAL_MEMORY_BYTES=309526528
VIRTUAL_MEMORY_BYTES=3435827200
The first job is more CPU-intensive and creates more objects than the other one, but its reported usage still seems very high.
So I just wanted to know how this memory usage is calculated. I tried going through some posts and found an overview in the issue that introduced these counters (https://issues.apache.org/jira/i#browse/MAPREDUCE-1218), but couldn't find how they are calculated. It gives me an idea of how these values are passed to the JobTracker, but no information on how they are determined. If someone could give some insight on this it would be really helpful.
You can find a few references here and here. The second link in particular covers map and reduce jobs and how slots are decided based on memory allocations. Happy learning!
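For some intuition on where the numbers come from: on Linux, Hadoop's process-tree monitoring samples each task's `/proc` entry (and those of its child processes) and sums the resident and virtual sizes, which is why a job that forks helpers or maps large files can report far more virtual memory than its input size suggests. You can eyeball the same per-process figures yourself (a sketch; `$$` is just the current shell's pid, substitute a task's pid to inspect it):

```shell
# VmRSS ~ physical (resident) memory, VmSize ~ virtual memory, as
# reported under /proc on Linux. Per-process values like these, summed
# over a task's process tree, are what feed counters such as
# PHYSICAL_MEMORY_BYTES and VIRTUAL_MEMORY_BYTES.
grep -E 'VmRSS|VmSize' /proc/$$/status
```

Virtual size counts every mapping (shared libraries, the JVM heap reservation, mmapped files), so it is routinely an order of magnitude larger than resident size.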

Performance Impact on Elastic Map reduce for Scale Up vs Scale Out scenario's

I just ran Elastic Map reduce sample application: "Apache Log Processing"
Default:
When I ran with default configuration (2 Small sized Core instances) - it took 19 minutes
Scale Out:
Then I ran it with configuration: 8 small sized core instances - it took 18 minutes
Scale Up:
Then I ran it with configuration: 2 large sized core instances - it took 14 minutes.
What do you think about the performance of scale up vs scale out when we have bigger data sets?
Thanks.
I would say it depends. I've usually found the raw processing speed to be much better using m1.large and m1.xlarge instances. Other than that, as you've noticed, the same job will probably take the same amortized or normalized instance hours to complete.
For your jobs, you might want to experiment with a smaller sample data set at first and see how much time that takes, and then estimate how much time it would take for the full job using large data sets to complete. I've found that to be the best way to estimate the time for job completion.

Anyone benchmarked virtual machine performance for build servers?

We have been trying to use virtual machines for build servers. Our build servers are all running WinXP32 and we are hosting them on VMware Server 2.0 running on Ubuntu 9.10. We build a mix of C, C++, python packages, and other various deployment tasks (installers, 7z files, archives, etc). Managing the VMware-hosted build servers is great. We can move them around, share system resources on one large 8-core box, remotely access the systems through a web interface, and just basically manage things better.
But the problem is that the performance compared to using a physical machine seems to range from bad to horrid depending upon what day it is. It has proven very frustrating. Sometimes the system load for the host will go above 20 and sometimes it will be below 1. It doesn't seem to be based on how much work is actually being done on the systems. I suspect there is a bottleneck in the system, but I can't seem to figure out what it is. (Most recent suspect is I/O, but we have a dedicated 1TB 7200RPM SATA 2 drive with 32MB of cache doing nothing but the virtual machines. Seems like enough for 1-2 machines. All other specs seem to be enough too: 8GB RAM, 2GB per VM, 8 cores, 1 per VM.)
So after exhausting everything I can think of, I wanted to turn to the Stack Overflow community.
Has anyone run or seen anyone else run benchmarks of software build performance within a VM.
What should we expect relative to a physical system?
How much performance are we giving up?
What hardware / vm server configurations are people using?
Any help would be greatly appreciated.
Disk IO is definitely a problem here; you just can't do any significant amount of disk IO activity when you're backing it with a single spindle. The 32MB cache on a single SATA drive is going to be saturated just by your Host and a couple of Guest OS's ticking over. If you look at the disk queue length counter in your Ubuntu Host OS you should see that it is high (anything above 1 on this system with 2 drives for any length of time means something is waiting for that disk).
When I'm sizing infrastructure for VMs I generally take a ballpark of 30-50 IOPS per VM as an average, and that's for systems that do not exercise the disk subsystem very much; for systems with very little IO activity you can drop down a bit. But the IO patterns for build systems will be heavily biased towards lots of very random, fairly small reads. To compound the issue, you want a lot of those VMs building concurrently, which will drive contention for the disk through the roof. Overall disk bandwidth is probably not a big concern (that SATA drive can probably push 70-100MB/sec when the IO pattern is totally sequential) but when the files are small and scattered you are bound by the limits of the spindle, which will be about 70-100 IO per second on a 7.2k SATA drive. A host OS running a Type 2 Hypervisor like VMware Server with a single guest will probably hit that under a light load.
My recommendation would be to build a RAID 10 array with smaller and ideally faster drives. 10k SAS drives will give you 100-150 IOPs each so a pack of 4 can handle 600 read IOPS and 300 write IOPs before topping out. Also make sure you align all of the data partitions for the drive hosting the VMDK's and within the Guest OS's if you are putting the VM files on a RAID array. For workloads like these that will give you a 20-30% disk performance improvement. Avoid RAID 5 for something like this, space is cheap and the write penalty on RAID 5 means you need 4 drives in a RAID 5 pack to equal the write performance of a single drive.
One other point I'd add is that VMware Server is not a great Hypervisor in terms of performance, if at all possible move to a Type 1 Hypervisor (like ESXi v4, it's also free). It's not trivial to set up and you lose the Host OS completely so that might be an issue but you'll see far better IO performance across the board particularly for disk and network traffic.
Edited to respond to your comment.
1) To see whether you actually have a problem on your existing Ubuntu host.
I see you've tried dstat; I don't think it gives you enough detail to understand what's happening, but I'm not familiar with using it so I might be wrong. iostat will give you a good picture of what is going on - this article on using iostat will help you get a better picture of the actual IO pattern hitting the disk - http://bhavin.directi.com/iostat-and-disk-utilization-monitoring-nirvana/ . The avgrq-sz and avgqu-sz columns are the raw indicators of how big requests are and how many are queued. High numbers are generally bad, but what is actually bad varies with the disk type and RAID geometry. What you are ultimately interested in is seeing whether your disk IOs are spending more (and increasing) time in the queue than actually being serviced. The calculation (await - svctm) / await * 100 really tells you whether your disk is struggling to keep up: above 50% your IOs are spending as long queued as being serviced by the disk(s), and if it approaches 100% the disk is getting totally slammed. If you do find that the host is not actually stressed and VMware Server is just lousy (which it could well be, I've never used it on a Linux platform) then you might want to try one of the alternatives like VirtualBox before you jump onto ESXi.
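That saturation check is easy to script once you have `await` and `svctm` from `iostat -x`. A sketch with made-up sample values (column positions vary between sysstat versions, so pull the real numbers from the headed output rather than hard-coding field indexes):

```shell
# Queue-vs-service saturation: (await - svctm) / await * 100.
# The sample values below are made up; substitute figures for your
# device from `iostat -x <device> 1`.
await=12.0
svctm=4.0
awk -v a="$await" -v s="$svctm" \
    'BEGIN { printf "time spent queued: %.1f%%\n", (a - s) / a * 100 }'
# -> time spent queued: 66.7%
```

In this example two thirds of each IO's latency is queueing rather than service time, which by the rule of thumb above means the disk is well past struggling.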
2) To figure out what you need.
Baseline the IO requirements of a typical build on a system that has good\acceptable performance - on Windows look at the IOPS counters - Disk Reads/sec and Disk Writes/sec counters and make sure the average queue length is <1. You need to know the peak values for both while the system is loaded, instantaneous peaks could be very high if everything is coming from disk cache so watch for sustained peak values over the course of a minute or so. Once you have those numbers you can scope out a disk subsystem that will deliver what you need. The reason you need to look at the IO numbers is that they reflect the actual switching that the drive heads have to go through to complete your reads and writes (the IO's per second, IOPS) and unless you are doing large file streaming or full disk backups they will most accurately reflect the limits your disk will hit when under load.
Modern disks can sustain approximately the following:
7.2k SATA drives - 70-100 IOPS
10k SAS drives - 120-150 IOPS
15k SAS drives - 150-200 IOPS
Note these are approximate numbers for typical drives and represent the saturated capability of the drives under maximum load with unfavourable IO patterns. This is designing for worst case, which is what you should do unless you really know what you are doing.
RAID packs allow you to parallelize your IO workload and with a decent RAID controller an N drive RAID pack will give you N*(Base IOPS for 1 disk) for read IO. For write IO there is a penalty caused by the RAID policy - RAID 0 has no penalty, writes are as fast as reads. RAID 5 requires 2 reads and 2 writes per IO (read parity, read existing block, write new parity, write new block) so it has a penalty of 4. RAID 10 has a penalty of 2 (2 writes per IO). RAID 6 has a penalty of 5. To figure out how many IOPS you need from a RAID array you take the basic read IOPS number your OS needs and add to that the product of the write IOPS number the OS needs and the relevant penalty factor.
3) Now work out the structure of the RAID array that will meet your performance needs
If your analysis of a physical baseline system tells you that you only need 4\5 IOPS then your single drive might be OK. I'd be amazed if it does but don't take my word for it - get your data and make an informed decision.
Anyway let's assume you measured 30 read IOPS and 20 write IOPS during your baseline exercise and you want to be able to support 8 instances of these build systems as VM's. To deliver this your disk subsystem will need to be able to support 240 read IOPS and 160 write IOPS to the OS. Adjust your own calculations to suit the number of systems you really need.
If you choose RAID 10 (and I strongly encourage it: RAID 10 sacrifices capacity for performance, but when you design for enough performance you can size the disks to get the capacity you need, and the result will usually be cheaper than RAID 5 unless your IO pattern involves very few writes), your disks need to be able to deliver 560 IOPS in total (240 for read, and 320 for write to account for the RAID 10 write penalty factor of 2).
This would require:
- 4 15k SAS drives
- 6 10k SAS drives (round up, RAID 10 requires an even number of drives)
- 8 7.2k SATA drives
If you were to choose RAID 5 you would have to adjust for the increased write penalty and will therefore need 880 IOPS to deliver the performance you want.
That would require:
- 6 15k SAS drives
- 8 10k SAS drives
- 14 7.2k SATA drives
You'll have a lot more space this way but it will cost almost twice as much because you need so many more drives and you'll need a fairly big box to fit those into. This is why I strongly recommend RAID 10 if performance is any concern at all.
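The sizing arithmetic above is simple enough to script. A sketch (the function name is made up; the penalties are the ones given earlier: RAID 0 = 1, RAID 10 = 2, RAID 5 = 4, RAID 6 = 5):

```shell
# Required array IOPS = read IOPS + write IOPS * RAID write penalty.
required_iops() {   # usage: required_iops <read_iops> <write_iops> <penalty>
    echo $(( $1 + $2 * $3 ))
}

# The worked example: 8 build VMs at 30 read / 20 write IOPS each.
required_iops $(( 8 * 30 )) $(( 8 * 20 )) 2   # RAID 10 -> 560
required_iops $(( 8 * 30 )) $(( 8 * 20 )) 4   # RAID 5  -> 880
```

Divide the result by the per-drive IOPS figure for your drive class (and round up to the RAID geometry's minimum) to get drive counts like those listed above.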
Another option is to find a good SSD (like the Intel X-25E, not the X-25M or anything cheaper) that has enough storage to meet your needs. Buy two and set them up for RAID 1, SSD's are pretty good but their failure rates (even for drives like the X-25E's) are currently worse than rotating disks so unless you are prepared to deal with a dead system you want RAID 1 at a minimum. Combined with a good high end controller something like the X-25E will easily sustain 6k IOPS in the real world, that's the equivalent of 30 15k SAS drives. SSD's are quite expensive per GB of capacity but if they are used appropriately they can deliver much more cost effective solutions for tasks that are IO intensive.