My Test Kitchen instance is configured with 256 MB of RAM, but it is running very slowly. My questions are:
How can I configure it to increase the RAM size?
Is there a tutorial or post where I can find a solution?
It varies based on the Test Kitchen driver. Here's an example for Vagrant:
driver:
  name: vagrant
  customize:
    memory: 1024
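Note that indentation matters in .kitchen.yml, and the new memory size only applies to a freshly created VM, so recreate the instance after editing the file. A rough sketch (run from the project directory; adjust to your instance names):
kitchen destroy
kitchen converge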
Related
I have a Spring MVC web application that I want to deploy in the cloud. It could be AWS, Azure, or Google Cloud. I have to find out how much RAM and hard disk space are needed. Currently I have deployed the application on my local machine in Tomcat. When I go to localhost:8080 and click the server status button, under the OS header it tells me:
Physical memory: 3989.36 MB
Available memory: 2188.51 MB
Total page file: 7976.92 MB
Free page file: 5233.52 MB
Memory load: 45
Under the JVM header it tells me:
Free memory: 32.96 MB
Total memory: 64.00 MB
Max memory: 998.00 MB
How do I infer the required RAM and hard disk size from these data? There must be some empirical formula, like OS memory + factor * jvm_size, where I assume jvm_size is the memory used by the applications. And when deploying to the cloud we will not deploy all those example applications.
These stats are from your local machine in an idle state with no traffic, so the application is naturally consuming far fewer resources than it would under load.
You cannot size the cloud machine purely from local memory stats, though they do give a rough idea of the minimum resources the application consumes.
A better way is to perform a load test: if you are expecting a large number of users, design the deployment based on the load-test results.
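For example, a rough first pass with Apache Bench against your local Tomcat while you watch memory on the server status page (the URL path here is an assumption; point it at one of your app's real pages):
ab -n 1000 -c 50 http://localhost:8080/yourapp/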
The short way is to read the published requirements or recommended system specs of the product you are deploying, for example:
Memory
256 MB of RAM minimum, 512 MB or more is recommended. Each user
session requires approximately 5 MB of memory.
Hard Disk Space
About 100 MB free storage space for the installed product (this does
not include WebLogic Server or Apache Tomcat storage space). Refer to
the database installation instructions for recommendations on database
storage allocation.
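As a back-of-envelope check against those quoted numbers (the 200 concurrent sessions below is just an assumed example):
# 512 MB recommended base + 5 MB per concurrent session
echo $((512 + 200 * 5))   # 1512 MB, so roughly 2 GB of RAM is a sensible floor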
You can look further here and here.
So if you are designing for staging or development, you can choose one of these two AWS instances:
t2.micro
1 vCPU, 1 GB RAM
t2.small
1 vCPU, 2 GB RAM
https://aws.amazon.com/ec2/instance-types/
For some reason, download traffic from a virtual machine on GCP (Google Cloud Platform) running Debian 9 is limited to about 50 KB/s. Upload seems to be fine, in line with my local uplink.
It is the same with scp or an HTTPS download. Any suggestions on what might be wrong, or where to look?
Machine type
n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform
Intel Skylake
Zone
europe-west4-a
Network interfaces
Premium tier
Thanks,
Mihaelus
Simple test:
wget https://hrcki.primasystems.si/Nova/assets/download.test.html
Output:
--2018-10-18 15:21:00--  https://hrcki.primasystems.si/Nova/assets/download.test.html
Resolving hrcki.primasystems.si (hrcki.primasystems.si)... 35.204.252.248
Connecting to hrcki.primasystems.si (hrcki.primasystems.si)|35.204.252.248|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 541422592 (516M) [text/html]
Saving to: `download.test.html.1'
0% [] 1,073,152 48.7K/s  eta 2h 59m
It is always good to minimize variables when trying to diagnose. While it is unlikely that the use of HTTP is why things are this slow, you might consider using netperf or iperf3 to measure TCP bulk-transfer performance between your VM in GCP and your local system. You can do that either "by hand" or via PerfKit Benchmarker: https://cloud.google.com/blog/products/networking/perfkit-benchmarker-for-evaluating-cloud-network-performance
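For example, a quick TCP throughput check with iperf3 (assuming iperf3 is installed on both ends and the VM's firewall allows the default port 5201; replace the placeholder with the VM's external address):
# on the GCP VM
iperf3 -s
# on your local machine
iperf3 -c <vm-external-ip>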
It can be helpful to have packet traces - from both ends when possible - to look at. You want the packet traces to be started before the test; it is important to see the packets used to establish the TCP connection(s). They do not need to be "full packet" traces, and often you don't want them to be. Capturing just the first 96 bytes of each packet would be sufficient for this sort of investigation.
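A minimal capture sketch with tcpdump, started on each end before the transfer (the interface name is an assumption; check yours with ip link):
sudo tcpdump -i eth0 -s 96 -w transfer.pcap host 35.204.252.248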
You might also consider taking snapshots of the network statistics offered by the OSes running in your GCP VM and local system. For example, if running *nix, take a snapshot of "netstat -s" before and after the test. And perhaps run a traceroute from each end towards the other.
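Something like the following, on each end:
netstat -s > netstat.before.txt
# ... run the wget or iperf3 test ...
netstat -s > netstat.after.txt
traceroute hrcki.primasystems.si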
Network statistics and packet traces, along with as many details about the two endpoints as possible are among the sorts of things support organizations are likely to request when looking to help resolve an issue of this sort.
I'm trying to determine the optimal settings for my ColdFusion PRODUCTION server. The server has the following specs.
ColdFusion: Enterprise Version 10
O/S: Windows Server 2012 R2 Standard
Processor: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
Installed Memory (RAM): 20.0 GB
System Type: 64-bit Operating System, x64-based processor
My Java and JVM settings from the CFIDE are:
Minimum Heap Size (in MB): 2048
Maximum Heap Size (in MB): 4096
JVM Arguments
-server -XX:MaxPermSize=192m -XX:+UseParallelGC -Xbatch -Dcoldfusion.home={application.home} -Dcoldfusion.rootDir={application.home} -Dcoldfusion.libPath={application.home}/lib -Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true -Dcoldfusion.jsafe.defaultalgo=FIPS186Random
I have multiple websites running on this production server, all of which use ColdFusion. The database server is completely separate, so all that this server is responsible for is the ColdFusion application and web server processes.
The websites are completely data-driven, all pulling from the database located on my production database server. Lately, I've been seeing the ColdFusion service lock up as it maxes out the CPU. The memory is stable; it's only the CPU that is maxing out.
Can anyone make suggestions as to how I can tune it to improve overall performance while reducing strain on the CPU?
Java Version
java version "1.8.0_73" Java(TM) SE Runtime Environment (build
1.8.0_73-b02) Java HotSpot(TM) Client VM (build 25.73-b02, mixed mode)
Thank you!
Are you running all of these sites on a single instance of ColdFusion? If so, I would recommend running multiple instances of CF. Each instance can run the same JVM settings given your total available memory.
Minimum Heap Size (in MB): 2048
Maximum Heap Size (in MB): 4096
So that's a max of about 16GB of memory allocated to a total of 4 instances of CF. Then you balance out the volume of sites run on each instance based on site usage. You might have one that needs its own instance and the rest can be spread across the other three.
It's also possible to run all of the sites on all of the instances, using a load balancer to pass requests from one instance to another. Either approach should ensure that one site doesn't cause the rest to run poorly or not at all.
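For reference, those CF Administrator heap settings correspond to the -Xms/-Xmx flags in each instance's jvm.config; a sketch only, since the path below is just the typical CF10 Enterprise layout and you would keep the rest of your existing arguments:
# <cf_root>/<instance_name>/bin/jvm.config
java.args=-server -Xms2048m -Xmx4096m -XX:MaxPermSize=192m -XX:+UseParallelGC ...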
I am familiar with Docker, Rkt, and LXD, but if I did not have the ability to install all these tools, what would be the basic mechanisms to provide isolation of CPU, memory, and disk for a particular process?
CPU - I want to say that only 1 socket of the two is usable by this process
Memory - I don't want this process to use more than 10GB memory
Disk - I don't want the process to use more than 100GB of disk and have visibility (ls should not list it) of files that are not created by this process
I think installing Docker, Rkt, and so on is a very heavyweight solution for something as basic as what I am trying to accomplish.
Is cgroups the underlying API I should tap into to get what I need? If so, is there a good book for learning about cgroups?
I am running on EC2 - RHEL and Ubuntu both.
See the man page for cgroups(7) for an introduction; the full documentation of the cgroup interface is maintained in the Linux kernel:
cgroup v1
cgroup v2
On top of that, on a distribution with systemd and the cgroup v2 interface, cgroup features should be used via systemd and not directly. See also the man page for systemd.resource-control.
For distribution specific information, see:
RHEL 6 Resource Management Guide
RHEL 7 Resource Management Guide
Quick answers to your questions
I want to say that only 1 socket of the two is usable by this process
This can be done via the cpuset controller from cgroup v1 (available on both RHEL 6 and RHEL 7).
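A minimal cgroup v1 sketch (assumes the usual /sys/fs/cgroup mount and that CPUs 0-7 sit on socket 0; check lscpu for your actual layout, and my_process is a placeholder):
mkdir /sys/fs/cgroup/cpuset/myjob
echo 0-7 > /sys/fs/cgroup/cpuset/myjob/cpuset.cpus   # CPU IDs on socket 0 (example)
echo 0 > /sys/fs/cgroup/cpuset/myjob/cpuset.mems     # memory node 0
echo $$ > /sys/fs/cgroup/cpuset/myjob/tasks          # move this shell into the group
exec ./my_process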
I don't want this process to use more than 10GB memory
See the memory controller of the cgroup v1 interface or the MemoryLimit option of the systemd resource-control interface.
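For example, with the v1 memory controller directly, or via a transient systemd unit (my_process is a placeholder):
mkdir /sys/fs/cgroup/memory/myjob
echo 10G > /sys/fs/cgroup/memory/myjob/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/myjob/tasks
# or, with systemd:
systemd-run --scope -p MemoryLimit=10G ./my_process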
I don't want the process to use more than 100GB of disk
This is outside what cgroups control; use disk quotas instead.
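For example, a per-user quota on ext4 with user quotas enabled in /etc/fstab (appuser and /data are placeholders; limits are in 1 KiB blocks):
setquota -u appuser 0 104857600 0 0 /data   # 100 GB hard block limit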
have visibility (ls should not list it) of files that are not created by this process
This is outside cgroups functionality; use filesystem access rights, filesystem namespaces, or the PrivateTmp systemd service option, depending on your use case.
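A rough sketch with a private mount namespace (my_process is a placeholder): the process gets its own tmpfs on /tmp, so files it writes there are not visible to ls from outside, and vice versa:
sudo unshare --mount sh -c 'mount -t tmpfs tmpfs /tmp && exec /usr/local/bin/my_process'
For a systemd-managed service, setting PrivateTmp=true in the unit achieves the same for /tmp and /var/tmp.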
I have a Horizon app running on DigitalOcean on a 1 GB RAM machine.
I try to set permissions with:
hz set-schema ./.hz/schema.toml
but I get the following error:
error: rethinkdb stderr: warn: Cache size does not leave much memory for server and query overhead (available memory: 779 MB).
Tried to use "cache-size" option in rethinkdb config file, but still getting the same error (I restarted the service).
Do I need to enlarge my digitalocean machine or I can do something with the existing one?
It would be better to give RethinkDB more memory (I usually give it at least 2 gigs), but if your load is light that size should be fine. I think the recommendation is to make the cache size about half your available memory, but you can also lower it if that turns out to be unstable.
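For reference, the setting the warning refers to looks like this; the config path is the typical Debian/Ubuntu package location, and 400 MB is roughly half of the 779 MB reported as available:
# /etc/rethinkdb/instances.d/<instance>.conf
cache-size=400
# or when launching manually:
rethinkdb --cache-size 400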