Minimal VPS configuration for vibe.d?

What is the minimal VPS configuration for vibe.d? It looks like vibe.d's memory footprint is very small, but building it can take much more RAM. So what is the minimum?

I'm able to build and run the basic vibe-d application using 512 MB of RAM. However, this dips down to about 256 MB when using dub build --build-mode=singleFile. As @greenify said in his comment, the best option is to build on a development machine and copy the binary across to your server to be run. However, if you want to do all of the building on your VPS, then I would say 512 MB of RAM is a good amount to have.
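For reference, both options come down to one command each; a minimal sketch (the binary name and server address are placeholders):

    # Compile one file at a time to keep peak memory near the ~256 MB figure above
    dub build --build-mode=singleFile
    # Or build a release binary on the development machine and copy only that to the VPS
    dub build --build=release
    scp ./my-vibe-app user@my-vps:/usr/local/bin/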

Related

Can I run Eclipse Che on a small VPS?

I have a relatively small VPS that I use as a remote dev environment:
1 vCore
2 MB of RAM
I plan to have up to 3 dev environments on the VPS. I don't need to run two simultaneously, however.
The biggest project is roughly the same size as a small Magento eShop. It actually runs on Python and Django.
The environment runs on Ubuntu + Nginx + uWSGI, but this could be changed.
I can code remotely in the VPS using Eclipse RSE or Codeanywhere.
However, Eclipse Che offers very interesting functionality for this type of remote environment.
The main risk is that the VPS configuration is very small. It is exactly the minimal configuration stated in the doc. I don't know if I can use it this way without making things really slow...
My instinct is "no": I think 2 MB of RAM is not sufficient, given that the Che workspace server is itself a Java application that needs about 750 MB of RAM. If you are running the workspace server somewhere else and just using the VPS as a compute node for container workspaces, I suspect the answer is still no, as your container OS and language runtime will need more than 2 MB of RAM. If you meant 2 GB of RAM, it is still difficult, but maybe feasible, to run a workspace with a full Django environment on there, using a workspace server running on a separate host.
It sure would be nice to see if you can make it work, though - and I would love to hear if you make it work!
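If you do try it, it is worth sanity-checking the memory budget first; the figures below only restate the numbers from this answer (the container overhead is a rough assumption):

    # Rough budget for running everything on the VPS itself:
    #   Che workspace server (Java)              ~750 MB
    #   workspace container OS + Python/Django   a few hundred MB more (assumption)
    # Compare against what the VPS actually reports:
    free -h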

Job scheduling on a Windows PC

When I submit a job, I use the following format.
My questions are as follows:
1) Can I use my personal PC to execute simulations, e.g. using VirtualBox with one of the Linux distributions?
2) Is it possible to execute the .out file on a Windows machine?
Sorry for my poor questions; I am not an expert on Linux systems.
Thanks.
However, now I don't have access to execute my job files on the LSF server.
You should talk with your cluster administrator about this. Transferring data from a laptop to a cluster is a common task; I'm sure they have best practices for it.
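As a typical example of such a transfer (the hostname and paths are placeholders; on Windows, tools like WinSCP or pscp perform the same operation):

    # Copy an input file up to the cluster, and results back down
    scp ./input.dat user@cluster.example.edu:~/jobs/input/
    scp user@cluster.example.edu:~/jobs/output/results.out ./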
1) Can I use my personal PC to execute simulations, e.g. using VirtualBox with one of the Linux distributions?
Impossible to say with the limited information in the question. For example, if your simulation software is licensed, then your laptop may not be eligible to run it.
2) Is it possible to execute the .out file on a Windows machine?
Again, with the limited information in the question, it's impossible to say. But in general, Linux binaries cannot be run directly on Windows.
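A quick way to see the mismatch for yourself from any Linux shell, such as the VirtualBox VM suggested in question 1 (the file name is a placeholder):

    # Inspect the binary format of the build output
    file simulation.out
    # typical output: "ELF 64-bit LSB executable, x86-64 ..." - an executable format Windows cannot load directly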

GitLab + Laravel 5, faster build, maybe without Docker?

Hi there!
Please excuse me for not knowing much about GitLab; I'll sum up what I wish for, and please tell me whether it is possible. If yes, please point me to a how-to. :slight_smile:
I wish to implement GitLab to store our repos, which are mainly Laravel 5 projects. I also wish to run some tests on them, like PHPUnit tests, Behat, etc. For this, I currently use the Docker ability of GitLab to build a project. It puts the files into a Docker container, where I have to run composer install and a few other things. But this takes so long! It just slows down development.
Is it possible to run "composer install", "npm install", and the other things we need to set up the website ONCE on the repository, and from then on do only the testing?
After you set up Docker to cache the dependency downloads, your next step is to move your runners to a new host, or give your current host more RAM.
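For the caching step, a minimal .gitlab-ci.yml sketch (the cache key variable, image, and paths are assumptions for a Laravel 5 project; the image must provide composer and npm, or you install them in before_script):

    cache:
      key: "$CI_COMMIT_REF_SLUG"   # keep one cache per branch
      paths:
        - vendor/                  # composer dependencies
        - node_modules/            # npm dependencies

    test:
      image: php:7.0-cli
      script:
        - composer install --prefer-dist --no-interaction
        - npm install
        - vendor/bin/phpunit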
I'm using GitLab's Omnibus installation and my GitLab instance uses 1.7GB of RAM with very little traffic, and my runners use up to 1GB when running some of my builds and tests. If your GitLab instance and runners have a similar memory footprint, then your machine will start to use the swap memory during tests and that will really slow down your runners.
Also, your runners likely have high CPU usage when running tests; add on top of that the CPU required while your system is using swap memory, and you start to slow down there too.
I would recommend moving the runners to a different machine, for performance and security reasons. If you can't do that, then at least increase the RAM to 3GB.

Is there a way to test the PsExec tool on a single machine?

I want to run a PsExec demo that simulates running a command on multiple machines in a network.
Would I need a VMware tool? I'm looking at Workstation 12 at the moment.
No; just ensure your firewall rules are correct, the Sysinternals tools are installed on each machine, file and printer sharing is enabled, and your antivirus product does not block PsExec.
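For a single-machine demo, you can point PsExec at the local machine and treat it like a remote host; the machine names and hosts file below are placeholders:

    rem Run a command "remotely" against the local machine (elevated prompt)
    psexec \\localhost cmd /c ver
    rem The same syntax fans out to several machines, or to a file listing them
    psexec \\machine1,machine2 cmd /c ver
    psexec @hosts.txt cmd /c ver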
VMware Tools gives you things like drag and drop, shared drives, and improved graphics; on the code side there is PowerCLI, which can do things like Invoke-VMScript, allowing you to run commands on other machines. This might be the alternative you want to use.

Upgrading the JRE used by ColdFusion

I have a ColdFusion 8.1 application. It gets heavy use, and I see jrun.exe reaching very high memory usage in Task Manager. This is a 32-bit Windows 2003 server. When JRun gets to around a gig of memory usage, ColdFusion will stop responding at some point. The logs are a little vague, but I start to see garbage collection and heap errors in the ColdFusion log. I assume that the JRE is running out of memory.
I have the max JVM heap set to 1.2 GB. After some experimenting, this seemed to be the largest amount I could allocate and still have ColdFusion start OK. I realize that going to 64-bit might solve the problem, but that is not an option at this time.
I am considering upgrading the JRE (it is at v6.x, dated pre-2008, though I don't know the exact version; I am using the JRE that came with ColdFusion 8.1). Has anyone gone through this? I assume it's just a matter of installing the new JRE and pointing ColdFusion to the new JRE directory in the ColdFusion server settings.
Thanks in advance,
don
It's EXTREMELY easy to do.
1) Download the Java SE Development Kit and install it as normal.
2) Open up the jvm.config for CF in a text editor; it is located in c:\coldfusion8\runtime\bin.
3) Comment out the existing java.home line by putting a "#" at the beginning of the line, then add a new java.home line below it pointing to your JVM installation.
As an example, my java.home and jvm.config look like this:
java.home=C:/Program Files/Java/jdk1.6.0_11/jre
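With the old line from step 3 left in place as a comment, that section of jvm.config reads as follows (the commented-out path is a guess at the stock CF8 location):

    # java.home=C:/ColdFusion8/runtime/jre
    java.home=C:/Program Files/Java/jdk1.6.0_11/jre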
4) Restart the CF services.
As a bonus, you can run JavaRa and free up some space by deleting all the old versions of the JRE.
Adobe has a Knowledge Base that covers issues like this. Check out http://www.adobe.com/go/2d547983 for instructions.
Sean Corfield has an article that provides some info on using Java 6 with ColdFusion 8 here:
http://corfield.org/blog/index.cfm/do/blog.entry/entry/Java_6_and_ColdFusion_8
As long as you install 1.6.0_10 or greater, you should be fine. You might check out ColdFusionBloggers.org from time to time in case other JVM issues come to light in the future.
You didn't specify whether or not you were using the stand-alone server instance or a multi-server configuration. If you're getting a heavy volume of traffic and have a dual core machine with a lot of physical memory, I would consider looking into the multi-server set-up for CF8 and putting together a cluster with load balancing. This will help to distribute your traffic across several instances of CF8 and, assuming you have a beefy server, make better use of the physical resources that you have available.
-Rick
Consider moving to Java 7. Java 7 has the G1 garbage collector, which is better at memory deallocation.
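If you try that, G1 is enabled through the same jvm.config discussed above; a sketch, with the heap size taken from the question (the rest of java.args varies by install):

    java.args=-server -Xmx1200m -XX:+UseG1GC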
If you are having out-of-memory issues, it could be because:
Functions are not using var or local scope.
<cfdump> is used in a production system.
Sessions are too large or are not set to expire in a reasonable amount of time.
Queries are way too large; SELECT * can cause that.
There is an excessive number of Query of Queries.
The site is connecting to a slow database; resources are held until the DB returns data.
The DSN has the data buffer set to more than 64k.