I'm trying to set up my Meteor apps on AWS EB and I've successfully deployed two. The weird thing is that one of them uses 30% CPU when idle, as opposed to 0.3% on the other.
Both are running METEOR@1.4.2.3, and both are on t2.large EC2 instances. I previously had the apps on Galaxy without any issues (I had to make the switch because we got a generous amount of credits from AWS).
The only difference is that the app idling at 30% loads Meteor settings on startup, while the other one doesn't use any Meteor settings, since it's just used to connect to the DB and display info (as a microservice).
Are you issuing this
meteor build --server ${ROOT_URL} --verbose --directory ${BUILD_NODEJS_DIR} --mobile-settings build/${SETTINGS_JSON_FILE}
to bundle your code ready for deployment... then, over on your cloud provider, are you executing your Meteor app by calling
node main.js
This adheres to the current Meteor deployment standards (version 1.4.2.3), which I am using to deploy a Meteor app onto AWS EC2, and I see no high CPU usage when it's idle.
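For reference, the end-to-end sequence looks roughly like this (a minimal sketch; the URL, directory, and settings file are placeholders for your own setup):

export ROOT_URL=https://myapp.example.com
meteor build --server ${ROOT_URL} --directory .deploy
cd .deploy/bundle
# install the server's npm dependencies inside the bundle
(cd programs/server && npm install)
# settings are supplied at runtime via the METEOR_SETTINGS environment variable
export METEOR_SETTINGS="$(cat settings.json)"
node main.js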
Silly me, I was using Node version 6+, which isn't fully supported by Meteor yet; switching to 4.6.1 did the trick.
We push a Java app to Cloud Foundry using cf push with the manifest file below:
applications:
- name: xyz-api
  instances: 1
  memory: 1G
  buildpack: java_buildpack_offline
  path: target/xyz-api-0.1-SNAPSHOT.jar
I understand that PaaS (e.g. Cloud Foundry) is a layer on top of IaaS (e.g. vCenter hosting Linux and Windows VMs).
In the manifest file, the buildpack just specifies the userspace runtime libraries required to run the app.
Coming from a non-cloud background, and reading this manifest file, I would like to understand...
1) How can I tell which operating system (OS) environment an app is running in? On which operating system...
2) How is an app running on a BOSH instance different from a Docker container?
1) How can I tell which operating system (OS) environment an app is running in? On which operating system...
The stack determines the operating system on which your app will run. There is a stack attribute in the manifest or you can use cf push -s to indicate the stack.
You can run cf stacks to see all available stacks.
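For example, you can pin the stack either in the manifest or at push time (stack names vary by environment, so treat cflinuxfs3 here as a placeholder):

applications:
- name: xyz-api
  stack: cflinuxfs3

cf push xyz-api -s cflinuxfs3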
In most environments at the time of writing, you will have cflinuxfs2. This is Ubuntu Trusty 14.04. It will be replaced by cflinuxfs3 which is Ubuntu Bionic 18.04, because Trusty is only supported through April of 2019. You will always have some cflinuxfs* stack though, the number will just vary depending on when you read this.
In some environments you might also have a Windows based stack. The original Windows based stack is windows2012r2. This is quite old as I write this so you probably won't see it any more. What you're likely to see is windows2016 or possibly something even newer depending on when you read this.
If you need more control than that, you can always push a docker container. That would let you pick the full OS image for your app.
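A minimal sketch of such a push, assuming a hypothetical image myorg/xyz-api on Docker Hub:

cf push xyz-api --docker-image myorg/xyz-api:latest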
2) How is an app running on a BOSH instance different from a Docker container?
Apps running on Cloud Foundry aren't deployed by BOSH directly. The app runs in a container. The container is scheduled and run by Diego. Diego is a BOSH deployed VM. So there's an extra layer in there.
At the core, the difference between running your app on Cloud Foundry and running an app in a docker container is minimal. They both run in a Linux "container" which has limitations put on it by kernel namespaces & cgroups.
The difference comes in a.) how you build the container and b.) how the container is deployed.
With Cloud Foundry, you don't build the container. You provide your app to CF, and CF builds the container image based on the selected stack and the additional software added by buildpacks. The output in CF terminology is called a "droplet", but it is basically an OCI image (this will be even more so with buildpacks v3). When you need to upgrade or add new code, you just repeat the process and push again. The stack and buildpacks, which are automatically updated by the platform, will in turn provide you with a patched and up-to-date app image.
With Docker, you manually create your image, building it up from scratch or from some trusted base image. You add your own runtimes and application code. When you need to upgrade, it's on you to pull in updates from the base image and runtimes, or worse, to update your from-scratch image.
When it comes to deployment, CF handles this all for you automatically. It can run as many instances of your app as you'd like, and it will automatically place them so that your app is resilient to failures in the infrastructure and in CF.
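For instance, scaling out is a one-liner (reusing the xyz-api app from the manifest above):

cf scale xyz-api -i 3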
With Docker, that's on you or increasingly often on some other tool like Kubernetes.
Hope that helps!
As far as my understanding goes, the Kubernetes engine is meant for deploying applications that can be load balanced; for example, an application that unhashes a string. If pod-a is under high load, work would be offloaded to pod-b. Correct me if I am wrong here, because if this is false, my following question will not make sense.
After exploring it for a few hours, I can't seem to figure out how to deploy a C++ application to a Kubernetes cluster. How would I do so?
What I tried:
I tried to follow the guide Interactive Tutorial - Deploying an App; however, I couldn't understand how to get my C++ app into an image that could be deployed.
What the C++ application is:
At the moment it proxies TCP traffic to another HOST designated by the client's HOSTNAME. It is pretty much a reverse proxy; however, it is NOT an HTTP application.
Is Kubernetes the right choice?
Kubernetes is really useful for load balancing workloads, providing high availability in case of failure, speeding up test processes, increasing safety during production rollouts through different strategies, and increasing security through segregation.
However, not every kind of workload can take advantage of all the features Kubernetes introduces.
For example, if your application is built in such a way that it needs a stable amount of RAM and CPU, its code is likewise really stable, and you need merely one replica, then maybe Kubernetes and containers are not the best choice (even though you could perfectly well use them), and you should rather implement everything on a big monolithic server/virtual machine.
But if your app needs to be deployed across different cloud providers and should run for merely a few hours every day, then it can make good use of those features. Whenever you are willing to add a layer, make sure that you need the features it introduces; otherwise it is merely overhead.
Note that Kubernetes is not capable of splitting your workload on its own. As for what you mean by "If pod-a is on high load, it would be offloaded to pod-b": yes, this is possible, but you have to instruct it to do so.
Kubernetes takes care of running your pods, making sure they are scheduled on nodes with enough memory and CPU available according to your specifications. You can set up autoscaling procedures as well, to support periods of high workload or even to scale the cluster itself. Your application should be designed to support a divide-and-conquer pattern; otherwise you will likely end up with three nodes, one pod running on one node, two nodes idle, and overhead you could have avoided.
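For example, horizontal autoscaling has to be requested explicitly; a minimal sketch, assuming a deployment named hello whose pods have CPU requests set:

kubectl autoscale deployment hello --cpu-percent=80 --min=1 --max=5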
If your C++ application pod unhashes a string and a single request could consume all the resources of a node, Kubernetes will not "split" the initial workload and will not create more pods for you, scheduling them across the cluster! Of course you can achieve something similar, but it will not come for free, and you will likely need to modify your C++ code.
You can certainly take advantage of Kubernetes, and running your application on it is pretty easy, but you may have to modify something in the architecture to take full advantage of those features.
Deploy the C++ application
The process to deploy your application to Kubernetes is pretty standard: develop it locally, create a Docker image with all the libraries and components you need, test it locally, push it to a registry, and create the deployment in Kubernetes.
Let's say that you have all the resources needed to run your application, plus your executable file, in a local folder. Create the Dockerfile.
This is only an example to show the syntax; modify it to implement your application:
# Download base image, Ubuntu 16.04 (Xenial Xerus)
FROM ubuntu:16.04
# Update software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from the Ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor
# Define the environment variable
ENV nginx_vhost /etc/nginx/sites-available/default
[...]
# Enable php-fpm on the nginx virtualhost configuration
COPY default ${nginx_vhost}
[...]
RUN chown -R www-data:www-data /var/www/html
# Volume configuration
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
# Configure services and port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
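For a C++ application like yours, the image can be much simpler; a minimal sketch, assuming a self-contained executable named tcp-proxy that listens on port 9000 (both placeholders):

# Base image providing the runtime libraries your binary links against
FROM ubuntu:16.04
# Copy the pre-built executable into the image
COPY tcp-proxy /usr/local/bin/tcp-proxy
# Port the proxy listens on (placeholder)
EXPOSE 9000
CMD ["/usr/local/bin/tcp-proxy"]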
Build it by running:
export PROJECT_ID="$(gcloud config get-value project -q)"
docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
gcloud docker -- push gcr.io/${PROJECT_ID}/hello-app:v1
kubectl run hello --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port [port number if needed]
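Since your app speaks plain TCP, you will also want a Service in front of the pods; for example (same hello deployment as above, port 9000 as a placeholder):

kubectl expose deployment hello --type=LoadBalancer --port=9000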
More information is here.
This is a silly question, but I couldn't find an answer. I'm running a Rails app on JRuby, and I use Sidekiq to process background jobs. Do I really have to run Sidekiq in another instance of the JVM (is that what happens when running bundle exec sidekiq)?
JRuby consumes a lot of RAM, so this is not possible on my AWS t2.micro instance.
Due to high memory consumption, your AWS micro instance will eventually choke. There are a few ways to keep both the Ruby app and the Sidekiq background process running.
You can boot another EC2 micro instance to run Sidekiq. Your app instance and the Sidekiq server will share the same Redis instance, so your background processing won't be disrupted.
Another way is to boot up a free Heroku dyno.
Or you can move to CRuby (MRI).
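If you go with the separate EC2 instance, Sidekiq will pick up the shared Redis from the REDIS_URL environment variable; a minimal sketch (redis.example.com is a placeholder for your Redis host):

REDIS_URL=redis://redis.example.com:6379/0 bundle exec sidekiq -e production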
Hope this helps!
I'm working on a project that I haven't touched in about 4 months. Before, everything on the deploy was working fine, but now I'm getting an error when trying to deploy an update.
Failed to pull Docker image amazon/aws-eb-python:3.4.2-onbuild-3.5.1: Pulling repository amazon/aws-eb-python time="2016-01-17T01:40:45Z" level="fatal" msg="Could not reach any registry endpoint" . Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
In the eb-activity log, it further states [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook/03build.sh] : Activity execution failed, because: Pulling repository amazon/aws-eb-python before repeating what was shown in the UI.
The original was using a Preconfigured Docker 64bit Debian jessie v1.3.1 running Python 3.4. I've tried upgrading to the latest, which is version 2.0.6, but it never completes (no need to get into the specifics of that error; it's a separate issue, and I'd like to stay on 1.3.1 if possible). I've also tried upgrading to the latest 1.x, but it has the same result as upgrading to 2.0.6.
Any ideas, or anything else I should be looking at for clues?
Docker Hub has deprecated pulls from Docker clients on version 1.5 and earlier. Make sure that your Docker client version is above 1.5. See https://blog.docker.com/2015/10/docker-hub-deprecation-1-5/ for more information.
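To verify what the instance is actually running, SSH in and print the client version (a quick check, assuming the EB CLI is configured for your environment):

eb ssh
docker --version   # should report something newer than 1.5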
I have developed a standalone application using Java 7 and Spring. When I deploy the application on Cloud Foundry, everything works fine at the beginning. When I run mvn cf:apps, I see all apps have the status STARTED. However, after some hours the app seems to crash; its status is still STARTED, but when I try to retrieve the logs I get the following error:
[ERROR] Failed to execute goal org.cloudfoundry:cf-maven-plugin:1.0.0.M4:logs (default-cli) on project [....]: An exception was caught while executing Mojo. 500 Internal Server Error -> [Help 1]
When I redeploy the application, it works again, but only for some time. I also noticed the following phenomenon: when I check the deployed apps with the VMC tool instead of Maven, the standalone apps do not show as running; instead their status is 0%:
name            status    usage      runtime  url
standaloneapp1  0%        2 x 512M   java7    standaloneapp1.cloudfoundry.com
standaloneapp2  0%        1 x 512M   java7    standaloneapp2.cloudfoundry.com
webapp          running   1 x 512M   java7    webapp.cloudfoundry.com
I have the following questions:
Is it normal that the VMC tool shows the standalone apps' status as 0%?
How can I get more information about my applications, to find out what goes wrong?
P.S.: My standalone apps seem to require quite a lot of RAM; when I ran them with 128MB or 256MB, I always got out-of-memory errors. When I run the apps locally, they do not need that much RAM; both apps only have a small main method and some beans for RabbitMQ and MongoDB. I'm not sure if this problem is related, though.
To get more information about your apps, use the "vmc logs <appname>" command.
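For example, for one of the apps above:

vmc logs standaloneapp1
vmc crashlogs standaloneapp1   # logs from crashed instances, if your vmc version supports it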
However, Cloud Foundry v1 is going away after June 30th, so you may want to consider migrating your apps to v2, running at run.pivotal.io (new docs at docs.cloudfoundry.com).