How to run a one-off task on Cloud Foundry to upload data before starting a Python app - cloud-foundry

Hi, this is my first experience deploying a Python app to the cloud using CF. I am having issues deploying my app; I would sincerely appreciate it if anyone could help me or point me in the right direction.
The main problem is that the app I am trying to deploy is large because of its many Python dependencies; the app directory itself is only 200 KB. The first error I observed was that staging fails with "Failed to upload payload for droplet". I think the reason is that once all the Python dependencies from the requirements.txt file are downloaded and the droplet is finally created, it is too large to upload. The droplet size is 982.3 MB.
The first solution I tried was vendoring the app: I created a vendor directory containing all the Python dependencies, but the vendor directory was larger than 1 GB, which pushed the upload past the 1 GB limit and caused the upload of the app files to fail.
The second solution I am working on is to upload all the installed Python libraries to an object store (in my case an S3 bucket that is bound to my app) and then download the dependencies folder, called Pypackages, into the app's root directory, /home/vcap/app. In other words, I want /home/vcap/app/Pypackages to exist before my app starts on the cloud, but I haven't managed to do this successfully yet. I have included a Python script in my app directory which downloads files from the S3 bucket successfully. (I have put the correct absolute download path, i.e. /home/vcap/app/Pypackages, in the downloadS3.py script.) I want to run this script with "python downloadS3.py" as a one-off task. First I tried the solution here: Can I have multiple commands run in a manifest.yml file?
and although I can see via '$ cf tasks my-app-name' that the status of the task is SUCCEEDED, /home/vcap/app/Pypackages does not exist.
I also tried to run one-off task as the steps below:
1-
$ cf push -c 'python downloadS3.py && sleep infinity' -i 1 --no-route
2-
$ cf push -c 'null'
I have printed the contents of /home/vcap/app from my app, i.e. when the app is started and I enter its URL in my browser (I don't know the right way to inspect the contents of the root directory). Anyway, the problem is that Pypackages is not downloaded to the correct root directory. I am not sure whether I am running the one-off task the wrong way or whether there is a better way to make my app work.
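For reference, downloadS3.py is essentially a small boto3 script along the lines of the sketch below (simplified; the bucket name and key prefix are placeholders, and the real script reads credentials from the bound service):
import os
import boto3

# Placeholders for illustration only.
BUCKET = "my-dependency-bucket"
PREFIX = "Pypackages/"
DEST = "/home/vcap/app/Pypackages"

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):  # skip folder marker objects
            continue
        target = os.path.join(DEST, os.path.relpath(key, PREFIX))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        s3.download_file(BUCKET, key, target)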
I appreciate any help!

Diego Cells stage apps and upload the droplet to the blobstore via the Cloud Controller. The maximum file size that can be uploaded is configurable at Ops Manager > TAS for VMs > Application Developer Controls > Maximum File Upload Size (MB); the default is 1024 MB. This seems to be what is causing the restriction; see if you can get it increased with your admin's help...
Tasks run in their own containers, so that is possibly not an option. I think the Python buildpack collects and installs the packages before creating the droplet, so I don't think copying packages directly into the /app directory will be of much help.
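To see why, note that each task runs in a brand-new container created from the droplet, and anything it writes to its filesystem is thrown away when the task exits. A sketch, using the v6 CLI syntax:
# Runs in a fresh container created from the app's droplet:
cf run-task my-app-name "python downloadS3.py"
# Reports SUCCEEDED, but the downloaded files lived in the task's
# container, not the app's, and are gone once the task finishes:
cf tasks my-app-name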
If you have data files, then you can use a .profile file and do some scripting to copy them from S3 or a server/NFS location into the /app directory. Something like:
wget http://s3.location.com/data_files
cp data_files /home/vcap/app/
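As a fuller sketch of that idea, a .profile placed at the root of the app runs inside the app's own container before the start command, so anything it downloads is visible to the app. The bucket URL and file names below are placeholders:
#!/bin/bash
# .profile -- Cloud Foundry sources this in the app container
# before running the start command.
mkdir -p /home/vcap/app/data
wget -q -O /tmp/data_files.tar.gz https://s3.amazonaws.com/my-bucket/data_files.tar.gz
tar xzf /tmp/data_files.tar.gz -C /home/vcap/app/data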
But if all of these are packages and increasing the size limit is not feasible, then you may need to look at breaking the app up.

Related

AWS Elastic Beanstalk - .ebextensions

My app currently uses a folder called "Documents" that is located in the root of the app. This is where it stores supporting docs, temporary files, uploaded files etc. I'm trying to move my app from Azure to Beanstalk and I don't know how to give permissions to this folder and sub-folders. I think it's supposed to be done using .ebextensions but I don't know how to format the config file. Can someone suggest how this config file should look? This is an ASP.NET app running on Windows/IIS.
Unfortunately, you cannot use .ebextensions to set permissions on files/folders within your deployment directory.
If you look at the event hooks for an elastic beanstalk deployment:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html#windows-container-commands
You'll find that commands run before the EC2 app and web server are set up, and
container_commands run after the EC2 app and web server are set up, but before your application version is deployed.
The solution is to use a wpp.targets file to set the necessary ACLs.
The following SO post is most useful
Can Web Deploy's setAcl provider be used on a sub-directory?
Given below is a sample .ebextensions config file that creates a directory and a file, modifies the directory's permissions, and adds some content to the file:
====== .ebextensions/custom_directory.config ======
commands:
  create_directory:
    command: mkdir C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory
  set_permissions:
    command: cacls C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory /t /e /g username:W
files:
  "C:/inetpub/AspNetCoreWebApps/backgroundtasks/mydirectory/mytestfile.txt":
    content: |
      This is my Sample file created from ebextensions
ebextensions files go into the root of the application source code, in a directory called .ebextensions. For more information on how to use ebextensions, please go through the documentation here.
Place a file 01_fix_permissions.config inside the .ebextensions folder:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo chown -R ec2-user:ec2-user tmp/
Following that, you can set your folder permissions as you want.
See this answer on Serverfault.
There are platform hooks that you can use to run scripts at various points during deployment, which can get you around the shortcomings of the .ebextensions Commands and Platform Commands that Napoli describes.
There seems to be some debate on whether or not this setup is officially supported, but judging by comments made on the AWS GitHub, it seems not to be explicitly prohibited.
I can see where Napoli's answer could be the more standard MS way of doing things, but wpp.targets looks like hot trash IMO.
The general scheme of that answer is to use Commands/Platform commands to copy a script file into the appropriate platform hook directory (/opt/elasticbeanstalk/hooks or C:\Program Files\Amazon\ElasticBeanstalk\hooks\ ) to run at your desired stage of deployment.
I think it's worth noting that differences exist between platforms and versions, such as Amazon Linux 1 and Amazon Linux 2.
I hope this helps someone. It took me a day to gather that info and what's on this page and pick what I liked best.
Edit 11/4 - I would like to note that I saw some inconsistencies with the Files .ebextensions directive when trying to place scripts directly into the platform hook directories during repeated deployments. Specifically, the Files directive failed to correctly move the backup copies named .bak/.bak1/etc. I would suggest using a Container Command to copy, with overwriting, from another directory into the desired hook directory to overcome this issue.

Cannot chmod file on Openshift online v3 : Operation not permitted

I am migrating a Django application from OpenShift v2 to v3. (In case you don't know, Red Hat is shutting down v2 on September 30th; see: https://blog.openshift.com/migrate-to-v3-v2-eol/)
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all these Docker/Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I log into the failing container as debug and can see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did in my local repo. In any case, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details: any S2I builder image will gladly use a custom-supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
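For illustration, a minimal custom run script might look like the sketch below; it assumes a Django project served by gunicorn, and the module name myproject is a placeholder:
#!/bin/bash
# .s2i/bin/run -- S2I uses this instead of the image's default run script.
cd /opt/app-root/src
exec gunicorn myproject.wsgi:application --bind 0.0.0.0:8080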
Regarding your immediate problem, there is a very simple reason why you cannot change the permissions of the script: you were trying to modify the permissions in the deployed pod, not in the builder pod. Deployed pods run under different UIDs, usually somewhere in the range of 100000000, which definitely do not match the file ownership generated by the build. Hence the permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, here is the way I found to resolve it.
You have to make your app.sh file executable and push it to your repo as such.
If git does not track this modification, as it didn't for me, use git update-index --chmod=+x app.sh to make it work.

AWS Elastic Beanstalk deploy not working

I'm new to AWS Elastic Beanstalk. I'm trying to deploy a new application through awsebcli, and I'm getting the following error:
"Error: OSError :: [WinError 145] The directory is not empty '.elasticbeanstalk\app_versions'
I was able to init the eb application. I am running the command line with administrator privileges.
Please Help.
I've just run into the same issue.
"eb deploy" temporarily creates a subfolder app_versions in the .elasticbeanstalk folder at the root of the project, containing the zip file to be uploaded to S3. Once done, the folder gets deleted. Check whether any software on your computer might be preventing this.
The cause for me was file-syncing software (Dropbox-like) that was watching the entire project for file/folder changes.
I'm developing a Django application and I get this message -
Uploading app to S3. This may take a while. Upload Complete.
How to fix it every time it happens:
1- Disable/pause file-syncing applications, such as Google Drive Sync/OneDrive/Dropbox.
2- Delete mysite\.elasticbeanstalk\app_versions if it exists; don't worry, it's created each time you type "eb deploy".
3- Open a command prompt in the folder mysite\ and run the command
pip freeze > requirements.txt
4- Navigate to mysite\ and run eb deploy again; it should work.

How do I upload a webpage to Bluemix using the cf CLI?

I'm trying to upload an index.html page to Bluemix using the cf CLI. I'm not sure if I'm approaching this with the right mentality. I'm thinking of uploading this HTML file the way we usually do with normal hosting services, through FTP. With Bluemix, I assume I should be using the push command in cf and treating this index.html as an app. Is this right?
If it is, I'm not getting how to use this command. Can you give me an example of the full command to push/upload this page?
The cf push command would be the one to use to 'upload' your application to the Bluemix server. However, it does more than just upload. In Bluemix there is the concept of a runtime, or buildpack, the idea being that this will be the runtime to run your application. So if you uploaded a Java application, you would pair it with the Java Liberty buildpack/runtime; if you uploaded a PHP application, you would pair it with the PHP buildpack.
If you pushed just an HTML file with no buildpack, you would likely get an error indicating the buildpack could not be determined. Bluemix tries to guess the type of buildpack you want based on the type of files uploaded, and then pulls the buildpack from an internal cache. The cf push command allows you to explicitly state the buildpack to use with -b, so there is no guesswork and no need to rely on only the buildpacks that Bluemix currently knows about.
In your case, for a static HTML file, you need some type of HTTP server, such as nginx, as the 'runtime'. Note that Bluemix currently does not have a built-in buildpack for this, so you'd have to get one from somewhere else. There are a few buildpacks available already, but the best one to use is this one: https://github.com/cloudfoundry-community/staticfile-buildpack . To use it, simply supply that URL with the -b option on the cf push command from the root directory of your application, i.e.
cf push yourappname -b https://github.com/cloudfoundry-community/staticfile-buildpack
Be sure you are issuing this command from your app directory.
The yourappname will be part of the URL for your website/app
For an actual example, suppose your index.html lives in the folder C:\Users\XYZ\Documents\projects\ProjectHelloWorld and we call the app HelloWorld. Here is what we would do:
C:\> cd C:\Users\XYZ\Documents\projects\ProjectHelloWorld
C:\Users\XYZ\Documents\projects\ProjectHelloWorld> cf push HelloWorld -b https://github.com/cloudfoundry-community/staticfile-buildpack
Bluemix will then upload everything in that local directory to the server, grab the buildpack from the URL, stage your application code with the buildpack, and attempt to start the application. This is example Bluemix output when the push command succeeds:
Creating app HelloWorld in org xyz@gmail.com / space test as xyz@gmail.com...
OK
Creating route HelloWorld.mybluemix.net...
OK
Binding HelloWorld.mybluemix.net to HelloWorld...
OK
Uploading HelloWorld...
Uploading app files from: C:\Users\XYZ\Documents\projects\ProjectHelloWorld
Uploading 1M, 21 files
Done uploading
OK
Starting app HelloWorld in org xyz@gmail.com / space test as xyz@gmail.com...
-----> Downloaded app package (960K)
Cloning into '/tmp/buildpacks/staticfile-buildpack'...
grep: Staticfile: No such file or directory
-----> Using root folder
-----> Copying project files into public/
-----> Setting up nginx
grep: Staticfile: No such file or directory
-----> Uploading droplet (3.4M)
1 of 1 instances running
App started
OK
Showing health and status for app HelloWorld in org xyz@gmail.com / space
test as xyz@gmail.com...
OK
requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: HelloWorld.mybluemix.net
last uploaded: Tue Nov 25 14:50:44 +0000 2014
For more details:
See the GitHub page for the buildpack on how to structure your application (public folder, etc.).
See the Bluemix Docs website; it has a lot of demos and examples.
See Takehiko Amano's Bluemix demo; it is a good and easy-to-understand demo.
You can deploy your app either directly using "cf push ..." or via a manifest.yml file. If you create a manifest.yml file inside your app's code path, a plain cf push is sufficient; a sketch of such a manifest follows below.
Here is a reference link for this:
http://clouds-with-carl.blogspot.in/2014/02/deploy-minimal-nodejs-application-to.html
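For illustration, a minimal manifest.yml for the static site discussed in this thread might look like this (a sketch; the app name and memory figure are placeholders):
applications:
- name: yourappname
  memory: 64M
  buildpack: https://github.com/cloudfoundry-community/staticfile-buildpack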
Hope it clears your doubt!!
Yeah, as whitfiea mentioned, it's pretty simple. You need to use the cf push command. For example, say you had a static website with an index.html file, like the following:
[02:30 PM] jsloyer@jeffs-mbp [friendme]>ls
index.html
To push that app to Bluemix, run the following:
cf push yourappname -b https://github.com/cloudfoundry-community/staticfile-buildpack.git
See https://www.ng.bluemix.net/docs/#starters/index.html
In it, browse Creating Web Apps -> Building a web app -> Uploading an app.
It says:
You can use a sample Java™ web application to get started. This sample application displays the list of environment variables that are available. You can download the sample Java web application from the community sample site. The sample application contains a single JSP and the WEB-INF/web.xml file.
Extract the downloaded file, and a new directory that contains the application is created. From the newly created application directory, issue the cf push command. In the following example, you can use a unique name testEnv for the application and 512M for memory allocation. The name must be unique in the whole Bluemix environment.
$ cf push testEnv -m 512m
So, as per your requirement, you can add your HTML file alongside the JSP file before uploading the application.
Hopefully this helps...

Bitnami Redmine backup strategy

We started using Redmine at work. I know it uses MySQL as the database and Apache 2 as the web server. How can Redmine be properly backed up so that it can be restored quickly when anything goes wrong?
This will do just fine:
mysqldump --single-transaction --user=user_name --password=your_password redmine_database > backup.sql
It will dump the entire contents of the redmine_database to the backup.sql file.
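To reload it later (assuming the target database already exists), pipe the dump back in:
mysql --user=user_name --password=your_password redmine_database < backup.sql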
Update:
As far as backing up "apache" goes, as I state in my comment below, you don't need or want to back up your Apache installation. If you ever need to recover your system, Apache would need to be reinstalled, as with any other application. If you are referring to the actual files and directories within your Redmine installation, those likewise don't need to be backed up, except for the files/ directory, which contains the files users have uploaded to Redmine. To be safe, you can back up your entire Redmine installation with the following command:
tar czvf redmine_backup.tar.gz /path/to/redmine/installation
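To restore, extract the archive back to the filesystem root; GNU tar strips the leading slash when archiving, so -C / puts the files back where they came from:
tar xzvf redmine_backup.tar.gz -C /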
Run it as a VM (JumpBox has a quickstartable one, I believe), then periodically pause or shut down the VM and back up/copy the entire virtual disk.
I know this doesn't help with an existing installation, but it's what I'd recommend to anyone planning backups before they implement. That's not meant to be snide, just helpful to anyone else reading this thread.
Bitnami apps are self-contained, so another option, if you can afford some downtime, is simply to shut down the server and zip up the directory contents... You may want to do this maybe once a week, in addition to your mysqldump backups. This way you also capture any changes that may have happened in Apache, etc.
Read the Redmine user guide (look at the bottom).
Also, don't forget to back up the attached files.
Redmine backups should include:
Data (stored in your Redmine database)
Attachments (stored in the files directory of your Redmine install)
Here is a simple shell script that can be used for daily backups (assuming you're using a MySQL database):
# Database
/usr/bin/mysqldump -u <username> -p<password> <redmine_database> | gzip > /path/to/backup/db/redmine_`date +%y_%m_%d`.gz
# Attachments
rsync -a /path/to/redmine/files /path/to/backup/files
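To run this every night, a crontab entry along these lines would work, assuming the two commands above are saved as a shell script (the path is a placeholder):
# Run the Redmine backup script every night at 02:30
30 2 * * * /path/to/backup/redmine_backup.sh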
Redmine sets the table charset to "latin1".
So, if you use a non-latin1 charset (CJK in UTF-8, or similar), you should pass the following options to your backup command:
mysqldump -u root -p --default-character-set=latin1 --skip-set-charset bitnami_redmine -r backup.sql
This omits the "SET NAMES ..." charset statements from the SQL dump, so you get a clean dump (i.e. a dump without charset interpretation).
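When restoring such a dump, pass the same character set so the bytes are read back unmodified (a sketch under the same assumptions):
mysql -u root -p --default-character-set=latin1 bitnami_redmine < backup.sql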
By the way, you have to back up the files directory as well; it holds all uploaded files. I installed the Bitnami Redmine stack on Windows.
For MySQL, I use MySQLAdmin to schedule a database backup every day.
And I use aceBackup to automatically back up the database dump files and Redmine's uploaded files to a remote FTP server.
When something goes wrong with the server, I can just reinstall the Bitnami Redmine stack, import the previously dumped database file, and then overwrite Redmine's files directory with the backup files.
And that's it.
This separates the program (the Bitnami Redmine stack) from the data (database & uploaded files) perfectly.