I'm currently running a Django app on Azure. It connects to a MySQL database over SSL. The SSL certificates I need to reach the server are checked into the repo, and I have my Django settings file pointing to them using a relative path.
I have Azure set up to do continuous deployment from Bitbucket. The problem is that, at the end of the deployment, it copies over all the files EXCEPT the .pem files that I need.
I have to manually copy over the certificates every time I push a commit. The files are in static/certs/*.pem.
Is there something wrong with Azure? Or Bitbucket? Or is there a better way of doing this?
I figured it out. Anything put manually inside the static folder gets cleaned out by Azure during deployment.
Just don't put anything inside the static folder.
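For example, the certs can live in a top-level certs/ directory instead, with the settings pointing there. A minimal sketch; the paths, host, and names below are illustrative, not the exact original setup:

    # settings.py -- a sketch only; paths, host and credentials are illustrative
    import os

    BASE_DIR = os.path.dirname(os.path.abspath(__file__))

    # Certs live in a top-level certs/ directory, NOT under static/,
    # so Azure's deployment cleanup leaves them alone.
    CERT_DIR = os.path.join(BASE_DIR, 'certs')

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'mydb',
            'HOST': 'myserver.mysql.database.azure.com',
            'USER': 'myuser',
            'PASSWORD': os.environ.get('DB_PASSWORD', ''),
            'OPTIONS': {
                'ssl': {
                    'ca': os.path.join(CERT_DIR, 'ca.pem'),
                    'cert': os.path.join(CERT_DIR, 'client-cert.pem'),
                    'key': os.path.join(CERT_DIR, 'client-key.pem'),
                },
            },
        },
    }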
I have deployed my application using copilot deploy, which works. It creates a load balancer, and when I go to the designated URL I can view my React app. However, I'm trying to create a CI workflow using GitHub Actions.
My GitHub Actions workflow appears to run and deploy the app. But when I go to the new designated URL, I get Uncaught SyntaxError: Unexpected token '<' (which suggests the server is returning HTML where the browser expects the JavaScript bundle).
If I go to that same URL and hit a specific route on it, that actually does work. So I can do url/test and it returns "hello world", but it won't return the bundle for the application, or it's returning a broken version of it for some reason.
I can't figure out why using copilot deploy normally works, but this doesn't.
For context, my app is set up like this. In the root folder there is a Server folder that has the Node server file with the routes. In the root folder there is also a src folder with the React code. There is a public folder, there is the Dockerfile containing the build instructions, and then there is the build folder. So far I've been generating the build ahead of time and then deploying everything; the Node server then serves the build.
So presumably, something about the way the Docker container is built via GitHub Actions is significantly different from the way it is built using copilot deploy. But my understanding is that in both cases it follows the same Dockerfile. So I can't figure out what is different about the directory structure it is creating, or maybe it's having trouble creating the bundle at all. If anyone has any insight, it would be appreciated.
Thanks!
I have a Docker-based Flask app that I have been developing, and it's nearing completion. I am currently moving to hosting it on AWS. The app allows users to generate various forms of content (usually image files) that are saved into a UGC folder within the /static folder of the app in my dev environment. This temporary solution worked fine in dev, but it isn't going to suffice in production, as the static/ugc folder will be destroyed each time the container image is updated.
I therefore need an alternative solution and have been investigating EFS. Does anybody have experience with this service, or with hosting persistent static files outside of a Docker app container in general, and could advise?
You should probably look at using the S3 object storage service, via the boto3 Python client.
There's also a Flask extension, Flask-S3, which allows you to host general assets on S3 automatically. You'd probably need to code the logic for user-uploaded content yourself.
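A minimal sketch of the boto3 route for the user-generated files. The bucket name, key layout, and content type here are placeholders, not a known setup:

    import boto3

    s3 = boto3.client('s3')
    BUCKET = 'my-app-ugc'  # placeholder bucket name

    def save_ugc(local_path, filename):
        """Upload a generated file to S3 and return a URL the app can serve."""
        key = 'ugc/' + filename
        s3.upload_file(local_path, BUCKET, key,
                       ExtraArgs={'ContentType': 'image/png'})  # adjust per file type
        return 'https://{0}.s3.amazonaws.com/{1}'.format(BUCKET, key)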
I am trying to deploy a C++ HTTP web server on OpenShift 3, and I referred to this.
My questions are:
Shall I put the source code on OpenShift, or compile it first and then put the executable file on OpenShift?
Is it possible to access the OpenShift 3 server via Xshell or FTP?
Is there any way to get an OpenShift 2 account?
It is no longer possible to get accounts on OpenShift 2.
For OpenShift 3, if you wanted to use a custom HTTP server you would need to be able to build a Docker image which includes it and any other files you need. If you can get the Docker image built, then you can deploy it to OpenShift 3.
Although you can get an interactive terminal in the container which runs your application, it doesn't work like traditional web hosting. That is, it isn't a shell access account where you would upload files using FTP or some other means.
Can you explain more about what it is you want to host? Depending on what you are doing there may be builder images already supported by OpenShift which can pull down files from a Git repository and build an image for you.
If OpenShift is new to you, I would suggest you try out:
https://learn.openshift.com
so you understand some of what it can do and how you interact with it.
Also grab the free eBook and read it:
https://www.openshift.com/promotions/for-developers.html
I have created an Angular 2 form which posts the form data to a Postgres DB using a REST API. Now I want to serve my Angular 2 app from AWS S3. I googled this and found that bundling with webpack is a solution, but I was not able to create one. I want to know where to start in order to bundle my code and serve it on S3.
GitHub link for Form: https://github.com/aanirudhraj/Angular2form_signaturepad_API
Thanks for the Help!!
The quickest way is to build the app using angular-cli and then deploy the content of the 'dist' directory as a static site in S3 (an S3 bucket can be configured to host a static site; make sure you assign read permission to 'anybody' to avoid HTTP 4xx return codes).
You just need to host it as a static site on S3.
Check this: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
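If you prefer to script the bucket setup and upload rather than use the console, here is a rough boto3 sketch. The bucket name is a placeholder, and the bucket must already exist:

    import mimetypes
    import os
    import boto3

    # Sketch: configure an existing S3 bucket as a static website and upload
    # the angular-cli build output.
    s3 = boto3.client('s3')
    BUCKET = 'my-angular-app'  # placeholder

    s3.put_bucket_website(
        Bucket=BUCKET,
        WebsiteConfiguration={
            'IndexDocument': {'Suffix': 'index.html'},
            # Sending errors to index.html keeps deep links into the SPA working.
            'ErrorDocument': {'Key': 'index.html'},
        },
    )

    for root, _, files in os.walk('dist'):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, 'dist').replace(os.sep, '/')
            content_type = mimetypes.guess_type(path)[0] or 'application/octet-stream'
            # 'public-read' gives the anonymous read access mentioned above.
            s3.upload_file(path, BUCKET, key,
                           ExtraArgs={'ContentType': content_type, 'ACL': 'public-read'})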
I infer from your code that you are using angular-cli.
Create a dev/production build
ng build --dev / ng build --prod
Your dist folder will contain the bundled files for deployment. Your primary file of reference will be 'index.html', as this is what loads your Angular app.
You need to decide what kind of server you'll be using to serve your webapp.
For development, when we do ng serve, webpack-dev-server is used as a static file server (local development). For an actual deployment, I recommend going with the most comfortable/cost-effective solution you can have (see the sketch after this list).
Static file server options:
Directly hosting the website in AWS S3 as a static website.
ASP.NET Core with static file server middleware. (*)
Node.js Express with static file server middleware. (*)
Java servlet for serving static files. (*)
(*) These approaches will also allow you to have some server-side code if you require it in the future.
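As an illustration of the (*) pattern (shown here in Python/Flask purely for brevity; the Express and ASP.NET Core versions follow the same shape): serve the dist folder as static files and fall back to index.html so the client-side routes still load.

    import os
    from flask import Flask, send_from_directory

    # A sketch of a static file server with an SPA fallback; 'dist' is the
    # angular-cli build output directory.
    app = Flask(__name__, static_folder=None)
    DIST = 'dist'

    @app.route('/', defaults={'path': 'index.html'})
    @app.route('/<path:path>')
    def serve(path):
        if os.path.isfile(os.path.join(DIST, path)):
            return send_from_directory(DIST, path)
        # Unknown paths get index.html; the Angular router takes over in the browser.
        return send_from_directory(DIST, 'index.html')

    if __name__ == '__main__':
        app.run(port=8080)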
When you deploy your ng2 app, you should use AOT (ahead-of-time) compilation.
I guess you are currently using JIT (just-in-time) compilation.
From the Angular 2 guide:
With AOT, the browser downloads a pre-compiled version of the application. The browser loads executable code so it can render the application immediately, without waiting to compile the app first.
With JIT compilation, your browser downloads vendor.js, which includes the Angular 2 compiler, and compiles your app just in time. This is slow, and your clients have to download the vendor file. With AOT, the compiler does not have to be shipped to the browser, so the downloaded resources are smaller.
I recommend using AOT compilation when you deploy your app, along with lazy loading to keep resource sizes down.
If you are curious about ng2 AOT compilation, read this guide.
angular2-cookbook-AOT
And here is an example Angular 2 app with webpack 2 and lazy loading.
Use the file structure and config files in there.
When I tested with the example app, the files bundled with AOT were smaller than 500KB.
angular2-webpack2-aot
When you use AOT compilation with @ngtools/webpack or whatever else,
just put all the files from the dist directory (the AOT-compiled output) in your S3 bucket, and I recommend using an AWS CloudFront cache in front of your S3 bucket resources.
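If you do put CloudFront in front of the bucket, remember that cached copies of old bundles can be served after a deploy. A small boto3 sketch for invalidating the cache; the distribution ID is a placeholder:

    import time
    import boto3

    # Sketch: invalidate the CloudFront cache after uploading a new build.
    # The distribution ID below is a placeholder for your own.
    cloudfront = boto3.client('cloudfront')
    cloudfront.create_invalidation(
        DistributionId='E1234EXAMPLE',
        InvalidationBatch={
            'Paths': {'Quantity': 1, 'Items': ['/*']},
            'CallerReference': str(time.time()),  # must be unique per request
        },
    )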
I have deployed a Rails Spree app to AWS Elastic Beanstalk successfully. Then I added some new products together with their images. By default, the app saves the images in the my_app/public/spree/products folder. Everything went fine until I deployed new code. The new code deployed successfully, but the "products" folder is gone. I now have to re-upload all of my images manually. Does anyone have any idea what is going on here?
Please let me know if you need any further info.
Thanks!!
The application contains a public folder, and under this we have the Spree products/taxons images folders. Those files are static by nature, but every deployment replaces our code (and the public folder with it), so it's a good idea to serve them from S3 instead.
Elastic Beanstalk servers are somewhat out of your control, meaning AWS controls when they are restarted and even terminated and rebuilt. Therefore, you shouldn't store anything on their local disks (which is what Spree does by default).
The solution to your problem is simply to store them on S3 as described here.