Nuxt build vs generate and sitemap

My case:
I want dynamic routes (e.g. /page/1), and the pages should also be accessible without JavaScript for SEO and crawlers, so I load the data with asyncData. That part works fine.
I have Node.js hosting, and currently I deploy with yarn build and yarn start.
But I also want to use @nuxtjs/sitemap, and as far as I can tell the sitemap is only generated with yarn generate (at least in my case).
I'm probably missing something; can you point me in the right direction?
Thank you.

In server mode (yarn dev or yarn build && yarn start), the sitemap.xml file is generated dynamically by the server on each request to your http://example.com/sitemap.xml URL.
In static mode (yarn generate), the sitemap.xml file is generated statically, once, during the nuxt generate process and written into your dist directory.
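
A minimal configuration sketch for that setup (assuming Nuxt 2 and the @nuxtjs/sitemap module; the hostname and the route list below are placeholders):

// nuxt.config.js (sketch only)
export default {
  modules: ['@nuxtjs/sitemap'],
  sitemap: {
    hostname: 'https://example.com',   // placeholder
    // In server mode this callback runs on every request to /sitemap.xml,
    // so newly added dynamic pages show up without a rebuild.
    routes: async () => {
      const ids = [1, 2, 3]            // e.g. fetched from your API
      return ids.map(id => `/page/${id}`)
    }
  }
}

So with yarn build && yarn start on your Node.js host you don't need yarn generate at all; the module serves the sitemap at runtime.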

Related

Django: Restrict frontend developer access to backend code

We run a "classic" django website (rendering templates in the backend) and host the complete source code on github.
We also use the "classic" folder structure:
/source/ # For python related django code and packages
/templates/ # Just the normal django templates
/static/img/ # static images
/static/sass/ # Sass files
/static/css/ # Generated css files from sass (django-pipeline)
For development we use Vagrant and SSH into the machine to start the Django development server. Changes are pushed to GitHub.
This is our current setup, and it works well for developers who are involved in the project.
But we also have, for example, external designers who should not have access to the backend.
Problem:
Designers should only have access to the /static/ folder (locally and in Git), or, if they also want to modify the HTML structure, to the /static/ and /templates/ folders.
So how can they run the project without having access to the backend files? (Policy aside, installing and explaining Vagrant is time-consuming for non-developers ...)
I couldn't find a solution, but I have the following idea:
Create a new repository for the /static/ folder
Set up a server with everything that is needed to run the site
Mount the designers' local /static/ (and /templates/) folders on that server, so that their local changes are served by it. That way the designers don't need access to the backend source and also avoid the Vagrant overhead. (Not sure how to implement this part; one possible sketch is below.)
This is currently the only approach I can see. Is there perhaps a better solution, and how can I implement step 3?
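
Not part of the question, just to illustrate what step 3 could look like: one low-overhead variant is to turn the mount around and have the designer mount the dev server's folders locally over SSH, so their saves land directly in what the running Django app serves. Host names and paths here are made up:

# hypothetical sketch: designer mounts the dev server's static/templates dirs locally via sshfs
sshfs designer@dev-server:/srv/project/static    ~/project/static
sshfs designer@dev-server:/srv/project/templates ~/project/templates
# edits saved in ~/project/... are written straight to the server, which serves them immediately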

Deploy Django app with Docker

I'm attempting to deploy a Django app via Docker, first locally and then to a cloud server. I could not find an answer to my initial question before I attempt this: if I run docker-machine create, I'm guessing this should be run from within my virtualenv, right?
This would then grab all of my app-specific dependencies and begin to build certificates to put in the container? If not, please explain otherwise.
Yes, you are correct.
I will try to help you from my own experience deploying Django apps with Docker.
1) First you need to set up Docker Machine on your local machine; please see the installation instructions. The driver used by default is VirtualBox, i.e. docker-machine create --driver virtualbox default.
2) List the image dependencies your app needs, e.g. nginx, postgres, uwsgi. If you need to fetch an image and then modify it, use a Dockerfile (that is the best practice).
3) I suggest you use docker-compose; it really makes a project much easier to manage. You define all the images your app needs in the docker-compose file (a rough sketch follows below); please read the docker-compose reference.
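
As an illustration (not the author's linked example), a docker-compose.yml for the nginx/postgres/uwsgi stack mentioned above could look roughly like this; the image tags, the WSGI module name and the paths are placeholders:

# docker-compose.yml (sketch only)
version: "2"
services:
  db:
    image: postgres:9.6
    environment:
      POSTGRES_DB: app
      POSTGRES_PASSWORD: secret
  web:
    build: .                                   # built from your own Dockerfile (Django + uwsgi)
    command: uwsgi --http :8000 --module project.wsgi
    volumes:
      - .:/code
    depends_on:
      - db
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    depends_on:
      - web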
Once you have finished developing your app and want to deploy it to a production server (cloud), you just need to copy your project there and run docker-compose; all the image dependencies will be pulled automatically in the cloud.
As a reference, you can see this project (an open source project that I developed). In that project I use a Makefile to manage the docker-compose commands, which makes them easy to work with.
An example of dockerfile
An example of docker-compose.yml
An example of Makefile
Hope this will help you.

Does ember's dist output require node?

When you build an Ember application the output is placed in the dist folder. Can I take this output and stick it into IIS on a server without installing node? I understand that during development I need to have node.js installed. I'm asking if the production host server will require node.js if I'm hosting in IIS?
A built Ember application (the contents of the /dist folder) is composed of files that can be served statically, so there's no requirement for node.js.
You should be able to serve them with IIS without a problem, just make sure you configure the routes properly if you're using the history API (location: 'auto'/location: history).
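For the history-API case, a typical fallback rule rewrites every request that isn't a real file or directory to index.html. A web.config sketch (this assumes the IIS URL Rewrite module is installed; the rule name and fallback path are just examples):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Ember SPA fallback" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <!-- let real files (index.html, assets/*) through untouched -->
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="/index.html" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>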

Re-build files in Ember-CLI without running server

I am planning on moving from plain "EmberJS" to Ember CLI, but I have a small problem. Is it possible to run only the file watcher instead of ember serve, which starts a local server? Since I run my PHP backend on the Google App Script, I already have a local Python HTTP server running on localhost:8080, so I do not need another one on localhost:4200.
If I don't run ember serve, my local changes in the development environment won't get picked up. Is there a better way of doing this? Is it possible to use the assets in the app folder when running in the development environment, and use the dist folder for staging/live environments?
As mentioned in the guide, you can use the build command with the --watch flag.
ember build --watch
That will keep rebuilding your changes but not actually run the server.
As for your second question:
Is it possible to use assets in the app folder when running in development environment? and use dist folder for staging/live environments?
I don't believe so. You can change the output-path property in your .ember-cli config file, but you can't have one that's specific to a certain environment. You could always write a quick script to move the files though. :)
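For what it's worth, .ember-cli is plain JSON, so the change mentioned above is just this (the path is only an example, and it applies to every environment):

{
  "output-path": "../backend/static/app"
}

The "quick script" could then be as simple as copying dist/ into whichever folder your staging/live setup serves.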

best way to deploy jetty application--too many options?

I need to deploy a production version of a web application. So far, I've been testing it with mvn jetty:run. I've used standalone Jetty installations before, but they only seem necessary when you want to serve multiple WARs on the same web server. In some ways this is still the most straightforward option, however (mvn package and copy it over).
My other options are to create a runnable jar (mvn assembly:single) that starts a server, but I need to tweak the configuration so that the static content src/main/webapp is served and the web.xml can be found.
I've also read about a "runnable war". This might avoid the src/main/webapp problem since these files are already laid out in the warfile. I don't know how to go about doing this, however.
I could also stick with mvn jetty:run, but this doesn't seem like the best option because then the production deployment is tied to code instead of being a standalone jar.
Any opinions on the best way or pros and cons of these different approaches? Am I missing some options?
The jetty-console-maven-plugin from simplericity is simple to use and works great. When you run mvn package you get two wars--one that is executable. java -jar mywar.war --help gives usage, which allows a bit of configuration (port, etc.).
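From memory, the Maven setup for that plugin is roughly the following sketch; the coordinates, version and goal name should be double-checked against the plugin's documentation:

<!-- pom.xml, build/plugins section (sketch; verify against the plugin docs) -->
<plugin>
  <groupId>org.simplericity.jettyconsole</groupId>
  <artifactId>jetty-console-maven-plugin</artifactId>
  <version>1.45</version> <!-- example version; check for the latest -->
  <executions>
    <execution>
      <goals>
        <goal>createconsole</goal>
      </goals>
    </execution>
  </executions>
</plugin>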
I'm not that familiar with maven, but this is how we approach deployment using embedded Jetty:
We create a single-file JAR with the embedded Jetty app and the necessary library JARs packed inside.
We deploy the static content in a WAR file (which you can package into the JAR as well). Everything is generated by an ANT build that:
1) Builds the static-files WAR (this also creates the web.xml)
2) Copies the WAR into the application resources
3) Compiles an executable JAR
To get the embedded Jetty to "find and serve" your static files, add the war with a WebAppContext to the Jetty handlers:
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.HandlerList;
import org.eclipse.jetty.webapp.WebAppContext;

Server jetty = new Server(port);
HandlerList handlers = new HandlerList();
// serve the packaged WAR (static files + web.xml) under /static/
WebAppContext staticContentAsWar = new WebAppContext();
staticContentAsWar.setContextPath("/static/");
staticContentAsWar.setWar(resource_Path_to_WAR); // path to the WAR on disk or on the classpath
handlers.addHandler(staticContentAsWar);
jetty.setHandler(handlers);
jetty.start();
HTH