Does anyone have a concrete example of having Node.js in a virtual environment and then using it in WebStorm?

I now know one can't create a virtual environment (VENV) using WebStorm, so apparently the only option available for creating a VENV for a Node.js application is 'nodeenv'.
Has anyone already had success using nodeenv to create a VENV for Node.js and then using that VENV as part of a WebStorm-based project? If so, would you please outline the steps taken to use a VENV with WebStorm?

Using WebStorm 2021.2 on Ubuntu 18.04.
So... I figured it out! Here are the steps (a condensed command recap follows the list):
1. Install the Python-based virtualenv using: sudo apt-get install virtualenv
2. Open WebStorm
     1) Left-click on ‘New Project’ button
     2) Navigate to directory where you want to put your project
     3) Left-click on ‘New Folder’ icon
     4) Type in the name of your project
     5) Left-click ‘Ok’
     6) Left-click ‘Create’
3. Open terminal in WebStorm
     1) ‘(base)’ will be showing on the left
     2) type ‘virtualenv wrapper_env’ (but one may use any name in place of ‘wrapper_env’ for the virtual environment)
     3) type ‘. wrapper_env/bin/activate’
     4) ‘(wrapper_env) (base)’ will be showing on the left (required to install and use nodeenv)
     5) type ‘pip install nodeenv’
          1- Successfully installed nodeenv-1.6.0
     6) type ‘nodeenv nodejs_env’ (but one may use any name in place of ‘nodejs_env’ for the virtual environment)
          1- * Install prebuilt node (16.6.2) ..... done. (The latest node and npm versions are automatically installed)
     7) type ‘. nodejs_env/bin/activate’
     8) ‘(nodejs_env) (wrapper_env) (base)’ will be showing on the left
     9) type ‘node -v’ to verify node
          1- v16.6.2
    10) type ‘npm -v’ to verify node package manager
          1- 7.20.3
    11) Change settings in WebStorm by: (must be done whenever a new nodeenv environment is created)
          1- Settings > Languages & Frameworks > Node.js and NPM
               1> Left-click on drop-down arrow on right side of ‘Node interpreter:’ line
                    1. Left-click on ‘Add...’
                     2. Left-click on ‘Add Local...’ and navigate to where the ‘node’ interpreter is within nodejs_env/bin
              2> The ‘Package manager:’ line should be automatically updated with the compatible npm
    12) Deactivate both virtual environments by:
          1- Type ‘deactivate_node’ (deactivates nodejs_env)
              1> ‘(wrapper_env) (base)’ will be showing on the left
          2- Type ‘deactivate’ (deactivates wrapper_env)
              1> ‘(base)’ will be showing on the left
    13) Close project
    14) Open a project having a nodeenv virtual environment already created
    15) Open terminal in WebStorm:
          1- ‘(base)’ will be showing on the left
          2- type ‘. nodejs_env/bin/activate’
               1> ‘(nodejs_env) (base)’ will be showing on the left
    16) When done with the work session:
          1- Type ‘deactivate_node’
          2- Close project
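For reference, a condensed recap of the terminal commands from the steps above (the environment names are the ones used in the walkthrough; any names will do):

sudo apt-get install virtualenv    # one-time: Python-based virtualenv
virtualenv wrapper_env             # Python env that will host nodeenv
. wrapper_env/bin/activate
pip install nodeenv
nodeenv nodejs_env                 # installs a prebuilt node + npm
. nodejs_env/bin/activate
node -v && npm -v                  # verify: v16.6.2 / 7.20.3
deactivate_node                    # leave the node env
deactivate                         # leave the wrapper env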


How to add CORS to the response header of an oat++ HLS server

auto response = controller->createResponse(Status::CODE_200, controller->livePlaylist->generateForTime(time, 5)->toString());
response->putHeader("Accept-Ranges", "bytes");
//response->putHeader(allow_origin = "*");
response->putHeader(Header::CONNECTION, Header::Value::CONNECTION_KEEP_ALIVE);
response->putHeader(Header::CONTENT_TYPE, "application/x-mpegURL");
response->putHeader(Header::CORS_METHODS, "GET, POST, PUT, OPTIONS, DELETE");
response->putHeader(Header::CORS_ORIGIN, "*");
response->putHeader(Header::CORS_HEADERS, "DNT, User-Agent, X-Requested-With, If-Modified-Since, Cache-Control, Content-Type, Range");
response->putHeader(Header::CORS_MAX_AGE, "1728000");
return _return(response);
This is the response of an async endpoint.
Is this a proper way of adding CORS headers to the response of an HLS stream using the oat++ framework?
What is the correct method for adding CORS in the oat++ framework?
Oat++ has built-in functionality to handle CORS.
Just add these request and response interceptors to your AsyncHttpConnectionHandler:
#include "oatpp/web/server/interceptor/AllowCorsGlobal.hpp"
...
OATPP_CREATE_COMPONENT(std::shared_ptr<oatpp::network::ConnectionHandler>, serverConnectionHandler)([] {
  OATPP_COMPONENT(std::shared_ptr<oatpp::web::server::HttpRouter>, router); // get Router component
  auto connectionHandler = oatpp::web::server::AsyncHttpConnectionHandler::createShared(router);

  /* Add CORS-enabling interceptors */
  connectionHandler->addRequestInterceptor(std::make_shared<oatpp::web::server::interceptor::AllowOptionsGlobal>());
  connectionHandler->addResponseInterceptor(std::make_shared<oatpp::web::server::interceptor::AllowCorsGlobal>());

  return connectionHandler;
}());
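To sanity-check the interceptors, one can send a preflight request by hand. A sketch, assuming the server listens on localhost:8000 and /playlist is one of your endpoints (both are placeholders):

# Hypothetical preflight probe; the CORS headers should appear in the response
curl -i -X OPTIONS http://localhost:8000/playlist \
  -H "Origin: http://example.com" \
  -H "Access-Control-Request-Method: GET"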

AWS EB + Nginx, update access.log format or create new log

I'm running an app on AWS' Elastic Beanstalk using the configuration Node.js running on 64bit Amazon Linux/4.5.0, with Nginx.
I would like to add the request header "X-My-Header" as a field to the access.log. Barring that, I would settle for creating a new log file that combines nginx's default combined format with my header. I've found several similar questions specifically about logging with nginx, but the EB aspect throws an extra curveball, since the nginx configs are updated through an /.ebextensions config file.
I've accomplished creating a log file, but it isn't getting populated with anything. I also tried just updating the access.log format, but that doesn't seem to have taken effect either. I saw other people logging headers with the "$http_" prefix: a request header such as "X-Header-Example" becomes "$http_x_header_example" (compare "$http_user_agent" in nginx's default combined format). Not wanting to waste time on that assumption, note that I added both "$http_x-my-header" and "$http_x_my_header".
Attempt 1: Update existing access.log format
files:
  /etc/nginx/conf.d/01_proxy.conf:
    owner: root
    group: root
    content: |
      log_format my_log_format '$remote_addr - $remote_user [$time_local] "$request" '
                               '$status $body_bytes_sent "$http_referer" '
                               '"$http_user_agent" - "$http_x_my_header" - "$http_x-my-header"';
      access_log /var/log/nginx/access.log my_log_format;
Result: access.log does not include any additional fields. It doesn't even show the empty ""s or the - separators.
Attempt 2: Create a new log file
files:
  /etc/nginx/conf.d/01_proxy.conf:
    owner: root
    group: root
    content: |
      log_format my_log_format '$remote_addr - $remote_user [$time_local] "$request" '
                               '$status $body_bytes_sent "$http_referer" '
                               '"$http_user_agent" - "$http_x_my_header" - "$http_x-my-header"';
      access_log /var/log/nginx/new_log.log my_log_format;
Result: new_log.log now appears in /var/log/nginx when I export logs from the EB dashboard. However, it's completely empty.
I read some other similar questions mentioning deleting files and restarting the server sometimes helps. I tried restarting the application and even completely rebuilding the environment through the EB dashboard, and neither led to different results.
I largely based my solution on this medium article, section 2.1. However, when I tried adding the container_command to my .config file, my entire environment stopped working. I had to revert to a different deployment, and then rebuild the environment to get it running again.
Any tips?
My goal is to associate this request header with the requests coming in. Ideally I could update the existing default access.log. I will settle for a separate file. Or, if you have any other suggestions as to how I may be able to get access to this info, I'm all ears! Thanks.
Edit: A new attempt.
Here it shows that you can completely replace the default nginx.config, so I tried removing my other file and instead copy/pasting the default from the medium article mentioned before into a /.ebextensions/nginx/nginx.config file, adding my changes there. I updated log_format main to include my "$http_x_my_header" values.
Unfortunately, the deployment failed with this message:
The configuration file .ebextensions/nginx/nginx.config in application version contains invalid YAML or JSON. YAML exception: Invalid Yaml: expected '&lt;document start&gt;', but found Scalar in "&lt;reader&gt;", line 7, column 1: include /usr/share/nginx/modules ... ^ , JSON exception: Invalid JSON: Unexpected character (u) at position 0.. Update the configuration file.
The offending line is include /usr/share/nginx/modules, which exists and works fine in the default that the medium article provided.
I was hoping this would be a dirty fix that I could at least get some results from, but alas, it seems to have another roadblock.
I've answered this question through my own answer in a similar question:
AWS EB + nginx: Update access.log format to obfuscate sensitive get request parameters
The short of it: for Node AWS EB environments, the server directive of the nginx config lives inside an auto-generated 00_elastic_beanstalk_proxy.conf file. In there, they call access_log /var/log/nginx/access.log main, so an .ebextensions config trying to change access_log gets overridden.
My solution was twofold: override the main log_format by uploading a custom nginx.conf based on the default (AWS says you can do this, but recommends basing it on the one generated by default and re-checking it whenever you update the environment's platform version), and I also had to do the same with the auto-generated file to perform some logic that sets the new variable I wanted to log.
For more details, see the answer linked above, which has more information on the process.
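A structural sketch of what that looks like as an .ebextensions config (the content blocks are placeholders, not the linked answer's exact files; copy the real defaults from a running instance of your environment first):

files:
  /etc/nginx/nginx.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      # ... the platform's default nginx.conf, with log_format main
      # extended to include "$http_x_my_header" ...
  /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      # ... the auto-generated proxy conf, with its access_log line
      # still pointing at the (now extended) main format ...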
My solution is to override the nginx.conf.
The AWS docs for the Node.js platform say to delete the existing /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf and replace it with your own config. I have tested that this works as well. For the Docker platform, you need to delete /sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf instead.
The following code is for the Docker platform only. Version: Docker running on 64bit Amazon Linux/2.16.7.
You should find the default nginx.conf for your platform and base your copy on it.
Add .ebextensions/nginx/00-my-proxy.config to your build folder:
files:
  /etc/nginx/nginx.conf:
    mode: "000644"
    owner: root
    group: root
    content: |
      # Elastic Beanstalk Nginx Configuration File
      user nginx;
      worker_processes auto;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          include /etc/nginx/mime.types;
          default_type application/octet-stream;
          access_log /var/log/nginx/access.log;

          log_format healthd '$msec"$uri"$status"$request_time"$upstream_response_time"$http_x_forwarded_for';

          upstream docker {
              server 172.17.0.2:3000;
              keepalive 256;
          }

          log_format timed_combined '"$http_x_forwarded_for"'
                                    '$remote_addr - $remote_user [$time_local] '
                                    '"$request" $status $body_bytes_sent '
                                    '"$http_referer" "$http_user_agent" '
                                    '$request_time $upstream_response_time $pipe';

          map $http_upgrade $connection_upgrade {
              default "upgrade";
              ""      "";
          }

          server {
              listen 80;

              gzip on;
              gzip_comp_level 4;
              gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

              if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
                  set $year $1;
                  set $month $2;
                  set $day $3;
                  set $hour $4;
              }

              access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
              access_log /var/log/nginx/access.log timed_combined;

              location / {
                  proxy_pass         http://docker;
                  proxy_http_version 1.1;
                  proxy_set_header   Connection      $connection_upgrade;
                  proxy_set_header   Upgrade         $http_upgrade;
                  proxy_set_header   Host            $host;
                  proxy_set_header   X-Real-IP       $remote_addr;
                  proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
              }
          }
      }
In the EB Docker platform, the server block of the Nginx config lives in another file:
/etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf
server {
    listen 80;
    access_log /var/log/nginx/access.log;
}
The entry config nginx.conf contains:
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
So adding the following customized config will only lead to an empty log file: the log_format and access_log directives land at the http level, while the server block included from sites-enabled declares its own access_log, which takes precedence for the requests it handles.
files:
  /etc/nginx/conf.d/01_proxy.conf:
    owner: root
    group: root
    content: |
      log_format my_log_format '$remote_addr - $remote_user [$time_local] "$request" '
                               '$status $body_bytes_sent "$http_referer" '
                               '"$http_user_agent" - "$http_x_my_header" - "$http_x-my-header"';
      access_log /var/log/nginx/new_log.log my_log_format;
In effect, nginx sees:
log_format ...
access_log ...
server {
    ...
}
I have written up the details on my GitHub.

Drupal 7: How to create one node page for multiple content types?

There are 8 content types on my website, and 4 of them have the same structure; the only difference is their name. I want to create a node page for them, but I guess it is inefficient to create one .tpl.php file for each one. I use the following method to create a node page for a single content type:
create a page template and rename it to page--node--Machine-Name-of-ContentType.tpl.php
add this function to template.php
function ThemeName_preprocess_page(&$variables) {
  if (isset($variables['node'])) {
    $suggest = "page__node__{$variables['node']->type}";
    $variables['theme_hook_suggestions'][] = $suggest;
  }
}
Is there any way to create one node page for multiple content types?
function theme_preprocess_page(&$vars) {
  if (isset($vars['node']->type)) {
    switch ($vars['node']->type) {
      case 'news':
      case 'blog':
      case 'event':
      case 'page':
        $vars['theme_hook_suggestion'] = 'page__alt';
        break;
    }
  }
}
This will use the page--alt.tpl.php file.
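If the list of shared content types may grow, a variant that keeps them in one array can read better; a sketch, with THEMENAME standing in for your theme's machine name:

function THEMENAME_preprocess_page(&$vars) {
  // Content types that should all share page--alt.tpl.php.
  $shared_types = array('news', 'blog', 'event', 'page');
  if (isset($vars['node']->type) && in_array($vars['node']->type, $shared_types, TRUE)) {
    $vars['theme_hook_suggestion'] = 'page__alt';
  }
}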

Mongoengine: delete file from GridFS

I am using mongoengine to insert images into MongoDB's GridFS.
Inserting works fine, but now I want to delete, and I can't get it to work.
I am using mongoengine 0.8.2 and doing this:
class Animal(Document):
    genus = StringField()
    family = StringField()
    photo = FileField()

marmot = Animal(genus='Marmota')
marmot.photo.delete()
But it doesn't delete anything, nor does it raise an error.
What am I doing wrong? Can someone help me?
I managed to delete it like this:
marmot = Animal.objects.get(id='51c80fb28774a715dc0481ae')
marmot.photo.delete()
The issue is that I'm doing my upload to GridFS with the following code:
if request.method == 'POST':
    my_painting = Movie.objects.get(id=id)

    files = []
    for f in request.FILES.getlist('file'):
        mf = mongoengine.fields.GridFSProxy()
        mf.put(f, filename=f.name, legend='Oi')
        files.append(mf)
        print files
    my_painting.MovieCover = files
    my_painting.save()
It inserts fine.
But when I try to delete using the same approach as above, it gives me the following error:
'BaseList' object has no attribute 'delete'
Can someone help me?
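The MovieCover field holds a BaseList, mongoengine's list wrapper, which has no delete() of its own; delete() lives on each GridFSProxy inside it. A sketch of removing the whole list, assuming MovieCover is populated as in the upload code above:

my_painting = Movie.objects.get(id=id)
for cover in my_painting.MovieCover:
    cover.delete()            # remove each file from GridFS
my_painting.MovieCover = []   # drop the now-dangling references
my_painting.save()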

Django's Forbidden (403) response when session is expired: how to change it to Unauthorized (401)

I am trying to upload a file in an application. I empty my browsing data or somehow end my session, and then I hit upload. I select a file from my filesystem and get a Forbidden (403) error from Django's server. This code in csrf.py gets executed:
if not constant_time_compare(request_csrf_token, csrf_token):
    logger.warning('Forbidden (%s): %s',
                   REASON_BAD_TOKEN, request.path,
                   extra={
                       'status_code': 403,
                       'request': request,
                   }
                   )
    return self._reject(request, REASON_BAD_TOKEN)
Now, given that I don't want to change Django's code, how do I present a 401 to the user instead of a 403? I don't want to capture the 403 from the server and change it to a 401 in my JavaScript. Any other solutions? Thanks.
The CSRF_FAILURE_VIEW setting allows you to write your own view for CSRF failures. You could write one that returns HTTP status 401.
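A minimal sketch (the module path and response body are placeholders, not part of Django):

# myapp/views.py
from django.http import HttpResponse

def csrf_failure(request, reason=""):
    # Django calls this instead of rendering its default 403 CSRF page.
    response = HttpResponse('Unauthorized: %s' % reason)
    response.status_code = 401
    return response

# settings.py
CSRF_FAILURE_VIEW = 'myapp.views.csrf_failure'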