My team of developers and I have Vagrant machines running on our Windows 10 computers, with local file systems that contain the scripts to run a development version of our web application.
We connect to an AWS RDS database, but the connection to it is extremely slow.
If we run SQL queries from our database IDE (dbForge), they run fast.
Is there anything we can do to our Vagrant boxes or setup that would speed this connection up?
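A quick way to isolate where the latency comes from is to time the same trivial query from inside the guest and then from the Windows host and compare (a sketch; the endpoint, user, and MySQL-family engine are placeholders, not details from our setup):
# Run inside the Vagrant guest, then repeat from the Windows host.
# Endpoint and user are hypothetical; assumes a MySQL-family RDS instance.
time mysql -h mydb.example.us-east-1.rds.amazonaws.com -u appuser -p -e 'SELECT 1;'
If the guest is much slower than the host, the problem is likely in the VM networking rather than in RDS itself.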
Here are the contents of our Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box = "hcp" config.vm.network "private_network", ip:
"10.23.45.30" config.vm.synced_folder "etc/", "/etc/hcp"
config.vm.synced_folder "html/", "/var/www/html"
config.vm.synced_folder "html_data/", "/var/www/html/_data"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
#config.vm.box_check_update = false
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# NOTE: This will enable public access to the opened port
# config.vm.network "forwarded_port", guest: 80, host:8080
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine and only allow access
# via 127.0.0.1 to disable public access
# config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
# config.vm.network "public_network"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
#
# Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
#
# Customize the amount of memory on the VM:
# vb.memory = "1024"
# end
#
# View the documentation for the provider you are using for more
# information on available options.
# Enable provisioning with a shell script. Additional provisioners such as
# Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
# documentation for more information about their specific syntax and use.
# config.vm.provision "shell", inline:<<-SHELL
# apt-get update
# apt-get install -y apache2
#
# SHELL
end
I have a simple backend service that I just deployed with Copilot.
However, I don't know where to access it.
According to the AWS console it's running and active; I can even see in the logs that it has started.
My manifest:
# The manifest for the "user-service" service.
# Read the full specification for the "Backend Service" type at:
# https://aws.github.io/copilot-cli/docs/manifest/backend-service/
# Your service name will be used in naming your resources like log groups, ECS services, etc.
name: user-service
type: Backend Service
# Your service does not allow any traffic.
# Configuration for your containers and service.
image:
  # Docker build arguments. For additional overrides: https://aws.github.io/copilot-cli/docs/manifest/backend-service/#image-build
  build: ./Dockerfile
  port: 9000
cpu: 256 # Number of CPU units for the task.
memory: 512 # Amount of memory in MiB used by the task.
count: 1 # Number of tasks that should be running in your service.
# Optional fields for more advanced use-cases.
#
variables: # Pass environment variables as key value pairs.
  SERVER_PORT: 9000
  NODE_ENV: test
secrets: # Pass secrets from AWS Systems Manager (SSM) Parameter Store.
  ACCESS_TOKEN_SECRET: ACCESS_TOKEN_SECRET
  REFRESH_TOKEN_SECRET: REFRESH_TOKEN_SECRET
  MONGODB_URL: MONGODB_URL
# You can override any of the values defined above by environment.
environments:
  test:
    variables:
      NODE_ENV: test
    # count: 2 # Number of tasks to run for the "test" environment.
My Dockerfile:
# Check out https://hub.docker.com/_/node to select a new base image
FROM node:lts-buster-slim
# Set to a non-root built-in user `node`
USER node
# Create app directory (with user `node`)
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY --chown=node package*.json ./
RUN npm install
# Bundle app source code
COPY --chown=node . .
RUN npm run build
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE 9000
CMD [ "node", "." ]
This works fine locally with docker-compose. But where can I find the URL of the deployed service? I checked the ECS console and the task has a public IP, but I can't connect to that.
What's missing here?
Never mind, my bad. Backend services are not supposed to be reachable via the internet. They expose endpoints but should talk to each other (or to the frontend) via service discovery.
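For example, another service in the same Copilot environment can reach it through the service-discovery endpoint that Copilot injects (a sketch; the /health route is a made-up example, not part of the service above):
# Run from a container of another service in the same Copilot environment.
# COPILOT_SERVICE_DISCOVERY_ENDPOINT is set by Copilot; /health is hypothetical.
curl "http://user-service.${COPILOT_SERVICE_DISCOVERY_ENDPOINT}:9000/health"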
I am trying to set up a Hyperledger Fabric blockchain network using Amazon Managed Blockchain, following this guide. In step 6, to create the channel, I executed the following command:
docker exec cli peer channel create -c hrschannel -f /opt/home/hrschannel.pb -o orderer.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30001 --cafile /opt/home/managedblockchain-tls-chain.pem --tls
But I am getting the following error:
Error: failed to create deliver client: orderer client failed to connect to orderer.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30001: failed to create new connection: context deadline exceeded
Help me to fix this issue.
Edited:
I asked the same question on Reddit. One user replied that he added a listenAddress setting in the configtx.yaml file, but he did not give clear information about which listenAddress or where in configtx.yaml to add it. Here is my configtx.yaml file.
################################################################################
#
# Section: Organizations
#
# - This section defines the different organizational identities which will
# be referenced later in the configuration.
#
################################################################################
Organizations:
    - &Org1
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: m-CUB6HI
        # ID to load the MSP definition as
        ID: m-B6HI
        MSPDir: /opt/home/admin-msp
        # AnchorPeers defines the location of peers which can be used
        # for cross org gossip communication. Note, this value is only
        # encoded in the genesis block in the Application section context
        AnchorPeers:
            - Host:
              Port:
################################################################################
#
# SECTION: Application
#
# - This section defines the values to encode into a config transaction or
# genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults
    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:
################################################################################
#
# Profile
#
# - Different configuration profiles may be encoded here to be specified
# as parameters to the configtxgen tool
#
################################################################################
Profiles:
    OneOrgChannel:
        Consortium: AWSSystemConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
Help me to fix this issue.
One must check whether the peer container is able to communicate with the orderer container; curl <orderer-endpoint> <port> can be used to check the connection. If the peer is unable to communicate, either the orderer container is down or the two containers are in different security groups.
Update:
As the OP mentioned in the comments, changing the port helped resolve the issue, so it is worth a try.
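A rough sketch of that connectivity check, reusing the endpoint from the question (curl's telnet scheme and nc are two ways to probe a plain TCP port):
# From the cli container: probe the orderer port. A hang or timeout points to
# security groups or a down orderer rather than a Fabric misconfiguration.
docker exec cli curl -v telnet://orderer.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30001
# Alternative, if nc is available in the container:
docker exec cli nc -zv orderer.n-zzzz.managedblockchain.us-east-1.amazonaws.com 30001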
Solution
Always make sure you reserve your IPs when using a Static IP
Versions
VirtualBox Version: 6.0.0 (I think)
Vagrant Version: 2.2.3
CentosBox: "centos/7"
Nginx Version: 1.16.1
uWSGI Version: 2.0.18
Django Version: 2.2.1
Background
I have two vagrant boxes running, a test and a production. The only difference is the IP and core count. I've set up both so I can SSH directly into the boxes, instead of having to SSH into the host machine and then run 'vagrant ssh'.
General Issue
The production version will randomly boot me out of the SSH session (Connection reset by IP port 22) and then I'll get Connection Refused. If I SSH into the host machine and then run 'vagrant ssh', I can still get in and everything seems to be fine; I can even still ping other computers on the network. But I can't access it from outside the host, and this goes for the nginx server as well ("IP refused to connect." in Chrome).
The issue will occasionally fix itself in a couple of minutes, but the majority of the time it requires a 'vagrant destroy' and 'vagrant up --provision' to recreate the box. I also occasionally get booted out of the host machine as well as the test box, but I can still access both externally afterwards (even the nginx server on test). I'm working over a VPN, and I occasionally get booted out of that as well, but I can reconnect when I notice.
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Please don't change it unless you know what you're doing.
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.hostname = "DjangoProduction"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
config.vm.network "public_network", ip: "IP"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
config.vm.synced_folder "./", "D:/abcd", type: "sshfs", group:'vagrant', owner:'vagrant'
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
config.vm.provider "virtualbox" do |v|
v.name = "DjangoProduction"
# test has these two commented out
v.memory = 6000
v.cpus = 4
end
#
# View the documentation for the provider you are using for more
# information on available options.
## Keys
### For SSH directly into the Box
# Work Laptop Key
config.vm.provision "file", source: ".provision/keys/work.pub", destination: "~/.ssh/work.pub"
config.vm.provision "shell", inline: "cat ~vagrant/.ssh/work.pub >> ~vagrant/.ssh/authorized_keys"
# Personal Laptop Key
config.vm.provision "file", source: ".provision/keys/msi.pub", destination: "~/.ssh/msi.pub"
config.vm.provision "shell", inline: "cat ~vagrant/.ssh/msi.pub >> ~vagrant/.ssh/authorized_keys"
##
required_plugins = %w( vagrant-sshfs )
required_plugins.each do |plugin|
exec "vagrant plugin install #{plugin};vagrant #{ARGV.join(" ")}" unless Vagrant.has_plugin? plugin || ARGV[0] == 'plugin'
end
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
config.vm.provision :shell, path: ".provision/boot.sh"
end
boot.sh
# networking
sudo yum -y install net-tools
ifconfig eth1 IP netmask 255.255.252.0
route add -net 10.1.0.0 netmask 255.255.252.0 dev eth1
route add default gw 10.1.0.1
# I manually set the gateway so It can be accessed through VPN
## install, reqs + drop things to places - gonna leave all that out
Error messages
Django
This issue started popping up earlier this week, with Django sending me error emails. It's always random URLs; there's no consistency.
OperationalError at /
(2003, 'Can\'t connect to MySQL server on \'external-ip\' (110 "Connection timed out")')
I used to get this email once every other day and paid it no attention, but currently it's sending me at least 20 a day, and the site is almost unusable: it's either really slow or I get Chrome errors ('ERR_CONNECTION_TIMED_OUT', 'ERR_CONNECTION_REFUSED', or 'ERR_CONNECTION_RESET'). It will be fine for an hour and then everything hits the fan.
I originally thought it was an issue with the DB, uWSGI, or Django, but working with it yesterday I realized there was a correlation between the timeouts and getting kicked out of SSH.
Nginx server settings (I haven't changed nginx.conf)
upstream django {
server unix:///vagrant/abcd.sock;
}
server{
listen 8080;
return 301 https://$host$request_uri;
}
server{
charset utf-8;
listen 443 ssl;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
location / {
uwsgi_pass django;
include /vagrant/project/uwsgi_params;
uwsgi_read_timeout 3600;
uwsgi_ignore_client_abort on;
}
location /static {
alias /vagrant/static;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /vagrant/templates/core;
}
}
uWSGI command used
uwsgi --socket abcd.sock --module project.wsgi --chmod-socket=664 --master --processes 8 --threads 4 --buffer-size=65535 --lazy
Nginx Error Logs
Nothing.
Messages file
It only shows the '(110 "Connection timed out")' dump when it happens.
Can you test the behaviour while commenting out the "config.vm.synced_folder..." line?
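That is, something like this in the Vagrantfile above (a sketch):
# Temporarily disable the sshfs synced folder to rule it out:
# config.vm.synced_folder "./", "D:/abcd", type: "sshfs", group:'vagrant', owner:'vagrant'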
I have created an instance of PostgreSQL running in an Ubuntu/Bionic box in Vagrant/VirtualBox that will be used by Django in my dev environment. I wanted to test my ability to connect to it with either the terminal or pgAdmin before connecting with Django, just to be sure it was working on that end first; the idea being that later Django debugging would be easier if I were assured the connection works. But I've had no success.
I have tried editing the configuration files that many posts suggest, with no effect. I can, however, ping the box via the IP assigned in the Vagrantfile with no issue, but not when specifying port 5432 with ping 10.1.1.1:5432. I can also use psql from within the box, so it's running.
I have made sure to enable ufw on the VM, created a rule to allow port 5432, and ensured that it took using sudo ufw status. I have also confirmed that I'm editing the correct files using the show command within psql.
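For reference, the ufw steps described above were roughly the following (run inside the guest):
# Allow PostgreSQL's port, enable the firewall, and confirm the rule took.
sudo ufw allow 5432/tcp
sudo ufw enable
sudo ufw status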
Here are the relevant configs as they currently are:
Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.hostname = "hg-site-db"
config.vm.provider "virtualbox" do |v|
v.memory = 2048
v.cpus = 1
end
config.vm.box = "ubuntu/bionic64"
config.vm.network "forwarded_port", host_ip: "127.0.0.1", guest: 5432, host: 5432
config.vm.network "public_network", ip: "10.1.1.1"
config.vm.provision "shell", inline: <<-SHELL
# Update and upgrade the server packages.
sudo apt-get update
sudo apt-get -y upgrade
# Install PostgreSQL
sudo apt-get install -y postgresql postgresql-contrib
# Set Ubuntu Language
sudo locale-gen en_US.UTF-8
SHELL
end
/etc/postgresql/10/main/postgresql.conf:
listen_addresses = '*'
/etc/postgresql/10/main/pg_hba.conf - I am aware this is insecure, but I was just trying to find out why it was not working, with plans to go back and correct this:
host all all 0.0.0.0/0 trust
As we discussed in comments, you should remove host_ip from your forwarded port definition and just leave the guest and host ports.
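That is, a sketch of the suggested change:
# Forwarded port without host_ip, so the guest's 5432 is reachable on the host:
config.vm.network "forwarded_port", guest: 5432, host: 5432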
I am trying to implement encryption for a Tomcat server on AWS Elastic Beanstalk.
I follow this advice, and in the .ebextensions directory I add the following files:
https-instance.config, which contains the certificate file and private key contents.
ssl.conf, which contains:
SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"
SSLCertificateChainFile "/etc/pki/tls/certs/gd_bundle.crt"
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
but on the server, /etc/pki/tls/certs did not show what I expected (screenshot omitted).
(In ssl.conf I tried changing "/etc/pki/tls/certs/gd_bundle.crt" to "/etc/pki/tls/certs/ca-bundle.crt", but it made no difference.)
https-instance-single.config, which contains:
Resources:
sslSecurityGroupIngress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: {"Fn::GetAtt" : ["AWSEBSecurityGroup", "GroupId"]}
IpProtocol: tcp
ToPort: 443
FromPort: 443
CidrIp: 0.0.0.0/0
I then try to deploy my war file, but get:
Error
[Instance: i-04a9fa826b9d8e0a3] Command failed on instance. Return code: 1 Output: httpd: no process found. container_command killhttpd in .ebextensions/https-instance.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
eb-activity.log
[Application update thewhozoo-1.0.0.39#121/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_theWhoZoo_java/Command killhttpd] : Activity execution failed, because: httpd: no process found (ElasticBeanstalk::ExternalInvocationError)
As you can see, it's complaining that in https-instance.config there is "httpd: no process found", which suggests that there is a problem with:
...
-----END RSA PRIVATE KEY-----
container_commands:
killhttpd:
command: "killall httpd"
waitforhttpddeath:
command: "sleep 3"
i.e. there is no httpd process.
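A sketch of how the kill command could be made tolerant of a missing httpd process, either via the shell or the documented ignoreErrors flag (this is illustrative, not my deployed config):
container_commands:
  killhttpd:
    command: "killall httpd || true"   # succeed even if httpd is not running
    # ignoreErrors: true               # alternative: let Beanstalk ignore a non-zero exit
  waitforhttpddeath:
    command: "sleep 3"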
I then read that I need to set up a proxy server on Tomcat, so I follow these instructions, Extending the Default nginx Configuration.
So I add the following:
.ebextensions/nginx/conf.d/https.conf
# HTTPS server
server {
listen 443;
server_name localhost;
ssl on;
ssl_certificate /etc/pki/tls/certs/server.crt;
ssl_certificate_key /etc/pki/tls/certs/server.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_prefer_server_ciphers on;
location / {
proxy_pass http://localhost:80;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
But I still get the same above error (no httpd process).
More info
I don't really know what this means, but it looks like something is running on the server (screenshots omitted). On the server, /etc/httpd/conf.d/ssl.conf contains:
#
# This is the Apache server configuration file providing SSL support.
# It contains the configuration directives to instruct the server how to
# serve pages over an https connection. For detailing information about these
# directives see <URL:http://httpd.apache.org/docs/2.2/mod/mod_ssl.html>
#
# Do NOT simply read the instructions in here without understanding
# what they do. They're here only as hints or reminders. If you are unsure
# consult the online docs. You have been warned.
#
LoadModule ssl_module modules/mod_ssl.so
#
# When we also provide SSL we have to listen to the
# the HTTPS port in addition.
#
Listen 443
##
## SSL Global Context
##
## All SSL configuration in this context applies both to
## the main server and all SSL-enabled virtual hosts.
##
# Pass Phrase Dialog:
# Configure the pass phrase gathering process.
# The filtering dialog program (`builtin' is a internal
# terminal dialog) has to provide the pass phrase on stdout.
SSLPassPhraseDialog builtin
# Inter-Process Session Cache:
# Configure the SSL Session Cache: First the mechanism
# to use and second the expiring timeout (in seconds).
SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000)
SSLSessionCacheTimeout 300
# Semaphore:
# Configure the path to the mutual exclusion semaphore the
# SSL engine uses internally for inter-process synchronization.
SSLMutex default
# Pseudo Random Number Generator (PRNG):
# Configure one or more sources to seed the PRNG of the
# SSL library. The seed data should be of good random quality.
# WARNING! On some platforms /dev/random blocks if not enough entropy
# is available. This means you then cannot use the /dev/random device
# because it would lead to very long connection times (as long as
# it requires to make more entropy available). But usually those
# platforms additionally provide a /dev/urandom device which doesn't
# block. So, if available, use this one instead. Read the mod_ssl User
# Manual for more details.
SSLRandomSeed startup file:/dev/urandom 256
SSLRandomSeed connect builtin
#SSLRandomSeed startup file:/dev/random 512
#SSLRandomSeed connect file:/dev/random 512
#SSLRandomSeed connect file:/dev/urandom 512
#
# Use "SSLCryptoDevice" to enable any supported hardware
# accelerators. Use "openssl engine -v" to list supported
# engine names. NOTE: If you enable an accelerator and the
# server does not start, consult the error logs and ensure
# your accelerator is functioning properly.
#
SSLCryptoDevice builtin
#SSLCryptoDevice ubsec
##
## SSL Virtual Host Context
##
<VirtualHost _default_:443>
# General setup for the virtual host, inherited from global configuration
#DocumentRoot "/var/www/html"
#ServerName www.example.com:443
# Use separate log files for the SSL virtual host; note that LogLevel
# is not inherited from httpd.conf.
ErrorLog logs/ssl_error_log
TransferLog logs/ssl_access_log
LogLevel warn
# SSL Engine Switch:
# Enable/Disable SSL for this virtual host.
SSLEngine on
# SSL Protocol support:
# List the enable protocol levels with which clients will be able to
# connect. Disable SSLv3 access by default:
SSLProtocol all -SSLv3
SSLProxyProtocol all -SSLv3
# SSL Cipher Suite:
# List the ciphers that the client is permitted to negotiate.
# See the mod_ssl documentation for a complete list.
SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
# Server Certificate:
# Point SSLCertificateFile at a PEM encoded certificate. If
# the certificate is encrypted, then you will be prompted for a
# pass phrase. Note that a kill -HUP will prompt again. A new
# certificate can be generated using the genkey(1) command.
SSLCertificateFile /etc/pki/tls/certs/localhost.crt
# Server Private Key:
# If the key is not combined with the certificate, use this
# directive to point at the key file. Keep in mind that if
# you've both a RSA and a DSA private key you can configure
# both in parallel (to also allow the use of DSA ciphers, etc.)
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
# Server Certificate Chain:
# Point SSLCertificateChainFile at a file containing the
# concatenation of PEM encoded CA certificates which form the
# certificate chain for the server certificate. Alternatively
# the referenced file can be the same as SSLCertificateFile
# when the CA certificates are directly appended to the server
# certificate for convinience.
#SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt
# Certificate Authority (CA):
# Set the CA certificate verification path where to find CA
# certificates for client authentication or alternatively one
# huge file containing all of them (file must be PEM encoded)
#SSLCACertificateFile /etc/pki/tls/certs/ca-bundle.crt
# Client Authentication (Type):
# Client certificate verification type and depth. Types are
# none, optional, require and optional_no_ca. Depth is a
# number which specifies how deeply to verify the certificate
# issuer chain before deciding the certificate is not valid.
#SSLVerifyClient require
#SSLVerifyDepth 10
# Access Control:
# With SSLRequire you can do per-directory access control based
# on arbitrary complex boolean expressions containing server
# variable checks and other lookup directives. The syntax is a
# mixture between C and Perl. See the mod_ssl documentation
# for more details.
#<Location />
#SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
# and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
# and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
# and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
# and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
# or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
#</Location>
# SSL Engine Options:
# Set various options for the SSL engine.
# o FakeBasicAuth:
# Translate the client X.509 into a Basic Authorisation. This means that
# the standard Auth/DBMAuth methods can be used for access control. The
# user name is the `one line' version of the client's X.509 certificate.
# Note that no password is obtained from the user. Every entry in the user
# file needs this password: `xxj31ZMTZzkVA'.
# o ExportCertData:
# This exports two additional environment variables: SSL_CLIENT_CERT and
# SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
# server (always existing) and the client (only existing when client
# authentication is used). This can be used to import the certificates
# into CGI scripts.
# o StdEnvVars:
# This exports the standard SSL/TLS related `SSL_*' environment variables.
# Per default this exportation is switched off for performance reasons,
# because the extraction step is an expensive operation and is usually
# useless for serving static content. So one usually enables the
# exportation for CGI and SSI requests only.
# o StrictRequire:
# This denies access when "SSLRequireSSL" or "SSLRequire" applied even
# under a "Satisfy any" situation, i.e. when it applies access is denied
# and no other module can change it.
# o OptRenegotiate:
# This enables optimized SSL connection renegotiation handling when SSL
# directives are used in per-directory context.
#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
<Files ~ "\.(cgi|shtml|phtml|php3?)$">
SSLOptions +StdEnvVars
</Files>
<Directory "/var/www/cgi-bin">
SSLOptions +StdEnvVars
</Directory>
# SSL Protocol Adjustments:
# The safe and default but still SSL/TLS standard compliant shutdown
# approach is that mod_ssl sends the close notify alert but doesn't wait for
# the close notify alert from client. When you need a different shutdown
# approach you can use one of the following variables:
# o ssl-unclean-shutdown:
# This forces an unclean shutdown when the connection is closed, i.e. no
# SSL close notify alert is send or allowed to received. This violates
# the SSL/TLS standard but is needed for some brain-dead browsers. Use
# this when you receive I/O errors because of the standard approach where
# mod_ssl sends the close notify alert.
# o ssl-accurate-shutdown:
# This forces an accurate shutdown when the connection is closed, i.e. a
# SSL close notify alert is send and mod_ssl waits for the close notify
# alert of the client. This is 100% SSL/TLS standard compliant, but in
# practice often causes hanging connections with brain-dead browsers. Use
# this only for browsers where you know that their SSL implementation
# works correctly.
# Notice: Most problems of broken clients are also related to the HTTP
# keep-alive facility, so you usually additionally want to disable
# keep-alive for those clients, too. Use variable "nokeepalive" for this.
# Similarly, one has to force some clients to use HTTP/1.0 to workaround
# their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
# "force-response-1.0" for this.
SetEnvIf User-Agent ".*MSIE.*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
# Per-Server Logging:
# The home of a custom SSL log file. Use this when you want a
# compact non-error SSL logfile on a virtual host basis.
CustomLog logs/ssl_request_log \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</VirtualHost>
(Unfortunately I cannot deploy to the server and I am under time pressure. Should I delete the entire environment and start again on AWS?)
UPDATE
Following the advice below, I add Elastic Load Balancing to my server and set up HTTPS with a certificate issued by Amazon. I test my app, and the URL is accessible under HTTPS... yay!
However, when I access it, the browser shows it as "Not Secure" (screenshot omitted).
Why is it "Not Secure"?
If you're using Elastic Beanstalk, I'd strongly recommend using the Elastic Load Balancer to take care of the encryption: use plain HTTP between the VM and the ELB, and HTTPS from the ELB to the client.
You can generate the certificates for free on AWS and upload them to the ELB.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html
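A sketch of what that looks like in .ebextensions, assuming a classic load balancer and a certificate issued in ACM (the file name and certificate ARN are placeholders):
# .ebextensions/https-listener.config (hypothetical file name)
option_settings:
  aws:elb:listener:443:
    ListenerProtocol: HTTPS
    SSLCertificateId: arn:aws:acm:us-east-1:111122223333:certificate/example-id
    InstancePort: 80
    InstanceProtocol: HTTP
With this, the ELB terminates TLS and forwards plain HTTP to the instance, so no httpd/nginx certificate setup is needed on the VM at all.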