IIS 7.5 + PHP + httpOnlyCookies + requireSSL - cookies

How do I enable httpOnlyCookies and requireSSL for all cookies in IIS 7.5?
I have tried adding
<httpCookies httpOnlyCookies="true" requireSSL="true" />
within the
<system.webServer>
section, but it shows a 500 Internal Server Error.

Edit php.ini and find the line:
session.cookie_httponly =
Set this value to 1 (rather than the literal true, which I have had issues with for some reason) and restart IIS once you're done editing php.ini.
As for the 500 error: <httpCookies> is an ASP.NET configuration element that belongs under <system.web>, not <system.webServer>, which is why IIS rejects your config. For PHP, session cookie flags are controlled from php.ini instead.
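The relevant php.ini lines can be sketched as follows; session.cookie_secure is the PHP-side counterpart of requireSSL (both directive names are from the stock php.ini):

```ini
; Mark the PHP session cookie HttpOnly so client-side scripts cannot read it
session.cookie_httponly = 1
; Only send the session cookie over HTTPS (the requireSSL counterpart)
session.cookie_secure = 1
```

After editing, restart IIS (or recycle the application pool) so PHP picks up the new settings.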


How to properly use/configure YT gem?

I'm trying to reproduce this tutorial: YouTube API, Version 3 on Rails,
in order to apply it to my own project, but I've been having a hard time with it for a few days.
At first, I had this error :
A request to YouTube API caused an unexpected server error: To display
more verbose errors, change the configuration of Yt with: Yt.configure
do |config| config.log_level = :debug end
I updated RVM and Ruby, and now I'm getting this error:
Yt::Errors::Forbidden in VideosController#create A request to YouTube
API was considered forbidden by the server: To display more verbose
errors, change the configuration of Yt with: Yt.configure do |config|
config.log_level = :debug end
I have already:
updated Ruby and RVM
tried different versions of the yt gem
tried this: OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE
tried this: config.force_ssl = false
Running this:
curl -X GET -H "content-length: 0" -H "user-agent: Yt::Request (gzip)" -H "host: www.googleapis.com" "https://www.googleapis.com/youtube/v3/videos?id=wuZfOIWwM_Y&part=snippet"
returns this:
forbidden (403) forbidden Access forbidden. The request may not be properly authorized.
Using Rails 4.2.4, Ruby 2.3.0.
Source code at: https://github.com/NeimadTL/YT_Sample_App
Any help or suggestions would be sincerely appreciated.
Answer: The request you are making is not authorized. (Update: change key= to access_token=.)
Possible cause:
https://www.youtube.com/annotations_invideo?key=
You are trying to run an annotations_invideo request (which I can't actually find anywhere in the documentation) and you are applying an API key to it. API keys only work with public data. Either annotations_invideo is not a valid request to the API, or it is something you need to be authenticated for. If you need to be authenticated, you will need an access token, and you should pass access_token= instead of key=.
Where exactly did you find annotations_invideo?
Update:
Luckily for me, it has been under an hour since you posted your question, so I was able to take
https://www.youtube.com/annotations_invideo?access_token=AIzaSyBSvIOM0EGX1tcrf5IAlYJuH_ttqVgTO4Q&video_id=BPNYv0vd78A
and paste it into a web browser, and it returned data:
<document>
<annotations>
<annotation author="" id="annotation_1585555999" log_data="ei=B2k9WIOCB8X0dNKokKAG&a-id=annotation_1585555999&xble=1&a-type=4&a-v=BPNYv0vd78A" style="title" type="text">
<TEXT>Hello, world!</TEXT>
<segment>
<movingRegion type="rect">
<rectRegion d="0" h="25.2779998779" t="0:00.000" w="75.0" x="13.1540002823" y="67.3239974976"/>
<rectRegion d="0" h="25.2779998779" t="0:02.089" w="75.0" x="13.1540002823" y="67.3239974976"/>
</movingRegion>
</segment>
<appearance bgAlpha="0.25" bgColor="0" borderAlpha="0.10000000149" effects="" fgColor="16777215" fontWeight="bold" highlightFontColor="16777215" textSize="21.6642"/>
</annotation>
<annotation id="channel:563d3ce4-0000-20cc-8fd5-001a11463304" style="playlist" type="promotion" log_data="ei=B2k9WIOCB8X0dNKokKAG&a-type=12&a-ch=UCwCnUcLcb9-eSrHa_RQGkQQ&xble=1&a-id=563d3ce4-0000-20cc-8fd5-001a11463304&l-class=2&link-id=PLuW4g7xujBWfU26JUTW1DGs3hk4LD5KaL&a-v=BPNYv0vd78A">
<data>
{"playlist_length":"200","session_data":{"itct":"CAIQwTcY____________ASITCMOh497wzdACFUU6HQodUhQEZCj4HTICaXZIwN_33vSX1vkE","annotation_id":"563d3ce4-0000-20cc-8fd5-001a11463304","feature":"iv","ei":"B2k9WIOCB8X0dNKokKAG","src_vid":"BPNYv0vd78A"},"is_mobile":false,"text_line_2":"Adorable Kids","text_line_1":"Check this playlist","image_url":"https:\/\/i.ytimg.com\/vi\/yDrLVqRHAsw\/mqdefault.jpg","start_ms":1000,"collapse_delay_ms":86400000,"end_ms":3000}
</data>
<segment/>
<action trigger="click" type="openUrl">
<url type="hyperlink" target="new" value="https://www.youtube.com/watch?v=yDrLVqRHAsw&list=PLuW4g7xujBWfU26JUTW1DGs3hk4LD5KaL"/>
</action>
</annotation>
</annotations>
</document>
Note: I wondered why this was returning XML and not JSON, which had me thinking this is an older API. Found it: you are using the YouTube API v2, which is deprecated and should already have been shut down.
https://youtube-eng.googleblog.com/2014/09/have-you-migrated-to-data-api-v3-yet.html
You should drop this and move to the YouTube API v3.
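With v3 and the yt gem, the setup goes through Yt.configure; a minimal initializer sketch (the file path and the ENV variable name are hypothetical; api_key and log_level are the gem's documented options):

```ruby
# config/initializers/yt.rb -- hypothetical sketch, not the asker's actual file
Yt.configure do |config|
  config.api_key = ENV['YOUTUBE_API_KEY'] # an API key grants access to public data only
  config.log_level = :debug               # verbose errors, as the error message suggests
end
```

For anything beyond public data (acting on behalf of a user), you would need an OAuth access token rather than an API key.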

Django deploy on Heroku gives site can't be reached response

I'm having a tough time deploying my Django app (v1.9) to Heroku (psql 9.5, cedar-14 stack).
Here's how I arrived here: I had tremendous migration issues that resulted in "django.db.utils.ProgrammingError: relation already exists" and "column "name" of relation "django_content_type" does not exist" errors. Figuring that there were old, mishandled migrations imported into the django_migrations table, I decided to push a fresh, local db up to an empty Heroku app with:
PGUSER=dbnameHERE PGPASSWORD=dbpassHERE heroku pg:push localDBnameHERE DATABASE --app appnameHERE
This worked flawlessly. After that, here’s what happens when I run these commands:
When I run heroku local, my full app shows locally on 0.0.0.0:5000 (/admin works, but with CSS issues, presumably because the whitenoise module I imported does not deal well with production .env settings).
When I run heroku local -e .env.DEV (development .env settings) on 0.0.0.0:5000, everything, including /admin, works wonderfully.
The issue begins when gunicorn comes into the picture. When I run gunicorn config.wsgi:application, it runs, but I get a blank "This site can't be reached, localhost took too long to respond" page.
Here is the response header from the blank webpage:
HTTP/1.1 301 Moved Permanently
Connection: keep-alive
Server: gunicorn/19.4.5
Date: Tue, 07 Jun 2016 22:39:00 GMT
Transfer-Encoding: chunked
Location: https://sitename.herokuapp.com/
Content-Type: text/html; charset=utf-8
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Via: 1.1 vegur
When I run heroku run python manage.py check --deploy, I get this:
WARNINGS:
?: (security.W001) You do not have 'django.middleware.security.SecurityMiddleware' in your MIDDLEWARE_CLASSES so the SECURE_HSTS_SECONDS, SECURE_CONTENT_TYPE_NOSNIFF, SECURE_BROWSER_XSS_FILTER, and SECURE_SSL_REDIRECT settings will have no effect.
?: (security.W009) Your SECRET_KEY has less than 50 characters or less than 5 unique characters. Please generate a long and random SECRET_KEY, otherwise many of Django's security-critical features will be vulnerable to attack.
?: (security.W012) SESSION_COOKIE_SECURE is not set to True. Using a secure-only session cookie makes it more difficult for network traffic sniffers to hijack user sessions.
?: (security.W016) You have 'django.middleware.csrf.CsrfViewMiddleware' in your MIDDLEWARE_CLASSES, but you have not set CSRF_COOKIE_SECURE to True. Using a secure-only CSRF cookie makes it more difficult for network traffic sniffers to steal the CSRF token.
?: (security.W017) You have 'django.middleware.csrf.CsrfViewMiddleware' in your MIDDLEWARE_CLASSES, but you have not set CSRF_COOKIE_HTTPONLY to True. Using an HttpOnly CSRF cookie makes it more difficult for cross-site scripting attacks to steal the CSRF token.
?: (security.W018) You should not have DEBUG set to True in deployment.
?: (security.W019) You have 'django.middleware.clickjacking.XFrameOptionsMiddleware' in your MIDDLEWARE_CLASSES, but X_FRAME_OPTIONS is not set to 'DENY'. The default is 'SAMEORIGIN', but unless there is a good reason for your site to serve other parts of itself in a frame, you should change it to 'DENY'.
?: (security.W020) ALLOWED_HOSTS must not be empty in deployment.
System check identified 8 issues (0 silenced).
I get the same blank page and 301 redirect with no error code when I navigate to sitename.herokuapp.com as when running gunicorn. Any guess as to why my app throws redirects when gunicorn gets involved?
I eventually solved this issue by re-reading the Heroku docs, which state that production environments must have their config vars loaded separately, either through the dashboard interface or with the heroku config:set command from the Heroku CLI.
I had placed my config vars in $VIRTUAL_ENV/bin/postactivate, loading the different virtualenv configs after activation of the virtualenv; however, that file is not used in production.
if ENVIRONMENT == 'production':
    SECURE_BROWSER_XSS_FILTER = True
    X_FRAME_OPTIONS = 'DENY'
    SECURE_SSL_REDIRECT = True
    SECURE_HSTS_SECONDS = 3600
    SECURE_HSTS_INCLUDE_SUBDOMAINS = True
    SECURE_HSTS_PRELOAD = True
    SECURE_CONTENT_TYPE_NOSNIFF = True
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True
Without these production settings, the security issues flagged by check --deploy can occur. (The 301 itself comes from SECURE_SSL_REDIRECT = True redirecting plain-HTTP requests to HTTPS, as the Location header above shows.)
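Since Heroku injects heroku config:set values as environment variables, the fix can be sketched as reading them in settings.py (env_setting and the ENVIRONMENT variable name here are hypothetical, not the asker's actual code):

```python
import os

# Hedged sketch: Heroku exposes config vars as environment variables,
# so settings.py should read them from the environment rather than
# relying on a virtualenv postactivate hook that never runs in production.
def env_setting(name, default=None):
    """Return a config var from the environment, falling back to a default."""
    return os.environ.get(name, default)

ENVIRONMENT = env_setting('ENVIRONMENT', 'development')

if ENVIRONMENT == 'production':
    SECURE_SSL_REDIRECT = True
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True
```

Locally the default keeps you in development mode; on Heroku you would run `heroku config:set ENVIRONMENT=production` once.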

Rails 4.2 console issues - using RAILS_ENV=development

Trying to run
$ rails c RAILS_ENV=development
one warning and one error are raised, which I do not understand.
# warning :
config.eager_load is set to nil. Please update your config/environments/*.rb files accordingly:
* development - set it to false
* test - set it to false (unless you use a tool that preloads your test environment)
* production - set it to true
# error
/config/initializers/devise.rb:13:in `+': no implicit conversion of nil into String (TypeError)
However, config.eager_load is set to false in the development environment:
config/environments/development.rb
Rails.application.configure do
config.cache_classes = false
config.eager_load = false
…/…
And looking at the config/initializers/devise.rb ( line 13) I have
config/initializers/devise.rb
Devise.setup do |config|
…/…
(13) config.mailer_sender = 'no-reply#' + Rails.application.secrets.domain_name
…/…
which leads to the config/secrets.yml file:
config/secrets.yml
development:
domain_name: example.com
It's quite understandable, as running rails c (without RAILS_ENV) works:
$ rails c
development environment (Rails 4.2.3)
irb: warn: can't alias context from irb_context.
irb(main):001:0> Rails.application.secrets.domain_name
=> "example.com"
This warning is also cryptic:
irb: warn: can't alias context from irb_context
I could not find any info on it via a Google search... but at least it runs in development.
Why this warning and error when using RAILS_ENV? Any enlightenment welcome.
Too bad... I should HAVE READ the latest 4.2 docs!
I should NOT be passing RAILS_ENV as an argument at all; rails console takes the environment name directly:
$ rails console staging
Loading staging environment (Rails 4.2.3)
irb(main):001:0> exit
$ rails console development
Loading development environment (Rails 4.2.3)
irb: warn: can't alias context from irb_context.
irb(main):001:0> exit
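The failure can be reproduced in plain Ruby: with the argument form, the console takes the literal string RAILS_ENV=development as the environment name, so secrets.yml has no matching section, domain_name comes back nil, and String#+ raises exactly the TypeError from devise.rb line 13 (a sketch; the hash stands in for the parsed secrets.yml):

```ruby
# Stand-in for the parsed config/secrets.yml
secrets = { "development" => { "domain_name" => "example.com" } }

env = "RAILS_ENV=development"            # what `rails c RAILS_ENV=development` passes
domain = secrets.dig(env, "domain_name") # => nil: no such environment section

error_message = begin
  "no-reply@" + domain                   # String + nil ...
rescue TypeError => e
  e.message                              # "no implicit conversion of nil into String"
end
```

This also explains the eager_load warning: no config/environments/RAILS_ENV=development.rb exists, so that setting is nil too.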

login page doesn't work only on aws

I recently pushed my Rails 4 app to AWS. The landing page works (some other pages too), but the login and subscribe pages don't, even though they do on localhost.
I get the "We're sorry, but something went wrong" page, and the problem is I have no information in my log file,
just this line:
Started GET "/login" for 93.26.170.84 at 2015-08-09 15:10:43 +0000
for the request, and nothing else.
I am in the development environment.
What would you do to find out what the problem is?
Thanks for your time!
UPDATE:
I coded the login process myself,
and I am using puma:
config/puma.rb
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!
rackup DefaultRackup
port ENV['PORT'] || 80
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/
  # deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end
The web server:
http://52.28.211.67/
UPDATE 2:
config/environments/development.rb
...
# Show full error reports and disable caching.
config.consider_all_requests_local = true
config.action_controller.perform_caching = true
config.log_level = :debug
...
and I only have a development.log in logs/,
still with no information about the problem :>

Getting "Generator: 0 records selected for fetching" when trying to crawl a small number of websites using Nutch

I have a site that runs using moderngov.co.uk (you send them a template, which they then upload). I'm trying to crawl this site so it can be indexed by Solr and searched through a Drupal site. I can crawl the vast majority of websites out there, but for some reason I am unable to crawl this one: http://scambs.moderngov.co.uk/uuCoverPage.aspx?bcr=1
The specific error I get is this:
Injector: starting at 2013-10-17 13:32:47
Injector: crawlDb: X-X/crawldb
Injector: urlDir: urls/seed.txt
Injector: Converting injected urls to crawl db entries.
Injector: total number of urls rejected by filters: 1
Injector: total number of urls injected after normalization and filtering: 0
Injector: Merging injected urls into crawl db.
Injector: finished at 2013-10-17 13:32:50, elapsed: 00:00:02
Thu, Oct 17, 2013 1:32:50 PM : Iteration 1 of 2
Generating a new segment
Generator: starting at 2013-10-17 13:32:51
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: false
Generator: normalizing: true
Generator: topN: 50000
Generator: 0 records selected for fetching, exiting ...
I'm not sure if it has something to do with the regex patterns Nutch uses to parse HTML, whether there's a redirect causing issues, or something else entirely. Below are a few of the Nutch config files:
Here are the urlfilters: http://pastebin.com/ZqeZUJa1
sysinfo:
Windows 7 (64-bit)
Solr 3.6.2
Apache Nutch 1.7
If anyone has come across this problem before, or might know why this is happening, any help would be greatly appreciated.
Thanks
I tried that seed URL and got this error:
Denied by robots.txt: http://scambs.moderngov.co.uk/uuCoverPage.aspx?bcr=1
Looking at the robots.txt file of that site:
# Disallow all webbot searching
User-agent: *
Disallow: /
You have to set a specific user agent in Nutch and modify the website to accept crawling from your user agent.
The property to change in Nutch is in conf/nutch-site.xml:
<property>
<name>http.agent.name</name>
<value>nutch</value>
</property>
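The site side has to change too: its robots.txt currently disallows everyone, so the site owner would have to allow your agent explicitly. A hypothetical robots.txt for this (the agent name is a placeholder and must match whatever you set in http.agent.name):

```
# Allow only the named crawler; keep everyone else blocked
User-agent: mycrawler
Disallow:

User-agent: *
Disallow: /
```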
Alternatively, try this:
<property>
<name>db.fetch.schedule.class</name>
<value>org.apache.nutch.crawl.AdaptiveFetchSchedule</value>
</property>
<property>
<name>db.fetch.interval.default</name>
<value>10</value>
<description>The default number of seconds between re-fetches of a page (30 days).
</description>
</property>
<property>
<name>db.fetch.interval.max</name>
<!-- for now always re-fetch everything -->
<value>100</value>
<description>The maximum number of seconds between re-fetches of a page
(less than one day). After this period every page in the db will be re-tried, no
matter what is its status.
</description>
</property>