the server responded with a status of 403 (Forbidden) - javascript-security

Just built a Ruby on Rails site and everything looks fine and functions well from home on my own internet connection.
When browsing from a company internet connection, no JavaScript files are loaded by the browser (tested in Chrome and IE 10).
The error I get for every JS file:
Failed to load resource: the server responded with a status of 403 (Forbidden) https://stamdomain.com/javascripts/translations.js
When trying to download a JS file directly I get a security page message (from some security client installed on my machine):
To further harden our security, we are limiting FILE DOWNLOAD from uncategorized, and certain non-business websites that could potentially house virus, malwares and other cyber threats
Date: Tue, 25 Oct 2016 14:30:57 GMT
Username:
Your Source IP: 10.11.11.112
URL: GET https://stamdomain.com/javascripts/translations.js
Category: Uncategorized URLs
File Type: application/javascript
Reason: BLOCK-TYPE
Web Reputation Score: ns
Malware Category/Name:
Appliance: 10.11.123.71
I'm not sure where the problem is (client? server?), or whether there is a way to work around it and make the website accessible on these kinds of networks too.
Thanks

Thanks, Joe Clay. This turned out to be a security issue: a security client installed on the company PC.
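When all JS assets return 403 but the HTML page itself loads, a quick way to tell a filtering-proxy block from a server-side problem is to fetch one asset from each network and compare status and Content-Type: an appliance that swaps the JS body for an HTML block page changes both. A minimal sketch (the URL is the one from the error log; the helper names are my own):

```python
import urllib.error
import urllib.request

def probe(url):
    """Return (status, content_type) for url, following the same path a browser takes."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status, resp.headers.get("Content-Type", "")
    except urllib.error.HTTPError as e:
        return e.code, e.headers.get("Content-Type", "")

def looks_like_proxy_block(status, content_type):
    # A real JS asset answers with application/javascript; a filtering
    # appliance typically answers 403 with an HTML notice page instead.
    return status == 403 and content_type.startswith("text/html")

# On the company network this would likely show (403, "text/html..."):
# probe("https://stamdomain.com/javascripts/translations.js")
```

Running the same probe from both networks pins the blame: identical results point at the server, diverging results at the proxy on the company side.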

Related

django code 400, message Bad request version ('î\x9el\x00$\x13\x01\x13\x03\x13\x02À+À/̨̩À,À0À')

I was trying to implement 'Securing Django Admin login with OTP'; however, I can't log into the admin panel now. I removed the app from everywhere, but it still doesn't work. Any solution for this?
[05/Feb/2021 21:39:49] code 400, message Bad request version ('î\x9el\x00$\x13\x01\x13\x03\x13\x02À+À/̨̩À,À0À')
[05/Feb/2021 21:39:49] You're accessing the development server over HTTPS, but it only supports HTTP.
The Django development server does not support HTTPS, so on localhost just change the URL scheme to http and the error will disappear.
For example, change
https://localhost:8000
or
https://127.0.0.1:8000
to
http://localhost:8000
or
http://127.0.0.1:8000
(the server may also be reachable directly at 127.0.0.1:8000).
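Incidentally, the mojibake in the "Bad request version" message is not random: it is a slice of the browser's TLS ClientHello (sent because the URL said https) being read as text. The byte pairs \x13\x01, \x13\x03, \x13\x02, 'À+' (\xc0\x2b), 'À/' (\xc0\x2f) are TLS cipher-suite IDs. A small sketch decoding that fragment:

```python
# Fragment of the logged "request version", re-encoded to raw bytes and
# split into 16-bit values to recover the cipher-suite IDs the browser
# offered in its TLS ClientHello.
garbled = '\x13\x01\x13\x03\x13\x02\xc0+\xc0/'
data = garbled.encode('latin-1')
suites = [(data[i] << 8) | data[i + 1] for i in range(0, len(data), 2)]
print([hex(s) for s in suites])
# 0x1301 = TLS_AES_128_GCM_SHA256, 0x1303 = TLS_CHACHA20_POLY1305_SHA256,
# 0x1302 = TLS_AES_256_GCM_SHA384, 0xc02b/0xc02f = ECDHE-GCM suites
```

Seeing TLS 1.3 cipher IDs in the log is a reliable sign that an HTTPS client hit a plain-HTTP server, which is exactly what the dev server's warning says.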

Cowboy returns 400 without headers

The AWS health-check endpoint doesn't send any headers, which causes Cowboy (v1.1.2) to return 400. This is causing container restarts.
Is there any way around the issue?
Related github issue: https://github.com/phoenixframework/phoenix/issues/2437
curl request to reproduce the error:
curl http://localhost:4000/ping -H 'Host:'
Log:
[error] Cowboy returned 400 and there are no headers in the connection.
This may happen if Cowboy is unable to parse the request headers,
for example, because there are too many headers or the header name
or value are too large (such as a large cookie).
You can customize those values when configuring your http/https
server. The configuration option and default values are shown below:
protocol_options: [
  max_header_name_length: 64,
  max_header_value_length: 4096,
  max_headers: 100
]
endpoint configuration:
config :my_app, MyAppWeb.Endpoint,
  load_from_system_env: true,
  url: [host: System.get_env("MY_HOST"), port: 443],
  force_ssl: [rewrite_on: [:x_forwarded_proto]]
I ended up running the server with the following Endpoint config:
config :my_app, MyAppWeb.Endpoint,
  load_from_system_env: true,
  http: [port: 4000]
and the problem was resolved.
It probably had something to do with the fact that Cowboy was running an HTTPS server behind the ELB.
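For completeness, the ELB-style request can also be reproduced without curl. The sketch below (names are mine) sends an HTTP/1.1 request with no Host header at all; Cowboy answers such a request with the bare 400 from the log, while Python's stdlib server, used here only to keep the snippet self-contained, happens to tolerate it:

```python
import http.client
import http.server
import threading

# Throwaway local server standing in for the app under test.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.putrequest("GET", "/", skip_host=True)  # omit the Host header entirely
conn.endheaders()
resp = conn.getresponse()
print(resp.status)  # the lenient stdlib server answers anyway
server.shutdown()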

URL forbidden 403 when using a tool but fine from browser

I have some images that I need to send a HEAD request (HttpRequestMethod.HEAD) for, in order to find out some details of the image.
When I go to the image url on a browser it loads without a problem.
When I attempt to get the Header info via my code or via online tools it fails
An example URL is http://www.adorama.com/images/large/CHHB74P.JPG
As mentioned, I have used the online tool Hurl.it to try to obtain the HEAD response, but I am getting the same 403 Forbidden message that I get in my code.
I have tried adding many various headers to the Head request (User-Agent, Accept, Accept-Encoding, Accept-Language, Cache-Control, Connection, Host, Pragma, Upgrade-Insecure-Requests) but none of this seems to work.
It also fails to do a normal GET request via Hurl.it. Same 403 error.
If it is relevant, my code is a C# web service running on the AWS cloud (just in case the Adorama servers have something against AWS that I don't know about). To test this I also spun up an EC2 instance (Linux box) and ran curl, which also returned the 403 error. Running curl locally on my personal computer returns the binary image data.
And just to rule out the obvious: my code works successfully for many other websites; it is only this one where there is an issue.
Any idea what is required for me to download the image headers and not get the 403?
Same problem here. Locally it works smoothly; doing it from an AWS instance I get the very same problem.
I thought it was a DNS resolution problem (redirecting to a malfunctioning node), so I tried specifying the same IP address my client resolved to, but that didn't fix the problem.
My guess is that Akamai (the images are served by an Akamai CDN in this case) is blocking AWS. It is somewhat understandable: customers pay for CDN traffic, and by abusing it people can generate huge bills.
Connecting to www.adorama.com (www.adorama.com)|104.86.164.205|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 301
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 09:34:20 GMT
Connection: close
2016-03-23 09:34:20 ERROR 403: Forbidden.
I tried that URL from Amazon and it didn't work for me either. wget did work from other servers that weren't on Amazon EC2, however. Here is the wget output on EC2:
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:42:33-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.40.219.79
Connecting to www.adorama.com|23.40.219.79|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 299
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 08:42:33 GMT
Connection: close
2016-03-23 08:42:33 ERROR 403: Forbidden.
But from another Linux host it did work. Here is output
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:43:11-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.45.139.71
Connecting to www.adorama.com|23.45.139.71|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 200 OK
Content-Type: image/jpeg
Last-Modified: Wed, 23 Mar 2016 08:41:57 GMT
Server: Microsoft-IIS/8.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
ServerID: C01
Content-Length: 15131
Cache-Control: private, max-age=604800
Date: Wed, 23 Mar 2016 08:43:11 GMT
Connection: keep-alive
Set-Cookie: 1YDT=CT; expires=Wed, 20-Apr-2016 08:43:11 GMT; path=/; domain=.adorama.com
P3P: CP="NON DSP ADM DEV PSD OUR IND STP PHY PRE NAV UNI"
Length: 15131 (15K) [image/jpeg]
Saving to: “CHHB74P.JPG”
100%[=====================================>] 15,131 --.-K/s in 0s
2016-03-23 08:43:11 (460 MB/s) - “CHHB74P.JPG” saved [15131/15131]
I would guess that the image provider is deliberately blocking requests from EC2 address ranges.
The wget outgoing IP address differs between the two examples because of DNS resolution by the CDN provider that Adorama uses.
Web servers may check particular fingerprint attributes to block automated bots. Here are a few things they can check:
Geo-IP / IP address
Browser headers
User agent
Plugin info
Browser fonts
You can learn about these fingerprinting "attributes" at https://panopticlick.eff.org and then try to replicate how a browser behaves by injecting similar headers and a browser user agent. Plain curl/wget are not likely to satisfy those conditions; even tools like PhantomJS occasionally get blocked. There is a reason some people prefer tools like Selenium WebDriver, which launch an actual browser.
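Following that advice, the request can be made to look more like a browser by attaching browser-style headers. A hedged urllib sketch (header values copied from a desktop Firefox; no guarantee Akamai's current rules accept them):

```python
import urllib.request

# Build a HEAD request carrying desktop-browser headers instead of
# urllib's default "Python-urllib/3.x" user agent.
req = urllib.request.Request(
    "http://www.adorama.com/images/large/CHHB74P.JPG",
    method="HEAD",
    headers={
        "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) "
                      "Gecko/20100101 Firefox/70.0",
        "Accept": "image/avif,image/webp,*/*",
        "Accept-Language": "en-US,en;q=0.5",
    },
)
# urllib.request.urlopen(req) would send it; left out so the sketch does
# not depend on network access, and note this does nothing about IP-range
# blocks, which headers cannot fix.
```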
I found that another URL protected by AkamaiGHost was blocked because of certain parts of the user agent. In particular, a user agent containing a link with a protocol was blocked:
Using curl -H 'User-Agent: some-user-agent' https://some.website I found the following results for different user agents:
Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0 okay
facebookexternalhit/1.1 (+http\://www.facebook.com/externalhit_uatext.php): 403
https ://bar: okay
https://bar: 403
All I could find for now is this (downvoted) answer https://stackoverflow.com/a/48137940/230422 stating that colons (:) are not allowed in header values. That is clearly not the only thing happening here, as the Mozilla example also contains a colon, just not as part of a link.
I guess most web servers don't care and allow Facebook's bot and other bots that have a contact URL in their user agent, but apparently AkamaiGHost does block it.

Adding a message to Google Group via API

I am trying to use the Google Groups Migration API to add an entry to a Google Group. According to the documentation I use this url:
https://www.googleapis.com/upload/groups/v1/groups/transend@googlegroups.com/archive?uploadType=media
I believe I am supplying the auth token correctly (I got past the HTTP 401 error). Now I am getting HTTP 500, Internal Server Error; the JSON response says "Backend Error". My HTTP headers are:
Content-Length: 225
Content-Type: message/rfc822
The data that follows is as plain an RFC 822 message as I can make:
From: jmckay9351@gmail.com
To: transend@googlegroups.com
Subject: forward test
MIME-Version: 1.0
Date: Mon, 22 Feb 2016 08:03:00 -0800
Content-Type: text/plain; charset="UTF-8"
This is the first line of the message.
I believe the group is set up correctly: it can receive messages via email from jmckay9351@gmail.com, just not via the API. Any suggestions for me?
The Groups Migration API can only be used with Google Apps accounts, not for googlegroups.com.
See Prerequisites:
Have a Google account and create an administrator. The API applies to Google Apps for Business, Education, Government, Reseller, and ISP accounts.
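As a side note, rather than concatenating the RFC 822 payload by hand (where a wrong Content-Length or stray line ending can also produce backend errors), the message can be built with the stdlib email package. A sketch with placeholder addresses:

```python
from email.message import EmailMessage

# Build the message/rfc822 payload for the archive upload.
# Addresses are placeholders; use your Google Apps group and sender.
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "group@example.com"
msg["Subject"] = "forward test"
msg["Date"] = "Mon, 22 Feb 2016 08:03:00 -0800"
msg.set_content("This is the first line of the message.")

payload = msg.as_bytes()  # upload this body with Content-Type: message/rfc822
```

set_content fills in MIME-Version and the text/plain Content-Type headers, and as_bytes produces correctly folded, CRLF-safe output, so the Content-Length can be taken from len(payload) instead of being counted by hand.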

Google OAUTH gives 502 error

I can authenticate users successfully locally, but on the production server I receive 502 errors after a timeout.
here is my FLOW:
FLOW = OAuth2WebServerFlow(
    client_id='YOUR_CLIENT_ID',
    client_secret='YOUR_CLIENT_SECRET',
    scope='https://www.googleapis.com/auth/calendar',
    user_agent='Real_Hub/1.0',
    redirect_uri='quickerhub.com',
)
Locally, redirect_uri is simply my localhost IP, and it works fine.
here is my error through chrome network panel:
quickerhub.com
GET
502
Bad Gateway
text/html
This likely has to do with your redirect_uri. 502 is a very general error: it indicates that the gateway in front of Django (probably uWSGI or Passenger) did not get a response before the timeout. Have you tried that URI directly in your browser? If you have any HTTP authentication or similar on that domain, it will cause this error. For some reason, the OAuth flow does not seem to be properly creating the redirect response.
Hope this helps!
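One concrete thing to check: redirect_uri='quickerhub.com' has no scheme, and OAuth redirect URIs must be absolute URLs that exactly match what is registered in the Google console. A tiny sketch of that sanity check (the callback path is hypothetical):

```python
from urllib.parse import urlparse

def is_absolute_redirect(uri):
    """An OAuth redirect URI must carry an http(s) scheme and a host."""
    parts = urlparse(uri)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(is_absolute_redirect("quickerhub.com"))                         # False: no scheme
print(is_absolute_redirect("https://quickerhub.com/oauth2callback"))  # True
```

Without a scheme, urlparse treats the whole value as a path, which is a quick way to catch this class of misconfiguration before deploying.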