Google API Scope Changed - django

edit: I solved it easily by adding "https://www.googleapis.com/auth/plus.me" to my scopes, but I wanted to start a discussion on this topic and see if anyone else experienced the same issue.
I have a service running on GCP App Engine that uses a Google API. This morning I received the "warning" message below, which caused a 500 error.
The service had been working fine for the past month and only started throwing this error today (5 hours prior to this post).
Does anyone know why Google returned an additional scope at the oauth2callback? Any additional insight is very much appreciated. Please let me know if you've seen this before or not. I couldn't find it anywhere.
Exception Type: Warning at /oauth2callback
Exception Value:
Scope has changed from
"https://www.googleapis.com/auth/userinfo.email" to
"https://www.googleapis.com/auth/userinfo.email
https://www.googleapis.com/auth/plus.me".
This line threw the error:
flow.fetch_token(
    authorization_response=authorization_response,
    code=request.session["code"])
The return url is https://my_website.com/oauth2callback?state=SECRET_STATE&scope=https://www.googleapis.com/auth/userinfo.email+https://www.googleapis.com/auth/plus.me#
instead of the usual https://my_website.com/oauth2callback?state=SECRET_STATE&scope=https://www.googleapis.com/auth/userinfo.email#
edit: sample code
# import the required things
SCOPES = ['https://www.googleapis.com/auth/userinfo.email',
          'https://www.googleapis.com/auth/calendar',
          # 'https://www.googleapis.com/auth/plus.me' <-- without this, it throws the error
          # stated above; adding it fixes the problem. Google returns the additional
          # .../plus.me scope, which causes the mismatch.
          ]
def auth(request):
    flow = google_auth_oauthlib.flow.Flow.from_client_secrets_file(
        CLIENT_SECRETS_FILE, scopes=SCOPES)
    flow.redirect_uri = website_url + '/oauth2callback'
    authorization_url, state = flow.authorization_url(
        access_type='offline', include_granted_scopes='true',
        prompt='consent')
    request.session["state"] = state
    return redirect(authorization_url)
def oauth2callback(request):
    ...
    # request.session["code"] = code in url
    authorization_response = website_url + '/oauth2callback?' + parsed.query
    flow.fetch_token(
        authorization_response=authorization_response,
        code=request.session["code"])
    ...

We discovered the same issue today. Our solution had been working without any hiccups for the last couple of months.
We solved the issue by updating our original scopes 'profile email' to https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile and by making some minor changes to the code.
When initiating the google_auth_oauthlib.flow client, we previously passed in the scopes as a list with only one item: a single string in which the scopes were separated by spaces.
google_scopes = 'email profile'
self.flow = Flow.from_client_secrets_file(secret_file, scopes=[google_scopes], state=state)
Now, with the updated scopes, we send in a list where each element is a separate scope.
google_scopes = 'https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile'
self.flow = Flow.from_client_secrets_file(secret_file, scopes=google_scopes.split(' '), state=state)
Hope it helps, good luck!

I am using the requests_oauthlib extension and I had the same error. I fixed the issue by adding OAUTHLIB_RELAX_TOKEN_SCOPE: '1' to my environment variables, so my app.yaml file looks similar to this:
# ...
env_variables:
  OAUTHLIB_RELAX_TOKEN_SCOPE: '1'

In my case, I added the following line in the function where the authentication happens:
os.environ['OAUTHLIB_RELAX_TOKEN_SCOPE'] = '1'
flow = InstalledAppFlow.from_client_config(client_config, scopes=SCOPES)
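Presumably the same environment variable works with the web-server Flow from the question too; a minimal sketch, assuming the oauth2callback view from the sample code above:
import os

def oauth2callback(request):
    ...
    # Relax oauthlib's strict scope comparison so the extra .../plus.me scope
    # Google now returns no longer raises the "Scope has changed" Warning.
    os.environ['OAUTHLIB_RELAX_TOKEN_SCOPE'] = '1'
    flow.fetch_token(
        authorization_response=authorization_response,
        code=request.session["code"])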

At a guess from your error, it looks like you're using a deprecated scope. See:
https://developers.google.com/+/web/api/rest/oauth#deprecated-scopes
I'm also guessing that you may be using the Google+ Platform Web library and maybe the People:Get method. Perhaps try using one of the following scopes instead:
https://www.googleapis.com/auth/plus.login
or
https://www.googleapis.com/auth/plus.me
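If that's the case, the fix may be as simple as swapping the deprecated scope in the SCOPES list from the sample code above; a minimal sketch (which replacement scope you actually need is an assumption):
SCOPES = ['https://www.googleapis.com/auth/userinfo.email',
          'https://www.googleapis.com/auth/calendar',
          # hypothetical replacement for the deprecated scope:
          'https://www.googleapis.com/auth/plus.login']  # or .../plus.me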

Given the timing, you might be affected by this change by Google:
"Starting July 18, 2017, Google OAuth clients that request certain sensitive OAuth scopes will be subject to review by Google."
https://developers.google.com/apps-script/guides/client-verification

Related

Twilio capability token generation for voice call integration issue

error: "uninitialized constant TwilioCapability"
Twilio capability token generation fails on the live site, while on staging it works properly.
This is my code to generate the Twilio capability token:
class Twilio::TokenController < ApplicationController
  skip_before_filter :verify_authenticity_token

  def generate
    token = ::TwilioCapability.generate("#{params[:appointment_id]}#{params[:from_type]}")
    render json: { token: token }
  end
end
Code from my twilocapabilty.rb file:
class TwilioCapability
  def self.generate(id)
    account_sid = ENV['TWILIO_ACCOUNT_SID']
    auth_token = ENV['TWILIO_AUTH_TOKEN']
    capability = Twilio::Util::Capability.new account_sid, auth_token
    application_sid = ENV['TWIML_APPLICATION_SID']
    capability.allow_client_outgoing application_sid
    capability.allow_client_incoming id
    capability.generate
  end
end
Twilio developer evangelist here.
I believe there might be a couple of issues with this, mainly answered in this existing SO question.
Firstly, make sure that, if your class is called TwilioCapability then the file name matches it via the Rails naming rules. It should be called twilio_capability.rb.
Other than that, I guess you are keeping the file in the lib directory (so it should be lib/twilio_capability.rb). If you are not already autoloading files from lib in production, then you should add the following to your config/application.rb:
config.autoload_paths << Rails.root.join('lib')
Let me know if that helps at all.

Rails 4 Action Mailer Previews and Factory Girl issues

I've been running into quite an annoying issue when dealing with Rails 4 action mailer previews and factory girl. Here's an example of some of my code:
class TransactionMailerPreview < ActionMailer::Preview
  def purchase_receipt
    account = FactoryGirl.build_stubbed(:account)
    user = account.owner
    transaction = FactoryGirl.build_stubbed(:transaction, account: account, user: user)
    TransactionMailer.purchase_receipt(transaction)
  end
end
This could really be any action mailer preview. Let's say I get something wrong (happens every time) and there's an error. I fix the error and refresh the page. Every time this happens I get a:
"ArgumentError in Rails::MailersController#preview
A copy of User has been removed from the module tree but is still active!"
Then my only way out is to restart my server.
Am I missing something here? Any clue as to what is causing this and how it could be avoided? I've restarted my server 100 times over the past week because of this.
EDIT: It may actually be happening any time I edit my code and refresh the preview?
This answers my question:
https://stackoverflow.com/a/29710188/2202674
I used approach #3: Just put a :: in front of the offending module.
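For anyone hitting the same thing: approach #3 just forces a top-level constant lookup wherever the preview references the model. A minimal sketch of what that looks like here, assuming User was the offending constant (the attributes are illustrative):
class TransactionMailerPreview < ActionMailer::Preview
  def purchase_receipt
    account = FactoryGirl.build_stubbed(:account)
    # The leading :: forces a top-level constant lookup, so the preview uses
    # the freshly reloaded User class instead of the unloaded stale copy.
    user = ::User.new(email: 'preview@example.com')
    transaction = FactoryGirl.build_stubbed(:transaction, account: account, user: user)
    TransactionMailer.purchase_receipt(transaction)
  end
end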
Though this is not exactly an answer (but perhaps a clue), I've had this problem too.
Do your factories cause any records to actually be persisted?
I ended up using Factory.build where I could, and stubbing out everything else with private methods and OpenStructs to be sure all objects were being created fresh on every reload, and nothing was persisting to be reloaded.
I'm wondering if whatever FactoryGirl.build_stubbed uses to trick the system into thinking the objects are persisted is causing the system to try to reload them (after they are gone).
Here's a snippet of what is working for me:
class SiteMailerPreview < ActionMailer::Preview
  def add_comment_to_page
    page = FactoryGirl.build :page, id: 30, site: cool_site
    user = FactoryGirl.build :user
    comment = FactoryGirl.build :comment, commentable: page, user: user
    SiteMailer.comment_added(comment)
  end

  private

  # this works across reloads where `FactoryGirl.build :site` would throw the error:
  # A copy of Site has been removed from the module tree but is still active!
  def cool_site
    site = FactoryGirl.build :site, name: 'Super cool site'
    def site.users
      user = OpenStruct.new(email: 'recipient@example.com')
      def user.settings(sym)
        OpenStruct.new(comments: true)
      end
      [user]
    end
    site
  end
end
Though I am not totally satisfied with this approach, I don't get those errors anymore.
I would be interested to hear if anyone else has a better solution.

Tweepy rate limit / pagination issue.

I've put together a small twitter tool to pull relevant tweets, for later analysis in a latent semantic analysis. Ironically, that bit (the more complicated bit) works fine - it's pulling the tweets that's the problem. I'm using the code below to set it up.
This technically works, but not as expected. I thought the .items(200) parameter would pull 200 tweets per request, but the results come back in 15-tweet chunks (so the 200 items 'cost' me 13 requests). I understand that this is the original/default RPP variable (now 'count' in the Twitter docs), but I've tried that in the Cursor setting (rpp=100, which is the maximum in the Twitter documentation), and it makes no difference.
Tweepy/Cursor docs
The other nearest similar question isn't quite the same issue
Grateful for any thoughts! I'm sure it's a minor tweak to the settings, but I've tried various settings on page and rpp, to no avail.
import tweepy
from tweepy import Cursor
from tools import read_user, read_tweet
from auth import basic

auth = tweepy.OAuthHandler(apikey, apisecret)
auth.set_access_token(access_token, access_token_secret_var)
api = tweepy.API(auth)

current_results = []
for tweet in Cursor(api.search,
                    q=search_string,
                    result_type="recent",
                    include_entities=True,
                    lang="en").items(200):
    current_user, created = read_user(tweet.author)
    current_tweet, created = read_tweet(tweet, current_user)
    current_results.append(tweet)

print current_results
I worked it out in the end, with a little assistance from colleagues. As far as I can tell, the rpp and items() settings are applied after the actual API call. The 'count' option from the Twitter documentation, which was formerly RPP as mentioned above and is still noted as rpp in Tweepy 2.3.0, seems to be at issue here.
What I ended up doing was modifying the Tweepy code: in api.py, I added 'count' to the search bind section (around L643 in my install, YMMV).
""" search """
search = bind_api(
    path = '/search/tweets.json',
    payload_type = 'search_results',
    allowed_param = ['q', 'lang', 'locale', 'since_id', 'geocode',
                     'max_id', 'since', 'until', 'result_type',
                     'count',  # <-- the parameter I added
                     'include_entities', 'from', 'to', 'source']
)
This allowed me to tweak the code above to:
for tweet in Cursor(api.search,
                    q=search_string,
                    count=100,
                    result_type="recent",
                    include_entities=True,
                    lang="en").items(200):
This results in two calls, not fifteen; I've double-checked this with
print api.rate_limit_status()["resources"]
after each call, and it's only decrementing my remaining searches by 2 each time.

How to create a model without controller considering rails4 strong parameters

I cannot find anywhere the accepted way to create a model without going through a controller, considering that attr_accessible is no longer supported.
Is the below approach correct?
in my old code:
ModelName.create(course_id: 680, user_id: 25)
(raises a mass_assignment error now that I have removed attr_accessible)
new code:
model = ModelName.new.tap do |m|
  m.course_id = 680
  m.user_id = 25
end
model.save!
(works but looks hacky)
Apparently, the below will not work because the without_protection option was removed in Rails 4:
ModelName.create({course_id: 680, user_id: User.first.id}, without_protection: true)
Thanks to this question, I've read about strong parameters ('Use outside of Controllers' - link), but even if I do the following from my console:
raw_params = {course_id: Course.last.id, user_id: User.first.id}
parameters = ActionController::Parameters.new(raw_params)
ModelName.create(parameters.permit(:course_id, :user_id))
I get this error:
WARN -- : WARNING: Can't mass-assign protected attributes for ModelName: course_id, user_id
I read this question more carefully and found my answer
I had to add
config.active_record.whitelist_attributes = false
to my environments (development/test/production.rb), maybe because I still have the protected_attributes gem installed.
so now I can happily use
ModelName.create(course_id:680, user_id:25)
after all.
I realise this question/answer is somewhat of a repeat of the aforementioned question - but I did find that question a bit tricky to understand, so I won't take this question down unless asked.
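For reference, on a plain Rails 4 app without the protected_attributes gem, model-level mass-assignment protection is gone entirely, so the original one-liner works as-is and whitelisting moves to the controller. A minimal sketch (controller and parameter names are illustrative):
# Plain Rails 4 (no protected_attributes gem): models accept mass assignment,
# so this works directly from a console or a background job:
ModelName.create(course_id: 680, user_id: 25)

# In a controller, filtering is done with strong parameters instead:
class ModelNamesController < ApplicationController
  def create
    @model_name = ModelName.create(model_name_params)
    redirect_to @model_name
  end

  private

  # Whitelist at the controller boundary rather than in the model
  def model_name_params
    params.require(:model_name).permit(:course_id, :user_id)
  end
end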

@vary_on_cookie fails due to non-Django cookies

I am stumped on a caching issue in my Django 1.5.6 application:
@vary_on_cookie
@cache_page(24 * 60 * 60, key_prefix=':1:community')
@rendered_with("general/community.html")
@allow_http("GET")
def community(request):
    ...
    return { ... }
Locally the caching is working correctly, but when I test this in staging, @vary_on_cookie isn't working -- I can see by the queries being executed that community() is being executed on subsequent calls to this page.
I updated my settings in my local environment to use the same Redis cache as staging to eliminate that difference, but the local environment continued to behave correctly.
Looking at the keys Redis has in its cache, I can see what the problem is -- in staging every time this page gets called, new keys are added to the cache. Compare the output from cache.keys('*community*'):
LOCAL:
First call to community page:
[u'community:1:views.decorators.cache.cache_page.:1:community.GET.b528759dd79cf1c6b405290c0bc05e39.3b7d4c38ec8d92512a4a0847f4738298.en-us.America/New_York',
u'community:1:views.decorators.cache.cache_header.:1:community.b528759dd79cf1c6b405290c0bc05e39.en-us.America/New_York']
Second call (same user):
[u'community:1:views.decorators.cache.cache_page.:1:community.GET.b528759dd79cf1c6b405290c0bc05e39.3b7d4c38ec8d92512a4a0847f4738298.en-us.America/New_York',
u'community:1:views.decorators.cache.cache_header.:1:community.b528759dd79cf1c6b405290c0bc05e39.en-us.America/New_York']
Notice there are the same number of keys in both cases.
STAGING:
First call to community page:
[u'community:1:views.decorators.cache.cache_header.:1:community.b528759dd79cf1c6b405290c0bc05e39.en-us.America/New_York',
u'community:1:views.decorators.cache.cache_page.:1:community.GET.b528759dd79cf1c6b405290c0bc05e39.559380b85dc0cdcf0ff25051df78987d.en-us.America/New_York']
Second call (same user):
[u'community:1:views.decorators.cache.cache_header.:1:community.b528759dd79cf1c6b405290c0bc05e39.en-us.America/New_York',
u'community:1:views.decorators.cache.cache_page.:1:community.GET.b528759dd79cf1c6b405290c0bc05e39.559380b85dc0cdcf0ff25051df78987d.en-us.America/New_York',
u'community:1:views.decorators.cache.cache_page.:1:community.GET.b528759dd79cf1c6b405290c0bc05e39.6ec85abcc8a14d66800228bdccc537f0.en-us.America/New_York']
Notice that an additional entry has been added to the cache though it's the same user!
I'm stumped where to go from here. Both environments are using SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'. The staging environment clearly recognizes that this is the same user in every other way. What is happening in @vary_on_cookie that is creating a difference in staging, but not locally?
I've inspected all of my staging vs. local differences, scrutinized my custom middleware, but I don't have any ideas of what to look at. Any ideas even of what to look at next would be greatly appreciated. Thanks!
UPDATE
I inspected django.utils.cache._generate_cache_key() to see how it generates that last hex section of the cache key. I naively assumed it just looked at Django's own cookies (like sessionid), but I see that it uses all of the cookies passed into HTTP_COOKIE -- that means, Django and non-Django. For me, that means cookies from Google Analytics and New Relic, neither of which I have running locally.
for header in headerlist:  # headerlist = [u'HTTP_COOKIE']
    # value is the string of all cookies, for example:
    # __atuvc=39%7C17%2C8%7C18; csrftoken=dPqaXS6XVGp2UUvfhEW9kS6R6WPHQlE4; sessionid=j6a83wbsq1sez9bz75n0tzl4n884umg2
    value = request.META.get(header, None)
    if value is not None:
        ctx.update(force_bytes(value))
Can this really be true?! All of the world's Django sites using @vary_on_cookie are being thwarted by their third-party cookies?!
I created a custom decorator which hacks the HTTP headers to isolate the user's ID. (Although it sets Vary: DJANGO_USERID, Cookie in the response sent back to the browser, it doesn't include the actual ID.)
I would appreciate any feedback on this solution, since it's a bit beyond my Django comfort zone. Thanks!
from functools import wraps
from django.utils.cache import patch_vary_headers
from django.utils.decorators import available_attrs

def vary_on_user(view):
    """
    Adapted from django.views.decorators.vary_on_cookie
    """
    @wraps(view, assigned=available_attrs(view))
    def inner_func(request, *args, **kwargs):
        # Expose the user's ID as a pseudo-header so the cache key varies per user.
        request.META['HTTP_DJANGO_USERID'] = request.user.id
        response = view(request, *args, **kwargs)
        patch_vary_headers(response, ('DJANGO_USERID',))
        return response
    return inner_func
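For completeness, a minimal usage sketch on the community view from the top of the question, with @vary_on_user swapped in where @vary_on_cookie used to be (same decorator order as the original):
@vary_on_user
@cache_page(24 * 60 * 60, key_prefix=':1:community')
@rendered_with("general/community.html")
@allow_http("GET")
def community(request):
    ...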