Using each line of a file separately in a script - list

I have a list of 235 Twitter IDs in a text file, like so:
2597319
7445591
273299750
337590061
16510947
21958717
I need to go through each one and get the coordinates for each ID. I have written a script that collects the most common coordinate for an account, but currently I have to change the ID manually. I am fairly new to Python, so any help would be greatly appreciated.
Below is the start of the script, with the '------' indicating where the ID should go.
import tweepy

# finds unique IDs and saves them in a separate file
lines = open("twitter.txt", 'r').readlines()
uniquelines = set(lines)
open("unique.txt", 'w').writelines(uniquelines)

auth = tweepy.auth.OAuthHandler('username', 'password')
auth.set_access_token('username', 'password')
api = tweepy.API(auth)

user = api.get_user('------').followers()
# gets 50 tweets from the specific ID
statuses = api.user_timeline(id='------', count=50)
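Since unique.txt already holds one ID per line, a minimal sketch of the loop you need (assuming the rest of your coordinate-collecting logic already works for a single ID) would be:

with open("unique.txt", 'r') as f:
    for line in f:
        twitter_id = line.strip()  # drop the trailing newline
        if not twitter_id:
            continue  # skip any blank lines
        statuses = api.user_timeline(id=twitter_id, count=50)
        # ... run your most-common-coordinate logic for this ID here ...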

Related

Laravel 5.5 load only the data for the currently logged in user

Hi, I am new to Laravel, but I would like to load all the bookings for the currently logged-in user.
I have tried doing this:
// check if user is logged in
if ($user = Auth::user()) {
    // get only the bookings for the currently logged in user
    $allProducts = Booking::where('client', Auth::user()->name)->where('name', $name)->first();
    // store the bookings in a products variable
    $products = json_decode(json_encode($allProducts));
    // loop through the products:
    foreach ($products as $key => $val) {
        // get the name of the service by matching its id in the Service model to the service column in the products
        $service_name = Service::where(['id' => $val->service])->first();
        // get the charge amount of the service by matching its id in the Charge model to the charge column in the products
        $service_fee = Charge::where(['id' => $val->charge])->first();
        // get the status of the service by matching its id in the Status model to the status column in the products
        $service_status = Status::where(['id' => $val->status])->first();
        $products[$key]->service_name = $service_name->name;
        $products[$key]->service_fee = $service_fee->total;
        $products[$key]->service_status = $service_status->name;
    }
    return view('client.booking.view_bookings')->with(compact('products'));
}
return view('/login');
}
But that is giving me an error: Undefined variable: name on the line
$allProducts = Booking::where('client', Auth::user()->name)->where('name', $name)->first();
What could I be doing wrong, and how can I solve it to display only the required data?
I have tried to understand what you are doing without success, but from your explanations in the comments, I think I know what you want to do.
You said that this code works well for you, except that it gives you all the data in the database irrespective of the logged-in user:
$allProducts = Booking::get();
That is because it creates a query that selects all the data in the database.
What you need is to add a where clause to your statement. To do that, simply add this to the above line of code:
where('client', Auth::user()->name)
It will return only the rows whose client column equals the name of the currently logged-in user.
Therefore, the entire line of code becomes:
$allProducts = Booking::where('client', Auth::user()->name)->get();
Alternatively, you could use filters.

How to attach an uploaded file for e-mailing in web2py

All,
Python and web2py newbie here. I am trying to forward user input (an e-mail address and a file) via e-mail, once a user has uploaded the information on a website.
The user-provided information is stored in a database, but fetching the file from the database and forwarding it via e-mail is still over my head. Any pointers much appreciated!
This is my controller action:
def careers():
    form = SQLFORM(db.cv_1, formstyle='bootstrap3_stacked')
    for label in form.elements('label'):
        label["_style"] = "display:none;"
    form.custom.submit.attributes['_value'] = 'Submit CV'
    if form.process().accepted:
        applicant = str(form.vars.email)
        mail.send(to=['email#company.com'], message=applicant + ' new CV',
                  subject='CV submission', attachment=mail.Attachment('/path/to/file'))
    return dict(form=form)
This is the database model:
db.define_table('cv_1',
    Field('email',
          requires=IS_EMAIL(error_message='Please provide your e-mail'),
          widget=widget(_placeholder='Your e-mail (required)', _readonly=False)),
    Field('cv', 'upload', autodelete=True,
          requires=[IS_LENGTH(1048576, 1024),
                    IS_UPLOAD_FILENAME(extension='pdf')]))
The transformed filename of the uploaded file will be in form.vars.cv (this is the value stored in the db.cv_1.cv field in the database). You can use that along with the field's .retrieve method to get either the full file path or the file object itself:
original_filename, filepath = db.cv_1.cv.retrieve(form.vars.cv)
mail.send(to=['email#company.com'], message=applicant + ' new CV',
          subject='CV submission',
          attachment=mail.Attachment(filepath, original_filename))
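Putting that together with your controller, the retrieve call goes inside the accepted branch, after form.process() has stored the upload (a sketch assembled from the code above; the recipient address is the placeholder from your post):

def careers():
    form = SQLFORM(db.cv_1, formstyle='bootstrap3_stacked')
    for label in form.elements('label'):
        label["_style"] = "display:none;"
    form.custom.submit.attributes['_value'] = 'Submit CV'
    if form.process().accepted:
        applicant = str(form.vars.email)
        # fetch the stored upload via its transformed filename
        original_filename, filepath = db.cv_1.cv.retrieve(form.vars.cv)
        mail.send(to=['email#company.com'],
                  subject='CV submission',
                  message=applicant + ' new CV',
                  attachment=mail.Attachment(filepath, original_filename))
    return dict(form=form)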

Pulling data from the datastore and converting it to JSON in Python (Google App Engine)

I am creating an application using Google App Engine, in which I am fetching data from a website and storing it in my database (Datastore). Now, whenever a user hits my application URL as "application_url\name=xyz&city=abc", I fetch the data from the DB and want to show it as JSON. Right now I am using a filter to fetch data based on the name and city, but I am getting [] as output. I don't know how to get the data out of this. My code looks like this:
class MainHandler(webapp2.RequestHandler):
    def get(self):
        commodityname = self.request.get('veg', "Not supplied")
        market = self.request.get('market', "No market found with this name")
        self.response.write(commodityname)
        self.response.write(market)
        query = commoditydata.all()
        logging.info(commodityname)
        query.filter('commodity = ', commodityname)
        result = query.fetch(limit=1)
        logging.info(result)
and the DB structure for the "commoditydata" table is:
class commoditydata(db.Model):
    commodity = db.StringProperty()
    market = db.StringProperty()
    arrival = db.StringProperty()
    variety = db.StringProperty()
    minprice = db.StringProperty()
    maxprice = db.StringProperty()
    modalprice = db.StringProperty()
    reporteddate = db.DateTimeProperty(auto_now_add=True)
Can anyone tell me how to get data from the DB using name and market and convert it to JSON? Getting the data from the DB is the higher priority. Any suggestions will be of great use.
If you are starting with a new app, I would suggest using the NDB API rather than the old DB API. Your code would look almost the same, though.
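For illustration, a minimal sketch of the same model and query under NDB (reusing the names from the question; treat it as a sketch, not a drop-in replacement):

from google.appengine.ext import ndb

class commoditydata(ndb.Model):
    commodity = ndb.StringProperty()
    market = ndb.StringProperty()
    reporteddate = ndb.DateTimeProperty(auto_now_add=True)
    # ... the remaining StringProperty fields as in the question ...

# NDB filters use property objects instead of filter strings:
result = commoditydata.query(commoditydata.commodity == commodityname).fetch(limit=1)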
As far as I can tell from your code sample, the query should give you results as long as the HTTP query parameters from the request match entity objects in the datastore.
I can think of some possible reasons for the empty result:
- You only think the output is empty because you use write() too early; App Engine doesn't support streaming responses, you must write everything in one go, and you should do so after you have queried the datastore.
- The properties you are filtering on are not indexed (yet) in the datastore, at least not for the entities you were looking for.
- The filters are just not matching anything (check the log for the values you got from the request).
- Your query uses a namespace different from the one the data was stored in (but this is unlikely if you haven't explicitly set namespaces anywhere).
In the Cloud Developer Console you can query your datastore and even apply filters, so you can see the results without writing actual code:
1. Go to https://console.developers.google.com
2. On the left side, select Storage > Cloud Datastore > Query
3. Select the namespace (default should be fine)
4. Select the kind "commoditydata"
5. Add filters with example values you expect from the request and see how many results you get
Also look into Monitoring > Log, which together with your logging.info() calls is really helpful for understanding what is going on during a request.
The conversion to JSON is rather easy once you have your data. In your request handler, create an empty list of dictionaries. For each object you get from the query result, set the properties you want to send: define a key in the dict and set the value to the value you got from the datastore. At the end, dump the list as a JSON string.
import json
import logging

class MainHandler(webapp2.RequestHandler):
    def get(self):
        commodityname = self.request.get('veg')
        market = self.request.get('market')
        # request.get() returns '' when a parameter is missing, so test for emptiness
        if not commodityname and not market:
            self.response.out.write("Please supply filters!")
            return  # the request is complete after this
        # everything ok, try the query:
        query = commoditydata.all()
        logging.info(commodityname)
        query.filter('commodity = ', commodityname)
        result = query.fetch(limit=1)
        logging.info(result)
        # now build the JSON payload for the response
        dicts = []
        for match in result:
            # datetime objects are not JSON-serializable, so convert to string
            dicts.append({'market': match.market,
                          'reporteddate': str(match.reporteddate)})
        # set the appropriate header of the response:
        self.response.headers['Content-Type'] = 'application/json; charset=utf-8'
        # convert everything into a JSON string
        jsonString = json.dumps(dicts)
        self.response.out.write(jsonString)

python code for directory api to batch retrieve all users from domain

Currently I have a method that retrieves all ~119,000 Gmail accounts and writes them to a CSV file, using the Python code below with the Admin SDK enabled and OAuth 2.0:
def get_accounts(self):
    students = []
    page_token = None
    params = {'customer': 'my_customer'}
    while True:
        try:
            if page_token:
                params['pageToken'] = page_token
            current_page = self.dir_api.users().list(**params).execute()
            students.extend(current_page['users'])
            # write each page of data to a file
            csv_file = CSVWriter(students, self.output_file)
            csv_file.write_file()
            # clear the list for the next page of data
            del students[:]
            page_token = current_page.get('nextPageToken')
            if not page_token:
                break
        except errors.HttpError as error:
            break
I would like to retrieve all 119,000 in one lump sum, that is, without having to loop, or as a batch call. Is this possible, and if so, can you provide example Python code? I have run into communication issues and have to rerun the process multiple times to obtain all ~119,000 accounts successfully (the download takes about 10 minutes). I would like to minimize communication errors. Please advise if a better method exists or if a non-looping approach is possible.
There's no way to do this as a batch, because you need to know each pageToken and those are only given as each page is retrieved. However, you can increase your performance somewhat by requesting larger pages:
params = {'customer': 'my_customer', 'maxResults': 500}
Since the default page size when maxResults is not set is 100, adding maxResults: 500 will reduce the number of API calls by a factor of 5. While each call may take slightly longer, you should notice a performance increase because you're making far fewer API calls and HTTP round trips.
You should also look at using the fields parameter to only specify user attributes you need to read in the list. That way you're not wasting time and bandwidth retrieving details about your users that your app never uses. Try something like:
my_fields = 'nextPageToken,users(primaryEmail,name,suspended)'
params = {
    'customer': 'my_customer',
    'maxResults': 500,
    'fields': my_fields
}
Last of all, if your app retrieves the list of users fairly frequently, turning on caching may help.
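As a sketch of what "turning on caching" can look like with the Python client library (the build call and credential handling below are assumptions, since the question doesn't show how self.dir_api is constructed), you can hand the discovery client an httplib2.Http object with an on-disk cache:

import httplib2
from apiclient.discovery import build  # or googleapiclient.discovery

# httplib2 honors HTTP caching headers when given a cache directory
http = httplib2.Http(cache=".cache")
# authorize http with your OAuth 2.0 credentials as you already do,
# e.g. credentials.authorize(http) with oauth2client
dir_api = build('admin', 'directory_v1', http=http)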

Partial Text Matching GAE

I am developing a web application for managing customers, so I have a Customer entity made up of the usual fields such as first_name, last_name, age, etc.
I have a page where these customers are shown as a table. On the same page I have a search field, and I'd like to filter customers and update the table while the user is typing something in the search field, using Ajax.
Here is how it should work:
Figure 1: The main page showing all of the customers.
Figure 2: As soon as the user types the letter "b", the table is updated with the results.
Given that partial text matching is not supported in GAE, I have tricked around it, starting from what is shown here. TL;DR: I have created a Customers index that contains a search document for every customer (doc_id=customer_key). Each search document contains atom fields for every customer field I want to be able to search on (e.g. first_name, last_name). Every field is built up like this: supposing the last_name is Berlusconi, the field is made up of these atom fields: "b", "be", "ber", "berl", "berlu", "berlus", "berlusc", "berlusco", "berluscon", "berlusconi".
In this way I am able to perform full text matching in a way that resembles partial text matching: if I search for "Be", the Berlusconi customer is returned.
The search is made by Ajax calls: whenever a user types in the search field (the call is delayed a little to see if the user keeps typing, to avoid sending a burst of requests), an Ajax request is made with the query string, and a JSON object is returned.
Things were working well in debugging, but I was testing with only a few people in the datastore. As soon as I put in many people, search became very slow.
This is how I create the search documents. It is called every time a new customer is put to the datastore.
def put_search_document(cls, key):
    """
    Called by _post_put_hook in BaseModel
    """
    model = key.get()
    _fields = []
    if model:
        _fields.append(search.AtomField(name="empty", value=""),)  # to retrieve customers when no query string
        _fields.append(search.TextField(name="sort1", value=model.last_name.lower()))
        _fields.append(search.TextField(name="sort2", value=model.first_name.lower()))
        _fields.append(search.TextField(name="full_name", value=Customer.tokenize1(
            model.first_name.lower() + " " + model.last_name.lower()
        )),)
        _fields.append(search.TextField(name="full_name_rev", value=Customer.tokenize1(
            model.last_name.lower() + " " + model.first_name.lower()
        )),)
        # _fields.append(search.TextField(name="telephone", value=Customer.tokenize1(
        #     model.telephone.lower()
        # )),)
        # _fields.append(search.TextField(name="email", value=Customer.tokenize1(
        #     model.email.lower()
        # )),)
        document = search.Document(  # create new document with doc_id=key.urlsafe()
            doc_id=key.urlsafe(),
            fields=_fields)
        index = search.Index(name=cls._get_kind() + "Index")  # not in try-except: defer will catch and retry.
        index.put(document)
@staticmethod
def tokenize1(string):
    s = ""
    for i in range(len(string)):
        if i > 0:
            s = s + " " + string[0:i + 1]
        else:
            s = string[0:i + 1]
    return s
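For reference, tokenize1 simply expands a string into all of its prefixes separated by spaces, which is what lets the full-text index match partial input:

print Customer.tokenize1("berlusconi")
# -> b be ber berl berlu berlus berlusc berlusco berluscon berlusconi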
This is the search code:
@staticmethod
def search(ndb_model, query_phrase):
    # TODO: search returns a limited number of results (20 by default)
    # (See Search Results at https://cloud.google.com/appengine/docs/python/search/#Python_Overview)
    sort1 = search.SortExpression(expression='sort1', direction=search.SortExpression.ASCENDING,
                                  default_value="")
    sort2 = search.SortExpression(expression='sort2', direction=search.SortExpression.ASCENDING,
                                  default_value="")
    sort_opt = search.SortOptions(expressions=[sort1, sort2])
    results = search.Index(name=ndb_model._get_kind() + "Index").search(
        search.Query(
            query_string=query_phrase,
            options=search.QueryOptions(
                sort_options=sort_opt
            )
        )
    )
    print "----------------"
    res_list = []
    for r in results:
        obj = ndb.Key(urlsafe=r.doc_id).get()
        print obj.first_name + " " + obj.last_name
        res_list.append(obj)
    return res_list
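A side note on the loop above: it performs one synchronous key get per search result. A sketch of batching those lookups with ndb.get_multi (whether this is the actual bottleneck here is an assumption) would be:

# collect the keys first, then fetch all entities in a single batch call
keys = [ndb.Key(urlsafe=r.doc_id) for r in results]
res_list = ndb.get_multi(keys)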
Did anyone else have this same experience? If so, how did you solve it?
Thank you very much,
Marco Galassi
EDIT: names, e-mails, and phone numbers are obviously totally invented.
EDIT 2: I have now moved to TextField, which looks a little bit faster, but the problem still persists.