pygsheets set_dataframe not being recognized - python-2.7

I am trying to write a pandas DataFrame to a Google sheet:
# open google sheet where 'test' is the name of the project
sh = gc.open_all('test')
# update the first sheet with df, starting at cell B2 and second sheet with ds
wks = sh[:-1]
wks.set_dataframe(df, (1, 1))
Every time I run this, I get the following error:
wks.set_dataframe(df, (1, 1))
AttributeError: 'list' object has no attribute 'set_dataframe'
It seems as if set_dataframe is not being recognized within pygsheets.
Has anyone encountered this error, or does anyone know what the problem is?

Use gspread_dataframe.
You can check it out here: gspread-dataframe
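For what it's worth, the AttributeError in the question comes from plain list semantics rather than from pygsheets itself: gc.open_all() returns a list of spreadsheets, and sh[:-1] is a slice of that list, which is again a list. Only indexing (e.g. sh[0]) yields a single object that could have a set_dataframe method. A minimal sketch with a stand-in list:

```python
# a plain list stands in for what gc.open_all() returns
sheets = ["wks1", "wks2", "wks3"]

print(type(sheets[:-1]).__name__)  # list -> slicing returns another list
print(sheets[0])                   # wks1 -> indexing returns one element
```

So even after switching libraries, the slice on line `wks = sh[:-1]` would need to become an index for any per-worksheet method call to work.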

Related

How else can I write a timestamp with timezone in postgresql/django python shell?

I am working on a Django API project with a Postgres DB. I have also added a serializers.py file. I am trying to test what I've done by adding a row to the DB via the Python shell, but I keep getting this error:
django.db.utils.ProgrammingError: column "date_created" is of type timestamp with time zone but expression is of type time without time zone
LINE 1: ...bility" = NULL, "rating" = NULL, "date_created" = '00:00:00'...
^
HINT: You will need to rewrite or cast the expression.
This is the code:
from vendor.models import Vendors
from vendor.serializers import VendorsRegisterSerializer
p = Vendors(id=1, first_name='Flavio', last_name='Akin', email='sheeku#gmail.com', profession='plumber', username='Flanne', pin='1234', phone_number='12345678901', number_of_jobs=300, number_of_patrons=788, date_created='21:00:00 +05:00')
I have tried replacing the date_created value '21:00:00 +05:00' with '21:00:00 EST', '21:00+05' and '21:00:00+05', but I keep getting the same error.
Any help will be appreciated. Thanks in advance.
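The error message itself points at the mismatch: the date_created column is timestamp with time zone, but strings like '21:00:00 +05:00' parse as a bare time, with no date component. One way around this is to pass a timezone-aware datetime object instead of a string, so the driver can adapt it to a proper timestamptz. A sketch, assuming Python 3's stdlib timezone support (on Python 2.7 you would need pytz or django.utils.timezone instead; the date and offset below are purely illustrative):

```python
from datetime import datetime, timedelta, timezone

# build a timezone-aware datetime: full date + time + UTC offset
tz = timezone(timedelta(hours=5))
created = datetime(2019, 3, 1, 21, 0, 0, tzinfo=tz)
print(created.isoformat())  # 2019-03-01T21:00:00+05:00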
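The error message itself points at the mismatch: the date_created column is timestamp with time zone, but strings like '21:00:00 +05:00' parse as a bare time, with no date component. One way around this is to pass a timezone-aware datetime object instead of a string, so the driver can adapt it to a proper timestamptz. A sketch, assuming Python 3's stdlib timezone support (on Python 2.7 you would need pytz or django.utils.timezone instead; the date and offset below are purely illustrative):

```python
from datetime import datetime, timedelta, timezone

# build a timezone-aware datetime: full date + time + UTC offset
tz = timezone(timedelta(hours=5))
created = datetime(2019, 3, 1, 21, 0, 0, tzinfo=tz)
print(created.isoformat())  # 2019-03-01T21:00:00+05:00
```

Passing this object as date_created=created (rather than a time string) should satisfy the column type.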

Error running flopy.modflow.HeadObservation: ValueError: Can't cast from structure to non-structure, except if the structure only has a single field

I am using Flopy to set up a MODFLOW model in Python 2.7. I am trying to add head observations via the HOB package. The following example code is taken directly from the function documentation at https://modflowpy.github.io/flopydoc/mfhob.html:
import flopy
model = flopy.modflow.Modflow()
dis = flopy.modflow.ModflowDis(model, nlay=1, nrow=11, ncol=11,
                               nper=2, perlen=[1, 1])
obs = flopy.modflow.mfhob.HeadObservation(model, layer=0, row=5, column=5,
                                          time_series_data=[[1., 54.4],
                                                            [2., 55.2]])
Using this example code for the function, I am getting the following error:
ValueError: Can't cast from structure to non-structure, except if the structure only has a single field.
I get the same error when I try to create a head observation for my model, which is steady-state and has some different input values. Unfortunately, I haven't been able to find a working example to compare with. Any ideas?
Edit: jdhughes's code works like a charm, BUT I had also neglected to update Flopy to the most recent version. I tried updating numpy first, but the ValueError didn't go away until I updated Flopy from 3.2.8 to 3.2.9. It works now, thank you!!!
You need to create one or more instances of the HeadObservation type and pass them to ModflowHob. An example with two observation locations is shown below.
# create a new hob object
obs_data = []
# observation location 1
tsd = [[1., 1.], [87163., 2.], [348649., 3.],
       [871621., 4.], [24439070., 5.], [24439072., 6.]]
names = ['o1.1', 'o1.2', 'o1.3', 'o1.4', 'o1.5', 'o1.6']
obs_data.append(flopy.modflow.HeadObservation(mf, layer=0, row=2, column=0,
                                              time_series_data=tsd,
                                              names=names, obsname='o1'))
# observation location 2
tsd = [[0., 126.938], [87163., 126.904], [871621., 126.382],
       [871718.5943, 115.357], [871893.7713, 112.782]]
names = ['o2.1', 'o2.2', 'o2.3', 'o2.4', 'o2.5']
obs_data.append(flopy.modflow.HeadObservation(mf, layer=0, row=3, column=3,
                                              time_series_data=tsd,
                                              names=names, obsname='o2'))
hob = flopy.modflow.ModflowHob(mf, iuhobsv=51, obs_data=obs_data)
I will submit an issue to update the documentation and docstrings.
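For context on the ValueError wording (a sketch of the underlying numpy behaviour, not of Flopy's internals): the time-series data ends up in a numpy structured array with multiple named fields, and numpy refuses to cast such an array to a plain dtype, which matches "Can't cast from structure to non-structure, except if the structure only has a single field":

```python
import numpy as np

# a two-field structured array, shaped like (totim, hobs) pairs
tsd = np.zeros(2, dtype=[("totim", "f8"), ("hobs", "f8")])

try:
    tsd.astype("f8")  # multi-field structured -> plain dtype
    raised = False
except (TypeError, ValueError):
    raised = True
print(raised)  # True: numpy rejects this cast
```

The exact message varies across numpy versions, which is consistent with the fix being a version update.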

TypeError: 'bool' object is not iterable : Google Analytics API

I am using the following code snippet to get data from GA in python 2.7:
data = service.data().ga().get(
    ids='ga:########',
    start_date='yesterday',
    end_date='today',
    metrics='ga:pageviews',
    dimensions='ga:pagePath',
    filters='ga:pageviews' != 0,
    start_index='1',
    max_results='10000'
).execute()
It is giving me the following error:
File "pageViews.py", line 129, in main
max_results='10000'
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/googleapiclient/discovery.py", line 738, in method
for pvalue in pvalues:
TypeError: 'bool' object is not iterable
However, this error occurs only when I filter the data on the condition pageviews != 0. When I remove the filter, the code works fine. I am using the same metrics, dimensions, dates, filter, start_index and max_results in the Query Explorer and getting results there. I do not understand why I am getting this error or how to fix it. Can someone help me with this?
'ga:pageviews'!=0
is evaluated by Python before the call, like doing a_string != 0, which yields a bool (here True, since a string never equals 0) rather than a string, would be my guess.
Try one of the following. You should be sending a string, not the result of a != 0 comparison:
filters = 'ga:pageviews!=0',
or
filters = 'ga:pageviews!%3D0',
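This can be checked outside the API entirely: the expression 'ga:pageviews' != 0 is evaluated before the request is even built, so the filters argument becomes the boolean True, and iterating a bool is exactly what raises the TypeError seen in the traceback:

```python
filters = 'ga:pageviews' != 0           # comparison runs first: str vs int -> True
print(filters, type(filters).__name__)  # True bool

try:
    iter(filters)                       # roughly what the client library attempts
    raised = False
except TypeError as exc:
    raised = True
    print(exc)                          # 'bool' object is not iterable
```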

Training Doc2Vec on 20newsgroups dataset. Getting Exception AttributeError: 'str' object has no attribute 'words'

There was a similar question here, Gensim Doc2Vec Exception AttributeError: 'str' object has no attribute 'words', but it didn't get any helpful answers.
I'm trying to train Doc2Vec on 20newsgroups corpora.
Here's how I build the vocab:
import gensim
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.datasets import fetch_20newsgroups

def get_data(subset):
    newsgroups_data = fetch_20newsgroups(subset=subset,
                                         remove=('headers', 'footers', 'quotes'))
    docs = []
    for news_no, news in enumerate(newsgroups_data.data):
        tokens = gensim.utils.to_unicode(news).split()
        if len(tokens) == 0:
            continue
        sentiment = newsgroups_data.target[news_no]
        tags = ['SENT_' + str(news_no), str(sentiment)]
        docs.append(TaggedDocument(tokens, tags))
    return docs

train_docs = get_data('train')
test_docs = get_data('test')
alldocs = train_docs + test_docs

# dm, size, window, etc. are hyperparameters defined elsewhere
model = Doc2Vec(dm=dm, size=size, window=window, alpha=alpha, negative=negative,
                sample=sample, min_count=min_count, workers=cores, iter=passes)
model.build_vocab(alldocs)
Then I train the model and save the result:
model.train(train_docs, total_examples = len(train_docs), epochs = model.iter)
model.train_words = False
model.train_labels = True
model.train(test_docs, total_examples = len(test_docs), epochs = model.iter)
model.save(output)
The problem appears when I try to load the model:
screen
I tried:
using LabeledSentence instead of TaggedDocument
yielding TaggedDocument instead of appending them to the list
setting min_count to 1 so no word would be ignored (just in case)
Also, the problem occurs on Python 2 as well as on Python 3.
Please help me solve this.
You've hidden the most important information – the exact code that triggers the error, and the error text itself – in the offsite (imgur) 'screen' link. (That would be the ideal text to cut & paste into the question, rather than other steps that seem to run OK, without triggering the error.)
Looking at that screenshot, there's the line:
model = Doc2Vec("20ng_infer")
...which triggers the error.
Note that none of the arguments as documented for the Doc2Vec() initialization method are a plain string, like the "20ng_infer" argument in the above line – so that's unlikely to do anything useful.
If trying to load a model that was previously saved with model.save(), you should use Doc2Vec.load() – which will take a string describing a local file path from which to load the model. So try:
model = Doc2Vec.load("20ng_infer")
(Note also that larger models might be saved to multiple files, all starting with the string you supplied to save(), and these files must be kept/moved together to again re-load() them in the future.)
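As a side note, the 'str' object has no attribute 'words' wording is consistent with this diagnosis: when a bare string reaches code that expects a corpus, iterating it yields single-character strings, none of which has a .words attribute. In plain Python:

```python
corpus = "20ng_infer"            # a string where a sequence of documents was expected

first = next(iter(corpus))       # iterating a str yields 1-character strings
print(repr(first))               # '2'
print(hasattr(first, "words"))   # False -> hence the AttributeError
```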

Fetching data using Psycopg2 module in python

I am trying to run the following basic Python script to fetch data from a PostgreSQL database using the psycopg2 module:
cur.execute("""SELECT project from project_tlb""")
rows=cur.fetchall()
print rows[5]
The output I am getting is:
('html',)
The value inside the table is html, which is of type 'text'. Any suggestions on how to get only the value html instead of ('html',)?
The returned value ('html',) is a tuple, so you can just access its first element:
rows[5][0]
to get
'html'
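More generally, fetchall() returns a list of tuples, one tuple per row, even when the SELECT names a single column, so flattening the whole result is a one-line comprehension. A small sketch with the result list mocked (the values are illustrative):

```python
# stand-in for what cur.fetchall() returns for a one-column SELECT
rows = [("html",), ("css",), ("sql",)]

print(rows[0][0])              # html -> first row, first (only) column
values = [r[0] for r in rows]  # flatten every row to its single column
print(values)                  # ['html', 'css', 'sql']
```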