I have data loaded into a dataframe, which then needs to be saved to a Django model. The main problem is that some values which should go into an IntegerField or FloatField are empty strings "". Conversely, some values which should be saved into a CharField are represented as np.nan. This leads to the following error:
ValueError: Field 'position_lat' expected a number but got nan.
If I replace the np.nan with an empty string, using data[database]["df"].replace(np.nan, "", regex = True, inplace = True), I end up with the following error:
ValueError: Field 'position_lat' expected a number but got ''.
So what I would like to do is check in the model whether a FloatField or IntegerField receives np.nan or an empty string and replace it with an empty value (i.e. None). The same for CharField, which should convert integers (if applicable) to strings and np.nan to an empty string.
How could this be implemented? Using a ModelManager or custom fields? Or is there a better approach? Cleaning up the CSV files beforehand is not an option.
import pandas as pd
import numpy as np

from .models import Record

my_dataframe = pd.read_csv("data.csv")

record = Record
entries = []

# build one model instance per dataframe row
for e in my_dataframe.T.to_dict().values():
    entries.append(record(**e))

record.objects.bulk_create(entries)
Maybe the problem was not clear; nevertheless, I would like to post my solution. I create a new dict that only contains the keys which actually have a (truthy) value.
entries = []

for e in my_dataframe.T.to_dict().values():
    # keep only keys with truthy values; empty strings (e.g. from the earlier NaN replacement) are dropped
    e = {k: v for k, v in e.items() if v}
    entries.append(record(**e))

record.objects.bulk_create(entries)
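As an alternative sketch (not the author's solution): if the affected numeric fields are declared with null=True, the NaN values and empty strings can be normalised to None before the instances are built, for example:

import numpy as np
import pandas as pd

# empty strings -> NaN, then NaN -> None so Django stores NULL
# (assumes the affected model fields are declared with null=True)
cleaned = my_dataframe.replace({"": np.nan})
cleaned = cleaned.where(pd.notnull(cleaned), None)

entries = [Record(**row) for row in cleaned.to_dict(orient="records")]
Record.objects.bulk_create(entries)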
I need to run a query like this -
historic_data.objects.raw("select * from company_historic_data")
This returns a RawQuerySet. I have to convert the values from it into a dataframe. The usual .values() method does not work with a raw query. Can someone suggest a solution?
Try the code below:
import pandas as pd

res = model.objects.raw('select * from some_table;')
# each model instance's __dict__ holds its field values (plus Django's internal _state)
df = pd.DataFrame([item.__dict__ for item in res])
Note that there is a _state column in the returned dataframe; it comes from Django's internal Model._state attribute.
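If you don't want that column, it can simply be dropped afterwards:

# remove Django's internal bookkeeping column
df = df.drop(columns=['_state'])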
I have a PySpark dataframe, and one column is a list of IDs. I want to, for example, get the count of rows that contain a certain ID.
AFAIK the two column types relevant to me are ArrayType and MapType. I could use the map type because checking for membership inside a map/dict is more efficient than checking for membership in an array.
However, to use the map I would need to filter with a custom udf rather than the built-in (Scala) function array_contains.
With a MapType I can do:
from pyspark.sql.types import BooleanType
from pyspark.sql.functions import udf

df = spark.createDataFrame([("a-key", {"345": True, "123": True})], ["key", "ids"])

def is_in_map(k, d):
    return k in d.keys()

def map_udf(key):
    return udf(lambda d: is_in_map(key, d), BooleanType())

c = df.filter(map_udf("123")(df.ids)).count()
Or with an ArrayType I can do:
from pyspark.sql.functions import array_contains
df = spark.createDataFrame([("a-key", ["345", "123"])], ["key", "ids"])
c = df.filter(array_contains(df.ids, "123")).count()
My first reaction is to use the MapType because checking for membership inside the map is (I assume) more efficient.
On the other hand, the built-in function array_contains executes Scala code, and I assume that any Scala-defined function I call is going to be more efficient than shipping the column's dict back to a Python context and checking k in d.keys().
For checking membership in this (multi-value) column, is it best to use the MapType or ArrayType pyspark.sql.types?
Update
There is a column method, pyspark.sql.Column.getItem, which means I can filter by membership without a Python udf.
Maps are more performant. In Scala + Spark I used:
df.where(df("ids").getItem("123") === true)
It uses the standard DataFrame API, and df("ids").getItem("123") returns a Column with the value from the map (or null), so it runs at Spark's native speed. The PySpark developers say that PySpark has that API as well.
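For reference, a PySpark sketch of the same getItem-based filter (assuming the MapType df from the question) might look like this:

from pyspark.sql import functions as F

# look the key up in the map column; missing keys yield null,
# which the equality test then filters out
c = df.filter(F.col("ids").getItem("123") == True).count()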
I am retrieving data from Neo4j using the Bolt driver in Python. The returned result should be stored as an RDD (or at least as a CSV). I am able to see the returned results but unable to store them as an RDD, a DataFrame, or even a CSV.
Here is how I am seeing the result:
session = driver.session()
result = session.run('MATCH (n) RETURN n.hobby,id(n)')
session.close()
How can I store this data as an RDD or a CSV file?
I deleted the old post and reposted the same question, but I haven't received any pointers. So I am posting my approach in case it helps others.
from pandas import DataFrame
from pyspark.sql import SQLContext

'''
Storing the returned result into an RDD
'''
session = driver.session()
result = session.run('MATCH (n:Hobby) RETURN n.hobby AS hobby, id(n) AS id LIMIT 10')
session.close()
'''
Pulling the keys
'''
keys = result.peek().keys()
'''
Reading all the property values and storing them in a list
'''
values = list()
for record in result:
    rec = list()
    for key in keys:
        rec.append(record[key])
    values.append(rec)
'''
Converting the list of values into a pandas DataFrame
'''
df = DataFrame(values, columns=keys)
print(df)
'''
Converting the pandas DataFrame to a Spark DataFrame
'''
sqlCtx = SQLContext(sc)  # sc is the already-created SparkContext
spark_df = sqlCtx.createDataFrame(df)
spark_df.show()
'''
Converting the Spark DataFrame to a Spark RDD
'''
rdd = spark_df.rdd.map(tuple)
print(rdd.take(10))
Any suggestions to improve the efficiency are highly appreciated.
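As a side note (not part of the original approach): if a plain CSV file is all that's needed, the intermediate pandas DataFrame can be written out directly; the file name here is just an example.

# write the pandas DataFrame from above straight to disk
df.to_csv('hobbies.csv', index=False)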
Instead of going from Python to Spark, why not use the Neo4j Spark connector? I think this would keep Python from being a bottleneck if you were moving a lot of data. You can put your Cypher query inside the Spark session and save the result as an RDD.
There has been talk in the Neo4j Slack group about a pyspark implementation, which will hopefully be available later this fall. I know the ability to query Neo4j from pyspark and sparkr would be very useful.
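For illustration only (not from the original answer), reading a Cypher query through the connector might look roughly like the sketch below; the format string and option names are assumptions based on the Neo4j Connector for Apache Spark and may differ between versions.

# rough sketch: load a Cypher query result as a Spark DataFrame, then as an RDD
df = (spark.read
      .format("org.neo4j.spark.DataSource")
      .option("url", "bolt://localhost:7687")
      .option("authentication.basic.username", "neo4j")
      .option("authentication.basic.password", "<password>")
      .option("query", "MATCH (n:Hobby) RETURN n.hobby AS hobby, id(n) AS id")
      .load())
rdd = df.rdd.map(tuple)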
I am new to SFDC. I have a report already created by a user. I would like to use Python to dump the data of the report into a CSV/Excel file.
I see there are a couple of Python packages for that, but my code gives an error:
from simple_salesforce import Salesforce
sf = Salesforce(instance_url='https://cs1.salesforce.com', session_id='')
sf = Salesforce(password='xxxxxx', username='xxxxx', organizationId='xxxxx')
Can I have the basic steps for setting up the API and some example code?
This worked for me:
import requests
import csv
from simple_salesforce import Salesforce
import pandas as pd
sf = Salesforce(username=your_username, password=your_password, security_token=your_token)
login_data = {'username': your_username, 'password': your_password_plus_your_token}

with requests.session() as s:
    # reportid is the id of the report you want to export (taken from its URL)
    d = s.get("https://your_instance.salesforce.com/{}?export=1&enc=UTF-8&xf=csv".format(reportid),
              headers=sf.headers, cookies={'sid': sf.session_id})
d.content will contain a string of comma-separated values which you can read with the csv module.
I take the data into pandas from there, hence the function name and the pandas import. I removed the rest of the function, where it puts the data into a DataFrame, but if you're interested in how that's done, let me know.
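That removed part isn't shown here, but a minimal sketch of it (my assumption, not the author's exact code) could be:

from io import StringIO
import pandas as pd

# d.content is bytes; decode it and let pandas parse the CSV text
report_df = pd.read_csv(StringIO(d.content.decode('utf-8')))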
In case it is helpful, I wanted to write out the steps I used to answer this question now (Aug-2018), based on Obol's comment. For reference, I followed the README instructions at https://github.com/cghall/force-retrieve/blob/master/README.md for the salesforce_reporting package.
To connect to Salesforce:
import pandas as pd
from salesforce_reporting import Connection, ReportParser

sf = Connection(username='your_username', password='your_password', security_token='your_token')
Then, to get the report I wanted into a Pandas DataFrame:
report = sf.get_report(your_reports_id)
parser = ReportParser(report)
report = parser.records_dict()
report = pd.DataFrame(report)
If you were so inclined, you could also simplify the four lines above into one, like so:
report = pd.DataFrame(ReportParser(sf.get_report(your_reports_id)).records_dict())
One difference I ran into from the README is that sf.get_report('report_id', includeDetails=True) threw an error stating get_report() got an unexpected keyword argument 'includeDetails'. Simply removing it seemed to result in the code working fine.
report can now be exported via report.to_csv('report.csv',index=False), or manipulated directly.
EDIT: parser.records() changed to parser.records_dict(), as this allows the DataFrame to have the columns already listed, rather than indexing them numerically.
The code below is rather long and might be specific to our use case, but the basic idea is the following:
Find out the date interval length and the additional filtering needed so you never run into the "more than 2,000 rows" limit. In my case I could use a weekly date-range filter but needed to apply some additional filters.
Then run it like this:
report_id = '00O4…'
sf = SalesforceReport(user, password, token, report_id)

it = sf.iterate_over_dates_and_filters(datetime.date(2020, 2, 1),
                                       'Invoice__c.InvoiceDate__c',
                                       'Opportunity.CustomField__c',
                                       [('a', 'startswith'), ('b', 'startswith'), …])

for row in it:
    # do something with the dict
The iterator goes through every week since 2020-02-01 (if you need daily or monthly iteration you'd need to change the code, but the change should be minimal) and applies the filter CustomField__c.startswith('a'), then CustomField__c.startswith('b'), …; it acts as a generator, so you don't need to handle the filter cycling yourself.
The iterator throws an Exception if a query returns more than 2000 rows, just to be sure that the data is not incomplete.
One warning here: SF has a limit of at most 500 queries per hour. If you have one year of 52 weeks and 10 additional filters, that is already 520 queries, so you'd run into that limit.
Here's the class (it relies on simple_salesforce):
import simple_salesforce
import json
import datetime

"""
helper class to iterate over salesforce report data
while manoeuvring around the 2000-row max limit
"""
class SalesforceReport(simple_salesforce.Salesforce):
    def __init__(self, username, password, security_token, report_id):
        super(SalesforceReport, self).__init__(username=username, password=password, security_token=security_token)
        self.report_id = report_id
        self._fetch_describe()

    def _fetch_describe(self):
        url = f'{self.base_url}analytics/reports/{self.report_id}/describe'
        result = self._call_salesforce('GET', url)
        self.filters = dict(result.json()['reportMetadata'])

    def apply_report_filter(self, column, operator, value, replace=True):
        """
        adds/replaces a filter, example:
        apply_report_filter('Opportunity.InsertionId__c', 'startsWith', 'hbob').
        For date filters use apply_standard_date_filter.

        column: needs to correspond to a column in your report, AND the report
                needs to have this filter configured (so in the UI the filter
                can be applied)
        operator: equals, notEqual, lessThan, greaterThan, lessOrEqual,
                  greaterOrEqual, contains, notContain, startsWith, includes
                  see https://sforce.co/2Tb5SrS for up to date list
        value: value as a string
        replace: if set to True, then if there's already a restriction on column
                 this restriction will be replaced, otherwise it's added additionally
        """
        filters = self.filters['reportFilters']
        if replace:
            filters = [f for f in filters if not f['column'] == column]
        filters.append(dict(
            column=column,
            isRunPageEditable=True,
            operator=operator,
            value=value))
        self.filters['reportFilters'] = filters

    def apply_standard_date_filter(self, column, startDate, endDate):
        """
        replaces the date filter. The date filter needs to be available as a filter in the
        UI already.
        Example: apply_standard_date_filter('Invoice__c.InvoiceDate__c', d_from, d_to)

        column: needs to correspond to a column in your report
        startDate, endDate: instances of datetime.date
        """
        self.filters['standardDateFilter'] = dict(
            column=column,
            durationValue='CUSTOM',
            startDate=startDate.strftime('%Y-%m-%d'),
            endDate=endDate.strftime('%Y-%m-%d')
        )

    def query_report(self):
        """
        returns a generator which yields one report row as a dict at a time
        """
        url = self.base_url + "analytics/reports/query"
        result = self._call_salesforce('POST', url, data=json.dumps(dict(reportMetadata=self.filters)))
        r = result.json()
        columns = r['reportMetadata']['detailColumns']
        if not r['allData']:
            raise Exception('got more than 2000 rows! Quitting as data would be incomplete')
        for row in r['factMap']['T!T']['rows']:
            values = []
            for c in row['dataCells']:
                t = type(c['value'])
                if t == str or t == type(None) or t == int:
                    values.append(c['value'])
                elif t == dict and 'amount' in c['value']:
                    values.append(c['value']['amount'])
                else:
                    print(f"don't know how to handle {c}")
                    values.append(c['value'])
            yield dict(zip(columns, values))

    def iterate_over_dates_and_filters(self, startDate, date_column, filter_column, filter_tuples):
        """
        returns a generator which iterates over every week and applies the filters
        for each column
        """
        date_runner = startDate
        while True:
            print(date_runner)
            self.apply_standard_date_filter(date_column, date_runner, date_runner + datetime.timedelta(days=6))
            for val, op in filter_tuples:
                print(val)
                self.apply_report_filter(filter_column, op, val)
                for row in self.query_report():
                    yield row
            date_runner += datetime.timedelta(days=7)
            if date_runner > datetime.date.today():
                break
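Not part of the original answer, but if you'd rather collect the yielded rows into a pandas DataFrame than process them dict by dict, a minimal sketch (reusing the report and filter names from the usage example above) could be:

import pandas as pd

# materialise the generator and build a DataFrame from the row dicts
rows = list(sf.iterate_over_dates_and_filters(datetime.date(2020, 2, 1),
                                              'Invoice__c.InvoiceDate__c',
                                              'Opportunity.CustomField__c',
                                              [('a', 'startswith')]))
df = pd.DataFrame(rows)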
For anyone just trying to download a report into a DataFrame this is how you do it (I added some notes and links for clarifications):
import pandas as pd
import csv
import requests
from io import StringIO
from simple_salesforce import Salesforce
# Input Salesforce credentials:
sf = Salesforce(
    username='johndoe@mail.com',
    password='<password>',
    security_token='<security_token>')  # See below for help with finding the token
# Basic report URL structure:
orgParams = 'https://<INSERT_YOUR_COMPANY_NAME_HERE>.my.salesforce.com/' # you can see this in your Salesforce URL
exportParams = '?isdtp=p1&export=1&enc=UTF-8&xf=csv'
# Downloading the report:
reportId = 'reportId' # You find this in the URL of the report in question between "Report/" and "/view"
reportUrl = orgParams + reportId + exportParams
reportReq = requests.get(reportUrl, headers=sf.headers, cookies={'sid': sf.session_id})
reportData = reportReq.content.decode('utf-8')
reportDf = pd.read_csv(StringIO(reportData))
You can get your token by following the instructions at the bottom of this page
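If the CSV/Excel file from the original question is the end goal rather than the DataFrame itself, pandas can write it out directly (the file names are just placeholders; to_excel additionally needs openpyxl or xlsxwriter installed):

# write the report to disk as CSV and Excel
reportDf.to_csv('report.csv', index=False)
reportDf.to_excel('report.xlsx', index=False)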