How can I calculate irradiance POA data for a single axis tracking PV system? - pvlib

I'd like to use the pvlib library to calculate POA irradiance data for a single axis tracker system.
From the documentation it appears that this is possible: create a pvlib.tracking.SingleAxisTracker instance (with the appropriate metadata) and then call its get_irradiance method.
I've done so like this:
HSAT = SingleAxisTracker(axis_tilt=0,
                         axis_azimuth=167.5,
                         max_angle=50,
                         backtrack=True,
                         gcr=0.387)
I then call the get_irradiance method on the HSAT instance I just created, expecting it to use the metadata I entered to calculate POA data for this horizontal single axis tracker (HSAT) system:
hsat_poa = HSAT.get_irradiance(surface_tilt=0,
                               surface_azimuth=167.5,
                               solar_zenith=sz,
                               solar_azimuth=sa,
                               dni=dni,
                               ghi=ghi,
                               dhi=dhi,
                               airmass=None,
                               model='haydavies')
When I go to plot hsat_poa, however, I get what looks like POA data for a fixed tilt system.
When I looked at the source code, I noticed that the SingleAxisTracker.get_irradiance method ultimately calls the location.total_irrad() method, which only returns POA data for a fixed tilt system.
Do I need to provide my own surface_tilt data for the HSAT system? I had assumed that pvlib models an HSAT system and would generate the surface_tilt values for me, based on the arguments provided when the SingleAxisTracker class is instantiated. But it appears that's not what happens.
So my question is: does pvlib require the tracker angle as an input in order to calculate POA data for single axis tracker systems, or can it model the tracker angle itself, based on metadata like axis_tilt, max_angle, and backtrack?

Turns out pvlib.tracking.singleaxis() is the missing link.
This will determine the rotation angle of a single axis tracker system.
tracker_data = pvlib.tracking.singleaxis(solar_position['apparent_zenith'],
                                         solar_position['azimuth'],
                                         axis_tilt=MOUNTING_TILT,
                                         axis_azimuth=MOUNTING_AZIMUTH,
                                         max_angle=MAX_ANGLE,
                                         backtrack=True,
                                         gcr=MOUNTING_GCR)
and then using tracker_data like so:
hsat_poa_model_tracker = HSAT.get_irradiance(surface_tilt=tracker_data['surface_tilt'],
                                             surface_azimuth=tracker_data['surface_azimuth'],
                                             solar_zenith=solar_position['apparent_zenith'],
                                             solar_azimuth=solar_position['azimuth'],
                                             dni=dni,
                                             ghi=ghi,
                                             dhi=dhi,
                                             airmass=None,
                                             model='haydavies')
will calculate POA data for a single axis tracker.
Found the answer in this jupyter notebook:
http://nbviewer.jupyter.org/github/pvlib/pvlib-python/blob/master/docs/tutorials/tracking.ipynb
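For reference, the snippets above assume that solar_position, dni, ghi, and dhi already exist. A minimal sketch of how they might be derived with pvlib (the site coordinates are made up, and clear-sky values stand in for measured irradiance):

import pandas as pd
import pvlib

# Hypothetical site; replace with your own coordinates and timezone.
location = pvlib.location.Location(latitude=35.0, longitude=-106.0, tz='Etc/GMT+7')
times = pd.date_range('2019-06-01', '2019-06-02', freq='5min', tz=location.tz)

# Solar position provides the 'apparent_zenith' and 'azimuth' columns used above.
solar_position = location.get_solarposition(times)

# Clear-sky irradiance as a stand-in for measured dni/ghi/dhi.
clearsky = location.get_clearsky(times)
dni, ghi, dhi = clearsky['dni'], clearsky['ghi'], clearsky['dhi']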

Can it model the tracker angle itself, based on metadata like
axis_tilt, max_angle, and backtrack?
pvlib's ModelChain will do this. See the PV Power Forecast documentation for an example of using a ModelChain with a SingleAxisTracker.
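A minimal ModelChain sketch, assuming a pvlib version that still ships SingleAxisTracker (roughly 0.7 to 0.9) and using made-up PVWatts-style module and inverter parameters just so the chain can run end to end:

import pandas as pd
from pvlib.location import Location
from pvlib.modelchain import ModelChain
from pvlib.tracking import SingleAxisTracker

# Hypothetical site and times.
location = Location(latitude=35.0, longitude=-106.0, tz='Etc/GMT+7')
times = pd.date_range('2019-06-01', '2019-06-02', freq='5min', tz=location.tz)

# Made-up PVWatts-style parameters so ModelChain can infer its sub-models.
system = SingleAxisTracker(axis_tilt=0, axis_azimuth=167.5,
                           max_angle=50, backtrack=True, gcr=0.387,
                           module_parameters={'pdc0': 240, 'gamma_pdc': -0.004},
                           inverter_parameters={'pdc0': 240},
                           temperature_model_parameters={'a': -3.56, 'b': -0.075, 'deltaT': 3})

# Clear-sky weather as a stand-in for measured ghi/dni/dhi.
weather = location.get_clearsky(times)

mc = ModelChain(system, location, aoi_model='no_loss', spectral_model='no_loss')
mc.run_model(weather)   # pvlib >= 0.8 signature; older releases also took a times argument

# The tracker rotation and POA irradiance are computed internally:
poa = mc.total_irrad    # mc.results.total_irrad in pvlib >= 0.9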

Related

How does PostGIS treat coordinates sent with different SRIDs?

I am running a django application and I am using the PostGis extension for my db. I am trying to understand better what happens under the hood when I send coordinates, especially because I am working with different coordinate systems which translate to different SRIDs. My question is threefold:
Does Django/PostGIS handle the transformation when creating a Point or Polygon in the DB?
Can I query it back using a different SRID?
Is it advisable to use the default SRID=4326?
Let's say I have a model like this (note I am setting the standard SRID=4326):
class MyModel(models.Model):
    name = models.CharField(
        max_length=120,
    )
    point = models.PointField(
        srid=4326,
    )
    polygon = models.PolygonField(
        srid=4326,
    )
Now I am sending different coordinates and polygons with different SRIDs.
I am reading here in the django docs that:
Moreover, if the GEOSGeometry is in a different coordinate system (has a different SRID value) than that of the field, then it will be implicitly transformed into the SRID of the model’s field, using the spatial database’s transform procedure
So if I understand this correctly, this mean that when I am sending an API request like this:
data = {
    "name": "name",
    "point": "SRID=2345;POINT (12.223242267 280.123144553)",
    "polygon": "SRID=5432;POLYGON ((133.2345662 214.1429138285, 123.324244572 173.755820912250072))"
}
response = requests.request("post", url=url, data=data)
Both - the polygon and the point - will correctly be transformed into SRID=4326??
EDIT:
When I send a point with SRID=25832;POINT (11.061859 49.460983) I get SRID=4326;POINT (11.061859 49.460983) back from the DB. When I send a polygon with SRID=25832;POLYGON ((123.2796155732267 284.1831980485285, 127.9249715130572 273.7782091450072, 142.2351651215613 280.3825718937042, 137.558146278483 290.279508688337, 123.2796155732267 284.1831980485285)) I get the polygon SRID=4326;POLYGON ((4.512360573651161 0.002563158966576373, 4.512402191765552 0.002469312460126783, 4.512530396754145 0.002528880231016955, 4.512488494972807 0.00261814442892858, 4.512360573651161 0.002563158966576373)) back from the DB.
Can I query it back using a different SRID
Unfortunately I haven't found a way to query the same points back to their original SRID. Is this even possible?
And lastly, I am working mostly with coordinates in Europe, but I might also sporadically have to include coordinates from all over the world. Is SRID=4326 a good standard to use?
Thanks a lot for all the help in advance. Really appreciated.
Transforming the SRS of a geometry is much more than just changing its SRID. So, if after a transformation the coordinates come back with exactly the same values, there was most probably no transformation at all.
This example uses ST_Transform to transform a geometry from 25832 to 4326. See the results yourself:
WITH j (geom) AS (
VALUES('SRID=25832;POINT (11.061 49.463)'::geometry))
SELECT ST_AsEWKT(geom),ST_AsEWKT(ST_Transform(geom,4326)) FROM j;
            st_asewkt            |                       st_asewkt
---------------------------------+------------------------------------------------------
 SRID=25832;POINT(11.061 49.463) | SRID=4326;POINT(4.511355210946569 0.000446125446657)
(1 row)
The Polygon transformation in your question is btw correct.
Make sure that Django is really storing the values you mentioned. Send a 25832 geometry and directly check the SRS in the database. If you're only checking through Django, it might be transforming the coordinates back again on read, which might explain why you don't see any difference.
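A hedged way to do that check from Django itself, bypassing the ORM's geometry handling (the table name below is a guess; look up the real one via MyModel._meta.db_table):

from django.db import connection

# Inspect what PostGIS actually stored for the point column.
with connection.cursor() as cursor:
    cursor.execute("SELECT ST_SRID(point), ST_AsEWKT(point) FROM myapp_mymodel LIMIT 5")
    for srid, ewkt in cursor.fetchall():
        print(srid, ewkt)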
To your question:
Is SRID=4326 a good standard to use?
WGS84 is the most used SRS worldwide, so I'd tend to say yes, but it all depends on your use case. If you're uncertain which SRS to use, it might indicate that your use case does not impose any constraint on it. So, stick to WGS84, but keep in mind not to mix different SRS in your application. By the way: if you try to store geometries with different SRS in the same table, PostgreSQL will raise an exception ;)
Further reading: ST_AsEWKT, WGS84
First of all, I'm not a big expert at GIS (I have created just a few small things with Django and GIS), but...
There is documentation about GeoDjango's automatic spatial transformations: https://docs.djangoproject.com/en/3.1/ref/contrib/gis/tutorial/#automatic-spatial-transformations . According to it:
When doing spatial queries, GeoDjango automatically transforms geometries if they’re in a different coordinate system. ...
Try in console (./manage.py shell):
from <yourapp>.models import MyModel
obj1 = MyModel.objects.all().first()
print(obj1)
print(obj1.point)
print(dir(obj1.point))
print(obj1.point.srid)
--edit--
You can manually test converting between SRIDs similarly to this page: https://gis.stackexchange.com/questions/94640/geodjango-transform-not-working
obj1.point.transform(<new-srid>)
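As for querying the geometries back in their original SRID, one option (a sketch, assuming Django >= 1.9) is the Transform database function, which re-projects on the database side:

from django.contrib.gis.db.models.functions import Transform

# Annotate each object with its point re-projected to EPSG:25832.
qs = MyModel.objects.annotate(point_25832=Transform('point', 25832))
for obj in qs:
    print(obj.point_25832.ewkt)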

bokeh - plotting shapefile map using datashader

Initially, I created an interactive map of the UK postcode areas where each individual area is colored based on its value (e.g. the population in that postcode area), as follows.
from bokeh.plotting import figure
from bokeh.palettes import Viridis256 as palette
from bokeh.models import LinearColorMapper
from bokeh.models import ColumnDataSource
import geopandas as gpd
import pandas as pd
shp = 'file_path_to_the_downloaded_shapefile'
#read shape file into dataframe using geopandas
df = gpd.read_file(shp)
def expandMultiPolygons(row, geometry):
    if row[geometry].type == 'MultiPolygon':
        row[geometry] = [p for p in row[geometry]]
    return row
#Some rows were in MultiPolygons instead of Polygons.
#Expand MultiPolygons to multi rows of Polygons
df = df.apply(expandMultiPolygons, geometry='geometry', axis=1)
df = df.set_index('Area')['geometry'].apply(pd.Series).stack().reset_index()
#Visualize the polygons. To visualize different colors for different post areas, I added another column called 'value' which has some random integer value.
p = figure()
color_mapper = LinearColorMapper(palette=palette)
source = ColumnDataSource(df)
p.patches('x', 'y', source=source,
          fill_color={'field': 'value', 'transform': color_mapper},
          fill_alpha=1.0, line_color="black", line_width=0.05)
where df is a dataframe with four columns: post code area, x-coordinate, y-coordinate, and value (i.e. population).
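For completeness, a sketch of one way such x/y columns could be built from the shapely polygons (the column names here are assumptions, not taken from the question):

# Extract the exterior ring coordinates of each polygon into plain lists,
# which is the format bokeh's patches() expects for 'x' and 'y'.
df['x'] = df['geometry'].apply(lambda poly: list(poly.exterior.coords.xy[0]))
df['y'] = df['geometry'].apply(lambda poly: list(poly.exterior.coords.xy[1]))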
The bokeh patches code creates an interactive map in a web browser, which is great, but I noticed the interactivity is not very smooth. If I zoom in or pan the map, it renders slowly. The dataframe has only 1106 rows, so I'm quite confused about why it is so slow.
As one possible solution, I came across datashader (https://datashader.readthedocs.io/en/latest/), but I find the example scripts quite complicated, and most of them use the HoloViews package in a Jupyter notebook, whereas I want to create a dashboard using bokeh.
Can anyone advise me on incorporating datashader into the above bokeh script? Do I need a different function within datashader to create the shape map instead of using bokeh's patches function?
Any suggestion would be highly appreciated!!!
Without the data file involved, I can't answer your question directly, but can offer some observations:
Datashader is unlikely to be of value for this purpose, because datashader does not currently have any support for rendering polygons. As a rule of thumb, Datashader is designed to aggregate your data, and if it's already aggregated, Datashader won't normally be of help. Here your data is aggregated by postcode, which datashader can't process, but if you had the original data per person it would be happy to render it.
If you prefer working with Bokeh directly rather than via the higher-level HoloViews/GeoViews interface, I'd recommend following Matt Rocklin's work on accelerating geopandas; his approach should be very fast for your purpose.
All that said, HoloViews and GeoViews should be a convenient way to work with Bokeh in general, whether or not you want to create a dashboard. E.g. the 2017 JupyterCon tutorial shows how to make a simple Bokeh dashboard using both libraries. It doesn't cover shapefiles, but those are covered in other GeoViews examples.
As mentioned in my comment, I believe that the complexity of your polygons might be causing your problem. The file you linked to contains several shapefiles of different sizes and complexities. You can simplify those, i.e. reduce the number of points in each polygon. This can change how they look: the effect ranges from almost no visible difference, over a bit more "edginess", to an outright angular appearance, depending on the level of simplification you choose.
I know of three easy options to get this done:
GUI: Try QGIS. It is a great open-source tool for geospatial data processing. Load your shapefile as a new layer, then use the "Simplify Geometries" tool under the Vector menu.
Command line: GDAL is an open-source library that comes with a useful command-line tool. You can use it like this: ogr2ogr outfile.shp infile.shp -simplify 0.000001
Online: Visit mapshaper. Import your file, select simplify, and choose your level. Then export the result. What I really like here is that your file is rendered instantly, so you can immediately see the result of your simplification.
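Since the question already loads the shapefile with geopandas, a scripted variant of the same idea might look like this (a sketch; the tolerance value is arbitrary and is in the units of the shapefile's CRS):

import geopandas as gpd

gdf = gpd.read_file('infile.shp')
# Reduce the number of vertices per polygon; tweak the tolerance to taste.
gdf['geometry'] = gdf['geometry'].simplify(tolerance=0.0001, preserve_topology=True)
gdf.to_file('outfile.shp')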
Other than that, you should also update your bokeh version. It gets updated regularly and there have been some performance improvements since.
Using HoloViews or GeoViews will not positively affect your performance, so it is not related to your issue. I guess @James A. Bednar was just giving some side advice there.
I found a way to speed up the interactive visualization of the UK map as I move the slider.
I first created an individual image (in 2D) for each value of the slider and then updated the map using those 2D images instead of bokeh's patches function.
Since the images are in array format, it is much faster to update the image while changing the slider value. One downside of this method is that I can no longer use the hover function on the UK map.
I referred to the following url to convert polygon information into arrays: https://gist.github.com/brendancol/db030013e981c46acb2886060dde607e#file-rasterio_datashader_polygons-py-L35

How to use TimeSeriesForecasting in KnowledgeFlow?

The Weka Explorer provides a Time Series Forecasting perspective and it is easy to use.
However, what should I do if I want to use the KnowledgeFlow for time series forecasting?
And what if I want to save the original dataset with the predictions?
Solution (thanks to the help from people on the WekaList, especially Mark Hall and Eibe Frank):
Open the KnowledgeFlow and load the dataset with an ArffLoader
go to settings, check the time series forecasting perspective, then right-click the ArffLoader and send it to all perspectives
go to the time series forecasting perspective and set up a model
run the model and copy the model to the clipboard
Ctrl + V, then click to paste the model onto the Data mining process canvas
save the predictions along with the original data with an ArffSaver

Do OpenCV's machine learning algorithms continuously update a model?

Most machine learning algorithms implemented in OpenCV 2.4 are built upon CvStatModel, which comes with a CvStatModel::train method.
There it says:
By default, the input feature vectors are stored as train_data rows, that is, all the components (features) of a training vector are stored continuously.
and
Usually, the previous model state is cleared by CvStatModel::clear() before running the training procedure. However, some algorithms may optionally update the model state with the new training data, instead of resetting it.
How do I know which ML algorithms don't reset the current model state? Since I wanted to use CvGBTrees::train, which has an update parameter declared as being only a dummy parameter, I guess the model is discarded after every training call. Can I take it that if there is no such update parameter, the current model state will always be discarded?
I need a machine learning algorithm which continuously trains one model and doesn't start with an initial model every training call.
Is this doable with the current ML implementations in OpenCV, and if so, with which ones? Furthermore, if not, are there other C++ libraries that can do so?

Geography mode distances in GeoDjango - PostGIS 1.5

I store a PointField field named "coordinates" in a model.
Then, in the command interpreter, I query the nearest instances to a given one and print each name and distance in km.
[(p.name, p.distance.m) for p in Model_Name.objects.filter(
    coordinates__distance_lte=(pnt, 1000000)).distance(pnt)]
The thing is, when the field "coordinates" is of the geometry kind, it works well. But if I include the option "geography=True" to get better precision, it returns a much smaller value, even though I am indicating that it should be printed in km as before.
How can I get correct geography calculations?
Thanks
Appending .distance() to the end of the queryset results in each object in the GeoQuerySet being annotated with a Distance object, and you can use that Distance object to get distances in different units.
Because the distance attribute is a Distance object, you can easily express the value in the units of your choice. For example, city.distance.mi is the distance value in miles and city.distance.km is the distance value in kilometers.
However, in Django 1.9 they moved the goalposts: while appending .distance() still works, the recommended way now is to do
from django.contrib.gis.db.models.functions import Distance
[(p.name, p.distance.m) for p in Model_Name.objects.filter(
    coordinates__distance_lte=(pnt, 1000000)
).annotate(distance=Distance('coordinates', pnt))]
Finally, instead of using coordinates__distance_lte, there is a much faster lookup available in PostGIS and MySQL 5.7: dwithin.
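A sketch of what that lookup might look like (the 1000000 mirrors the original query; for a geography column PostGIS interprets the value in meters, while for a plain geometry column it is in the field's units, i.e. degrees for SRID 4326):

# dwithin can use a spatial index, which is why it is usually much faster
# than filtering on a computed distance.
nearby = Model_Name.objects.filter(coordinates__dwithin=(pnt, 1000000))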