Can't find pvlib.pvsystem.Array - pvlib

I am using pvlib-python to model a series of photovoltaic installations.
I have been running the normal pvlib-python procedural code just fine (as described in the intro tutorial).
I am now trying to extend my model to cope with several arrays of panels facing different directions, etc., but connected to the same inverter. For this I thought the easiest way would be to use pvlib.pvsystem.Array to create a list of Array objects that I can then pass to the pvlib.pvsystem.PVSystem class (as described here).
My issue now is that I can't find pvsystem.Array at all; e.g. I'm just getting:
AttributeError: module 'pvlib.pvsystem' has no attribute 'Array'
when I try to create an instance of Array using:
from pvlib import pvsystem
module_parameters = {'pdc0': 5000, 'gamma_pdc': -0.004}
inverter_parameters = {'pdc0': 10000}  # example values; defined earlier in my script
array_one = pvsystem.Array(module_parameters=module_parameters)
array_two = pvsystem.Array(module_parameters=module_parameters)
system_two_arrays = pvsystem.PVSystem(arrays=[array_one, array_two],
                                      inverter_parameters=inverter_parameters)
as described in the examples in the PVSystem and Arrays page.
I am using pvlib-python=0.8.1, installed in my conda env using conda install -c conda-forge pvlib-python.
I am quite confused about this, since I can obviously see all the documentation on pvsystem.Array on Read the Docs and see the source code on pvlib's GitHub.
When I look at the code in my conda env, pvsystem doesn't have Array (nor does it show up in dir(pvlib.pvsystem)), so something is wrong with the installation, but I simply can't figure out what. I've tried reinstalling pvlib with different methods, but I always get the same issue.
Am I missing something really obvious here?
Kind regards and thank you,

This feature is not present in the current stable version (0.8.1). If you want to use it already, you could download the latest source as a zip file and install it, or clone the pvlib git repository on your computer.
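If you maintain scripts that must run against both release lines, a small version gate avoids the AttributeError; a minimal sketch, assuming (per the answer above) that Array only becomes available from the 0.9 development series onward:

```python
def has_array_support(version):
    """Return True if this pvlib version is expected to provide pvsystem.Array.

    Assumption: Array is absent in the 0.8.x releases and present from 0.9 on.
    """
    major, minor = (int(part) for part in version.split('.')[:2])
    return (major, minor) >= (0, 9)

print(has_array_support('0.8.1'))  # False
print(has_array_support('0.9.0'))  # True
```

You could then branch to a single-array PVSystem call on older installs instead of letting the import-time lookup fail.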

Related

Why doesn't pm4py.view generate any image

For a business process discovery task, I am trying to generate a process model using the pm4py Python library. Here's a sample code:
!pip install pm4py
import pm4py
log = pm4py.read_xes('/content/running-example.xes')
process_model, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
pm4py.view_petri_net(process_model, initial_marking, final_marking, format="svg")
However, I get output as:
parsing log, completed traces :: 100%
6/6 [00:00<00:00, 121.77it/s]
But no image as is expected from the website: https://pm4py.fit.fraunhofer.de/getting-started-page#discovery
Being relatively new to the world of Python, I learnt from other coders' suggestions here on SO to always read the source code in depth in the case of open-source libraries.
Here are the pm4py visualization links:
https://github.com/pm4py/pm4py-core/blob/afee8b0932283b8f8f02dd2b6cc0968a1f1cc723/pm4py/visualization/process_tree/visualizer.py#L69
and specifically for my example:
https://github.com/pm4py/pm4py-core/blob/afee8b0932283b8f8f02dd2b6cc0968a1f1cc723/pm4py/vis.py#L17
But I am not able to figure out how to manipulate it.
Can someone please point out the problem to me and help me generate the views? Also, if anyone has done business process generation before, it would be really helpful if you could suggest any libraries or techniques to analyse event-log data.
To visualize the process models mined in PM4Py, make sure that you have Graphviz installed on your computer.
See https://pm4py.fit.fraunhofer.de/install for more information on this.
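Since pm4py renders Petri nets by calling out to the Graphviz `dot` executable, a quick standard-library check confirms whether it is actually reachable from Python; a minimal sketch:

```python
import shutil

def graphviz_available():
    """Return True if the Graphviz 'dot' executable is found on the PATH."""
    return shutil.which('dot') is not None

if graphviz_available():
    print('Graphviz found; pm4py views should render.')
else:
    print("Graphviz 'dot' not found; install it and make sure it is on your PATH.")
```

In headless or notebook setups you may also prefer writing the image to a file (pm4py provides save_vis_* counterparts of the view_* functions) rather than opening a viewer.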

Why is my Django App failing on Azure with UUID invalid syntax

My Django app runs fine locally on macOS Catalina with Python 3.8.2 and Django 3.0.5. I am deploying it to Azure as a Web App from GitHub, selecting Python 3.8.
I have provisioned the Postgres DB, the Storage Account and the WebApp. The build process is successful.
The WebApp fails at startup with:
File "/antenv/lib/python3.8/site-packages/django/db/models/fields/__init__.py", line 6, in <module>
import uuid
File "/antenv/lib/python3.8/site-packages/uuid.py", line 138
if not 0 <= time_low < 1<<32L:
^
SyntaxError: invalid syntax
I have verified that the uuid package is not within my requirements.txt file.
DB environment variables are set up.
Collectstatic successfully replicated my static data.
The WebApp is running with Docker.
Any help on this greatly appreciated.
EDIT
Rebuilt virtual environment and regenerated requirements.txt file and redeployed. This solved the issue.
Don't leave uuid==1.30 (or any version) in your requirements.txt. If you want to use the uuid library, a simple import uuid is enough because it is already built into Python 3. This solution works for me; I am using Python 3.9.7.
Here is my Azure error log with uuid == 1.30 included in requirements.txt. Hope this helps.
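To double-check that the standard-library module (and not the old PyPI uuid backport) is what gets imported, a quick check from the Python shell:

```python
import uuid

# The stdlib module ships with Python 3; no entry in requirements.txt is needed.
u = uuid.uuid4()
print(isinstance(u, uuid.UUID))  # True
print(u.version)                 # 4
print(uuid.__file__)             # should point into the stdlib, not site-packages
```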
This is python-2.x syntax. In python-2.x there were two types of integer values: int and long. long had an L suffix at the end. An int had a fixed range, long had an arbitrary range: it could represent numbers as long as there was sufficient memory.
One could specify with the L suffix that this was a long, not an int. For example in python-2.x, one can write:
>>> type(1)
<type 'int'>
>>> type(1L)
<type 'long'>
In python-3.x, the two were merged into int, and an int can represent arbitrarily large numbers, so there is no need for such a suffix anymore. You are thus using a library designed for python-2.x with an interpreter that interprets python-3.x.
I would advise not to use this (version of this) library. Take a look whether there is a release for python-3.x, or try to find an alternative. python-2.x has not been supported since January 1, 2020, so it is also not a good idea to keep developing on python-2.x. Furthermore, python-2.x and python-3.x differ in quite a large number of areas; it is not just an "extended" language. The way map and filter work is, for example, different. Therefore you had better not try to "fix" this issue, since likely a new one will pop up, or even worse, one hidden under the radar.
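The merge is easy to demonstrate: in Python 3, a single arbitrary-precision int covers values that Python 2 would have stored as a long with the L suffix:

```python
small = 1
big = 1 << 64  # would have been a 'long' (1L-style literal) in Python 2

print(type(small).__name__)  # int
print(type(big).__name__)    # int
print(big)                   # 18446744073709551616
```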
Rebuilding virtual environment from scratch, regenerating the requirements.txt file and then redeploying resolved the issue.

Is it possible to create a plugin that supports OpenCart 2.3 and 3.0 at the same time?

We have an OpenCart plugin that supports only version 3.0. We have a task to add support for the previous version, OpenCart 2.3. Is there any way to do it in one plugin, or do we need to create a plugin for each version?
Yes, there are ways to do this. I think it's a huge pain to maintain fully, though, and it may cause you a huge support headache. It will require extra files, such as files with code to detect the appropriate version of OC first, and then functions within those files to point to the specific-version folder structures with the appropriate version files. You'll also have to account for the fact that you are making people carry two sets of folder/file structures in their OpenCart directory when they only need the one set appropriate for the version the plugin runs on. As an example, the marketplace and extension folders differ between the two versions you mention. These are some things to consider.
You'd have to set a global variable of some sort somewhere to detect and store the OC version first, something along the lines of:
global $oc_version;
$oc_version = (int)str_replace('.','',VERSION);
Then you would have a whole bunch of functions telling oc what to do with your module depending on the oc version detected, such as specifying the path for where to run the module folder from and to run twig or tpl. Something along the lines of:
if ($data['oc_version'] == 2300) {
    // Do stuff
} elseif ($data['oc_version'] == 3000) {
    // Do other stuff
}
However, the issue you'll encounter with my examples is that if the version someone is using is, let's say, 3.0.2.0 (and not 3.0) and there are no changes that actually affect your module, then detecting the OC version this way won't work. You'd have to change your operators, put in more thought, etc. Hence, you'll have to keep re-modifying parts of the same code with every minor patch/version release. I don't see how going this route saves you any work.
It is theoretically possible using https://www.opencart.com/index.php?route=marketplace/extension/info&extension_id=31589 with small modifications in the controller files, but I prefer converting tpl to twig.

Check sklearn version before loading model using joblib

I've followed this guide to save a machine learning model for later use. The model was dumped in one machine:
from sklearn.externals import joblib
joblib.dump(clf, 'model.pkl')
And when I loaded it joblib.load('model.pkl') in another machine, I got this warning:
UserWarning: Trying to unpickle estimator DecisionTreeClassifier from
version pre-0.18 when using version 0.18.1. This might lead to
breaking code or invalid results. Use at your own risk.
So is there any way to know the sklearn version of the saved model to compare it with the current version?
Versioning of pickled estimators was added in scikit-learn 0.18. Starting from v0.18, you can get the version of scikit-learn used to create the estimator with:
estimator.__getstate__()['_sklearn_version']
The warning you get is produced by the __setstate__ method of the estimator, which is automatically called upon unpickling. It doesn't look like there is a straightforward way of getting this version without loading the estimator from disk. You can filter out the warning with:
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=UserWarning)
    estimator = joblib.load('model.pkl')
For pre-0.18 versions there is no such mechanism, but I imagine you could, for instance, use not hasattr(estimator, '__getstate__') as a test to detect, at least, pre-0.18 versions.
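Putting both pieces together, here is a small end-to-end sketch (assuming scikit-learn ≥ 0.18, where the pickled state carries the '_sklearn_version' key, and using the standalone joblib package rather than the since-removed sklearn.externals shim):

```python
import joblib
import sklearn
from sklearn.tree import DecisionTreeClassifier

# Train and dump a toy estimator.
clf = DecisionTreeClassifier().fit([[0], [1]], [0, 1])
joblib.dump(clf, 'model.pkl')

# Reload it and read back the scikit-learn version recorded at pickling time.
estimator = joblib.load('model.pkl')
saved_version = estimator.__getstate__().get('_sklearn_version', 'pre-0.18')
print(saved_version == sklearn.__version__)  # True when dumped and loaded with the same install
```

Comparing saved_version against sklearn.__version__ before trusting predictions gives you the check the warning is hinting at.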
I had the same problem; just re-train on your datasets and save the 'model.pkl' file again with joblib.dump. This resolved it. Good luck!

Load partial shapefile into Postgis / GeoDjango project

I have a shapefile with Canadian postal codes, but am only looking to load a small subset of the data. I can load the entire data file and use SQL or Django queries to prune the data, but the load process takes about 2 hours on the slower machines I'm using.
As the data I'm actually after is about 10% of the dataset, this isn't a very efficient process.
I'm following the instructions in the Geodjango tutorial, specifically the following code:
from django.contrib.gis.utils import LayerMapping
from geoapp.models import TestGeo
mapping = {'name': 'str',     # The 'name' model field maps to the 'str' layer field.
           'poly': 'POLYGON', # For geometry fields use the OGC name.
          }                   # The mapping is a dictionary.
lm = LayerMapping(TestGeo, 'test_poly.shp', mapping)
lm.save(verbose=True)  # Save the layermap; imports the data.
Is there a way to only import data with a particular name, as in the example above?
I'm limited to the Linux / OS X command line, so wouldn't be able to utilize any GUI tools.
Thanks for everyone here and on Postgis for their help, particularly ThomasG77 for this answer.
The following line did the trick:
ogr2ogr PostalCodes.shp CANmep.shp -sql "select * from CANmep where substr(postalcode,1,3) in ('M1C', 'M1R')"
ogr2ogr comes with GDAL. brew install gdal will install GDAL on OS X. If you're on another *nix system, the following installs it from source:
$ wget http://download.osgeo.org/gdal/gdal-1.9.2.tar.gz
$ tar xzf gdal-1.9.2.tar.gz
$ cd gdal-1.9.2
$ ./configure
$ make
$ sudo make install
If the postal codes you need won't change for some time, try creating a shapefile of the selected postal codes with QGIS. If you're not familiar with QGIS, it's worth looking into. I use it to prepare files for the web app before uploading: CRS conversion, editing, attribute-table work, and perhaps simplifying the geometry.
There are plenty of tutorials and great help at gis.stackexchange.
If you haven't done so already, take this question to gis.stackexchange.
Hope this helps get you started, and feel free to ask for more info. I was new to django/geodjango not long ago and appreciated all of the help I received. Django is not for the faint of heart.
Michael