Snowflake connector issue with Python on AWS - snowflake-connector

Getting this error from the Python Snowflake connector on AWS:
ImportError: cannot import name 'NamedTuple' from 'typing_extensions'

From Snowflake support: the issue appears to have popped up yesterday, and running
pip install 'typing-extensions>=4.3.0'
should fix it.
This resolved the issue.
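As a quick sanity check after upgrading, here is a minimal sketch (assuming Python 3.8+ for importlib.metadata) that confirms the installed version and that the failing import now resolves:

import importlib.metadata
from typing_extensions import NamedTuple  # the import that was failing

# Should print 4.3.0 or later once the upgrade has taken effect.
print(importlib.metadata.version("typing_extensions"))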


How to upgrade SQLite in AWS MWAA Airflow?

I am using the yfinance library to get some values like market_cap.
Code:
import yfinance as yf
com = yf.Ticker('1140.SR')
print(com.fast_info['market_cap'])
I've updated the library to the latest version (0.2.9) both locally and on AWS MWAA (Apache Airflow).
Locally, I'm able to run the code.
But on Amazon MWAA Airflow, I'm getting the error 'near "without": syntax error'.
Based on my searching, I believe the issue is caused by an older version of SQLite. One person was able to resolve it by upgrading their SQLite version - https://github.com/ranaroussi/yfinance/issues/1372
Locally, here is my configuration:
Python version - 3.10.8
yfinance - 0.2.9
SQLite3 - 3.37.2
and my Amazon MWAA Airflow configuration is:
Python version - 3.10.8 (main, Jan 17 2023, 22:57:31) [GCC 7.3.1 20180712 (Red Hat 7.3.1-15)]
yfinance - 0.2.9
SQLite3 - 3.7.17
SQLite is not a Python library; it's a system-level library that needs to be upgraded manually.
I'm able to do this locally, but how do I upgrade SQLite to a version above 3.34.x on Amazon MWAA?
Can anyone help me?
I tried upgrading the SQLite version via the requirements file, but it didn't work:
apache-airflow-providers-sqlite==3.3.1
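Note that apache-airflow-providers-sqlite only changes the Airflow provider package, not the SQLite library that Python itself links against. A minimal sketch (run from a throwaway task or a log statement) to confirm which SQLite library the MWAA worker is actually using:

import sqlite3

# Version of the underlying SQLite C library that Python is linked against;
# this is what needs to be newer than 3.34.x, independent of any pip packages.
print(sqlite3.sqlite_version)

# Version of Python's sqlite3 module itself (not the C library).
print(sqlite3.version)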

PySpark ModuleNotFoundError on GCP

I'm trying to run a PySpark Streaming program on GCP Dataproc. I already ran pip install mmh3 over SSH; launching pyspark and typing import mmh3 causes no problem. But when I start sc.start() and send data over from another SSH terminal, it says the module is not found. Any idea why this happens or how to fix it? Thanks.
By installing the package via SSH, you're only installing it on the "driver" node. You'll need to install the package across the whole cluster (i.e. on all worker nodes) as well. Try following the documentation.
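To make the driver/worker distinction concrete, here is a small sketch (the app name and probe function are made up) that forces the import to happen on the executors rather than on the driver, so a missing package on any worker shows up immediately:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mmh3-import-check").getOrCreate()
sc = spark.sparkContext

def probe(_):
    # Runs on an executor; raises ImportError if mmh3 is missing on that worker.
    import mmh3
    return mmh3.hash("hello")

# Importing mmh3 on the driver may succeed while this still fails on workers.
print(sc.parallelize(range(8)).map(probe).collect())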

Django deployment error django.core.exceptions.ImproperlyConfigured

Hey, I have a Django application which works fine locally, but it's not working when hosted on the web; it shows the error below:
django.core.exceptions.ImproperlyConfigured: Error loading pyodbc module: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/site/wwwroot/antenv/lib/python3.7/site-packages/pyodbc.cpython-37m-x86_64-linux-gnu.so)
Did I miss anything at the time of hosting?
Assuming you got this issue during deployment via an Azure DevOps pipeline, you could specify an exact version of Python (including the minor version) in the UsePythonVersion task.
For supported Python versions, you can check the software list of the agent image:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#software
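For illustration, pinning the exact interpreter with the UsePythonVersion task might look roughly like this in the pipeline YAML (a minimal sketch; the surrounding steps are omitted and assumed):

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.7'   # pin the exact major.minor version the pyodbc wheel was built for
      addToPath: true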
Also, you could try the solution in the following case: add the deadsnakes repo, install Python 3.7, and symlink python to python3.7:
https://github.com/actions/virtual-environments/issues/2634#issuecomment-775808754

Using google-cloud-tasks with Python 2.7 on App Engine

I'm working on migrating a Python 2.7/1st Gen/GAE app to Python 3/2nd Gen/GAE.
My current step is replacing google.appengine.ext.deferred with the Python Client for Cloud Tasks API.
Here is where I'm at:
Still using Python 2.7
Latest updates with gcloud components update
Following the Python Client docs, I added google-cloud-tasks==1.5.0 to my requirements.txt
google-cloud-tasks needs grpcio, so I added it to the libraries section of app.yaml (see the snippet after this list)
Now dev_appserver.py is giving me the error ImportError: No module named enum
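For reference, the app.yaml addition described above looks like this (the rest of the file is omitted):

libraries:
- name: grpcio
  version: latest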
I'm not finding much documentation online so I'm wondering... Is it possible to use google-cloud-tasks with Python 2.7 on app engine?
If so, how do I fix the last error above?

Error with H2O - Python init(): Server Error

This is a total newbie question, to check whether I am missing something key (like whether there is more to install).
After installing H2O (Python 2.7) on a 9-node Hadoop/Spark cluster
using pip install of the wheel file (h2o-3.10.4.8-py2.py3-none-any.whl), which says it installed correctly...
I can import h2o successfully.
But when I run h2o.init(), I get:
"Checking whether there is an H2O instance running at http://localhost:54321. connected."
But then an error is thrown:
H2oServerError: HTTP 500 Server Error: u'Error: 500'
Should I be able to run H2O by simply pip installing that wheel, or is there more to it? The documentation seems outdated and there are lots of different versions online. Does anyone have any experience with this?
Most likely you have already solved this problem, but maybe someone else will benefit. Use the instructions in the "Install in Python" tab on the following website: http://h2o-release.s3.amazonaws.com/h2o/rel-tutte/2/index.html.
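If the H2O cluster is actually running on the Hadoop/Spark nodes rather than on the local machine, you can also point the client at it explicitly. A minimal sketch; the host and port below are placeholders, not values from the original post:

import h2o

# Placeholder address: point these at the node where the H2O cluster is running.
h2o.init(ip="localhost", port=54321)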