How to put items into a DynamoDB table using Python? - amazon-web-services

I'm trying to put an item into an Amazon DynamoDB table using a Python script, but when I run the script I get the following error:
Traceback (most recent call last):
File "./table.py", line 32, in <module>
item.put(None, None)
File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb/item.py", line 183, in put
return self.table.layer2.put_item(self, expected_value, return_values)
File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb/layer2.py", line 551, in put_item
object_hook=self.dynamizer.decode)
File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb/layer1.py", line 384, in put_item
object_hook=object_hook)
File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb/layer1.py", line 119, in make_request
retry_handler=self._retry_handler)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 954, in _mexe
status = retry_handler(response, i, next_sleep)
File "/usr/local/lib/python2.7/dist-packages/boto/dynamodb/layer1.py", line 159, in _retry_handler
data)
boto.exception.DynamoDBResponseError: DynamoDBResponseError: 400 Bad Request
{u'message': u'Requested resource not found', u'__type': u'com.amazonaws.dynamodb.v20111205#ResourceNotFoundException'}
My code is:
#!/usr/bin/python
import boto
import boto.s3
import sys
from boto import dynamodb2
from boto.dynamodb2.table import Table
from boto.s3.key import Key
import boto.dynamodb

conn = boto.dynamodb.connect_to_region('us-west-2', aws_access_key_id=<My_access_key>, aws_secret_access_key=<my_secret_key>)
entity = conn.create_schema(hash_key_name='RPI_ID', hash_key_proto_value=str, range_key_name='PIC_ID', range_key_proto_value=str)
table = conn.create_table(name='tblSensor', schema=entity, read_units=10, write_units=10)
item_data = {
    'Pic_id': 'P100',
    'RId': 'R100',
    'Temperature': '28.50'
}
item = table.new_item(
    # Our hash key is 'RPI_ID'
    hash_key='RPI_ID',
    # Our range key is 'PIC_ID'
    range_key='PIC_ID',
    # The remaining item attributes
    attrs=item_data
)
item.put()  # I get the error here.
My reference is: Setting/Getting/Deleting CORS Configuration on a Bucket

I ran your code in my account and it worked perfectly, returning:
{u'ConsumedCapacityUnits': 1.0}
You might want to check that you are using the latest version of boto:
pip install boto --upgrade

I searched on Google and solved the problem: I set the correct date and time on my Raspberry Pi board, ran the program again, and now it works fine.
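For reference, here is a minimal sketch of the same put using the boto.dynamodb2 layer that the script already imports (an illustration, not the accepted fix; it assumes the table tblSensor already exists with RPI_ID as the hash key and PIC_ID as the range key, and the placeholder credentials are yours to substitute):
from boto import dynamodb2
from boto.dynamodb2.table import Table

# Connect with the newer boto layer (assumption: same region and credentials as above).
conn = dynamodb2.connect_to_region('us-west-2',
                                   aws_access_key_id='<My_access_key>',
                                   aws_secret_access_key='<my_secret_key>')

# Look up the existing table instead of re-creating it on every run.
sensors = Table('tblSensor', connection=conn)

# put_item takes the full item, including the hash and range key attributes.
sensors.put_item(data={
    'RPI_ID': 'R100',
    'PIC_ID': 'P100',
    'Temperature': '28.50',
})
Looking the table up rather than calling create_table each run should also avoid errors once the table already exists.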

Related

Airflow 2: Job Not Found when transferring data from BigQuery into Cloud Storage

I am trying to migrate from Cloud Composer 1 to Cloud Composer 2 (from Airflow 1.10.15 to Airflow 2.2.5), and when attempting to load data from BigQuery into GCS using the BigQueryToGCSOperator:
from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator
# ...
BigQueryToGCSOperator(
    task_id='my-task',
    source_project_dataset_table='my-project-name.dataset-name.table-name',
    destination_cloud_storage_uris=f'gs://my-bucket/another-path/*.jsonl',
    export_format='NEWLINE_DELIMITED_JSON',
    compression=None,
    location='europe-west2'
)
the task fails with the following error:
[2022-06-07, 11:17:01 UTC] {taskinstance.py:1776} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", line 141, in execute
job = hook.get_job(job_id=job_id).to_api_repr()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 439, in inner_wrapper
return func(self, *args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 1492, in get_job
job = client.get_job(job_id=job_id, project=project_id, location=location)
File "/opt/python3.8/lib/python3.8/site-packages/google/cloud/bigquery/client.py", line 2066, in get_job
resource = self._call_api(
File "/opt/python3.8/lib/python3.8/site-packages/google/cloud/bigquery/client.py", line 782, in _call_api
return call()
File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/retry.py", line 283, in retry_wrapped_func
return retry_target(
File "/opt/python3.8/lib/python3.8/site-packages/google/api_core/retry.py", line 190, in retry_target
return target()
File "/opt/python3.8/lib/python3.8/site-packages/google/cloud/_http/__init__.py", line 494, in api_request
raise exceptions.from_http_response(response)
google.api_core.exceptions.NotFound: 404 GET https://bigquery.googleapis.com/bigquery/v2/projects/my-project-name/jobs/airflow_1654592634552749_1896245556bd824c71f31c79d28cdfbe?projection=full&prettyPrint=false: Not found: Job my-project-name:airflow_1654592634552749_1896245556bd824c71f31c79d28cdfbe
Any clue what the issue may be here and why it does not work on Airflow 2.2.5 (even though the equivalent BigQueryToCloudStorageOperator works in Cloud Composer 1 with Airflow 1.10.15)?
Apparently this is a bug introduced in apache-airflow-providers-google version 7.0.0.
Also note that the file transfer from BigQuery to GCS will actually succeed (even though the task fails).
As a workaround you can either revert to a working version (if that is possible), e.g. 6.8.0, or use the BigQuery API directly and drop the BigQueryToGCSOperator.
For example,
from google.cloud import bigquery
from airflow.operators.python import PythonOperator

def load_bq_to_gcs():
    client = bigquery.Client()
    job_config = bigquery.job.ExtractJobConfig()
    job_config.destination_format = bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON
    destination_uri = f"{<gcs-bucket-destination>}*.jsonl"
    dataset_ref = bigquery.DatasetReference(bq_project_name, bq_dataset_name)
    table_ref = dataset_ref.table(bq_table_name)
    extract_job = client.extract_table(
        table_ref,
        destination_uri,
        job_config=job_config,
        location='europe-west2',
    )
    extract_job.result()
and then create an instance of PythonOperator:
PythonOperator(
    task_id='test_task',
    python_callable=load_bq_to_gcs,
)
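For completeness, a minimal sketch of how that callable and operator might be wired into an Airflow 2 DAG; the dag_id, start date, and schedule below are placeholders of my own choosing, and load_bq_to_gcs is the helper defined above:
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

with DAG(
    dag_id='bq_to_gcs_workaround',   # placeholder name
    start_date=datetime(2022, 6, 1),
    schedule_interval=None,          # trigger manually while testing
    catchup=False,
) as dag:
    export_task = PythonOperator(
        task_id='test_task',
        python_callable=load_bq_to_gcs,
    )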

Oracle to Aurora - DMS - Failure

I need a suggestion on the issue below. I am trying to run a DMS migration to Aurora, and I have installed Python 3.8.8. I have installed all the required modules, and OpenSSL is also installed.
When I trigger the script I encounter the error below.
Traceback (most recent call last):
  File "task_runner.py", line 2, in <module>
    import boto3
  File "/usr/local/lib/python3.8/site-packages/boto3/__init__.py", line 16, in <module>
    from boto3.session import Session
  File "/usr/local/lib/python3.8/site-packages/boto3/session.py", line 17, in <module>
    import botocore.session
  File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 30, in <module>
    import botocore.credentials
  File "/usr/local/lib/python3.8/site-packages/botocore/credentials.py", line 34, in <module>
    from botocore.config import Config
  File "/usr/local/lib/python3.8/site-packages/botocore/config.py", line 16, in <module>
    from botocore.endpoint import DEFAULT_TIMEOUT, MAX_POOL_CONNECTIONS
  File "/usr/local/lib/python3.8/site-packages/botocore/endpoint.py", line 22, in <module>
    from botocore.awsrequest import create_request_object
  File "/usr/local/lib/python3.8/site-packages/botocore/awsrequest.py", line 26, in <module>
    import botocore.utils
  File "/usr/local/lib/python3.8/site-packages/botocore/utils.py", line 33, in <module>
    import botocore.httpsession
  File "/usr/local/lib/python3.8/site-packages/botocore/httpsession.py", line 8, in <module>
    from urllib3.util.ssl_ import (
ImportError: cannot import name 'ssl' from 'urllib3.util.ssl_' (/usr/local/lib/python3.8/site-packages/urllib3/util/ssl_.py)
I have tried changing the Python version as suggested in another post and have also reinstalled the AWS CLI, but nothing works; no matter which Python version I use, I always end up with the same error.
Lastly, the server where I am doing this does not have an internet connection.
Kindly suggest.
Cleaning up the entire Python setup and editing Setup.dist to enable SSL during the build resolved the issue.
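To confirm this class of problem before rebuilding, a quick check (a sketch, not part of the original answer) is whether the interpreter's own ssl module imports at all; when Python is built without OpenSSL, urllib3.util.ssl_ ends up without an ssl name, which is what botocore then fails to import:
import sys
print(sys.version)

# If Python was built without OpenSSL support, this import raises ImportError,
# which is what ultimately surfaces as the urllib3.util.ssl_ error above.
import ssl
print(ssl.OPENSSL_VERSION)

import urllib3
print(urllib3.__version__)   # for completeness: the urllib3 actually installed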

PyMongo 3 and ServerSelectionTimeoutError while getting data from Mongodb

This seems like an old, solved problem (here, here, and here), but I am still getting this error. I created my database on Docker, and it worked only once. Before that I re-created the database, set connect=False, added a wait, downgraded PyMongo, tried the previous solutions, etc. I'm stuck.
Python 3.8.0, Pymongo 3.9.0
from pymongo import MongoClient
import pprint

client = MongoClient('mongodb://192.168.1.100:27017/',
                     username='admin',
                     password='psw',
                     authSource='myappdb',
                     authMechanism='SCRAM-SHA-1',
                     connect=False)
db = client['myappdb']
serverStatusResult = db.command("serverStatus")
pprint.pprint(serverStatusResult)
and I am getting a ServerSelectionTimeoutError:
Traceback (most recent call last):
  File "C:\Users\ME\eclipse2019-workspace\exdjango\exdjango\__init__.py", line 12, in <module>
    serverStatusResult=db.command("serverStatus")
  File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pymongo\database.py", line 610, in command
    with self.client._socket_for_reads(
  File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pymongo\mongo_client.py", line 1099, in _socket_for_reads
    server = topology.select_server(read_preference)
  File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pymongo\topology.py", line 222, in select_server
    return random.choice(self.select_servers(selector,
  File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pymongo\topology.py", line 182, in select_servers
    server_descriptions = self._select_servers_loop(
  File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pymongo\topology.py", line 198, in _select_servers_loop
    raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: 192.168.1.100:27017: timed out
Your connection looks a little misconfigured. Firstly, you have half connection-string and half keyword-parameter format; I'd suggest you stick with one or the other.
Your auth database is usually separate from your actual databases (and it's usually called admin). Check that this is correct.
There's no particular need to specify authMechanism, assuming you are using MongoDB 3.0 or later.
The connect=False is likely a red herring.
So I would try either:
client = MongoClient('mongodb://admin:psw@192.168.1.100:27017/myappdb?authSource=admin')
or
client = MongoClient(host='192.168.1.100',
                     port=27017,
                     username='admin',
                     password='psw',
                     authSource='admin')
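If server selection still times out, a quick connectivity check helps separate networking problems from authentication problems. A sketch (the 3-second timeout and the ping command are my choices, not part of the answer above):
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Fail fast instead of waiting the default 30 s for server selection.
client = MongoClient('mongodb://admin:psw@192.168.1.100:27017/myappdb?authSource=admin',
                     serverSelectionTimeoutMS=3000)
try:
    client.admin.command('ping')   # cheap round trip; raises if the server is unreachable
    print('connected')
except ServerSelectionTimeoutError as exc:
    print('cannot reach MongoDB:', exc)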

Error in gcloud: "AttributeError: 'module' object has no attribute 'DEFAULT_MAX_REDIRECTS'"

I am trying to implement a Firebase Functions cron job from this link: https://github.com/firebase/functions-cron
Everything worked properly.
But when I try to run the Google Cloud cron job it gives me the error below:
(/base/alloc/tmpfs/dynamic_runtimes/python27/c5586dbb532f7e5f_unzipped/python27_lib/versions/1/google/appengine/runtime/wsgi.py:263)
Traceback (most recent call last):
File "/base/alloc/tmpfs/dynamic_runtimes/python27/c5586dbb532f7e5f_unzipped/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/base/alloc/tmpfs/dynamic_runtimes/python27/c5586dbb532f7e5f_unzipped/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/base/alloc/tmpfs/dynamic_runtimes/python27/c5586dbb532f7e5f_unzipped/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/base/data/home/apps/s~debitcredit-7ecc0/20180506t121449.409523654918066893/main.py", line 18, in <module>
import pubsub_utils
File "/base/data/home/apps/s~debitcredit-7ecc0/20180506t121449.409523654918066893/pubsub_utils.py", line 24, in <module>
import oauth2client.contrib.appengine as gae_oauth2client
File "./lib/oauth2client/contrib/appengine.py", line 36, in <module>
from oauth2client import client
File "./lib/oauth2client/client.py", line 39, in <module>
from oauth2client import transport
File "./lib/oauth2client/transport.py", line 255, in <module>
redirections=httplib2.DEFAULT_MAX_REDIRECTS,
AttributeError: 'module' object has no attribute 'DEFAULT_MAX_REDIRECTS'
I tried this solution: Getting AttributeError: 'module' object has no attribute 'DEFAULT_MAX_REDIRECTS' when running Google Sheets API quickstart
But still no luck.
Can anyone please help me with this?
The issue is with your httplib2 module.
When you installed this module for your project, you must have installed it with pip for Python 3.
To check whether the installed module is for Python 3 or Python 2, open the httplib2 package's __init__.py and look for the line "Requires Python 3 or later".
If that line is present, the library was installed with pip for Python 3. In that case, delete httplib2 from your lib folder, create a separate environment for Python 2.7, and reinstall all your modules with pip install -t lib -r requirements.txt
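A quick way to confirm which httplib2 copy the runtime actually picks up (a hypothetical check, run with the same Python 2.7 interpreter App Engine uses):
import sys
import httplib2

print(sys.version)                                  # interpreter that imported the module
print(httplib2.__file__)                            # which copy of httplib2 is on the path
print(hasattr(httplib2, 'DEFAULT_MAX_REDIRECTS'))   # False reproduces the AttributeError above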

NotSupportedError when trying to build primary index in N1QL in Couchbase Python SDK

I'm trying to get started with the new N1QL queries for Couchbase in Python.
I have my database set up in Couchbase 4.0.0.
My initial try was to retrieve all documents like this:
from couchbase.bucket import Bucket

bucket = Bucket('couchbase://localhost/default')
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
for row in bucket.n1ql_query('SELECT * FROM default'):
    print row
But this produces an OperationNotSupportedError:
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 2357, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1777, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Users/my_user/python_tests/test_n1ql.py", line 9, in <module>
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 215, in execute
for _ in self:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 235, in __iter__
self._start()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 180, in _start
self._mres = self._parent._n1ql_query(self._params.encoded)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule n1ql query, C Source=(src/n1ql.c,82)>
Here are the version numbers of everything I use:
Couchbase Server: 4.0.0
couchbase Python library: 2.0.2
cbc: 2.5.1
Python: 2.7.8
gcc: 4.2.1
Does anyone have an idea of what might have gone wrong here? I could not find any solution to this problem so far.
There was another ticket for Node.js where the same issue happened. There was a proposal to enable N1QL for the specific bucket first. Is this also needed in Python?
It would seem you didn't configure any cluster nodes with the Query or Index services. As such, the error returned is one that indicates no nodes are available.
I also got a similar error while trying to create a primary index.
Create a primary index...
Traceback (most recent call last):
File "post-upgrade-test.py", line 45, in <module>
mgr.n1ql_index_create_primary(ignore_exists=True)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 428, in n1ql_index_create_primary
'', defer=defer, primary=True, ignore_exists=ignore_exists)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 412, in n1ql_index_create
return IxmgmtRequest(self._cb, 'create', info, **options).execute()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 160, in execute
return [x for x in self]
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 144, in __iter__
self._start()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 132, in _start
self._cmd, index_to_rawjson(self._index), **self._options)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule ixmgmt operation, C Source=(src/ixmgmt.c,98)>
Adding query and index nodes to the cluster solved the issue.
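Once nodes running the Query and Index services are in the cluster, something like the following sketch should work (it assumes a bucket actually named default and goes through the bucket manager that appears in the traceback above):
from couchbase.bucket import Bucket

bucket = Bucket('couchbase://localhost/default')

# Create the primary index through the bucket manager; ignore_exists makes
# the call a no-op if the index is already there.
mgr = bucket.bucket_manager()
mgr.n1ql_index_create_primary(ignore_exists=True)

for row in bucket.n1ql_query('SELECT * FROM default'):
    print row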