How to get the last 3 months of AWS billing data using Python Flask - python-2.7

I need to display the last 3 months of AWS billing data in a graph using Python and Flask.
I found some articles on this and have already set up an environment on my local machine, but I don't know how to retrieve the billing data from a Python / Flask script. If anyone has an idea, please help. Thanks in advance.

You can use Boto to connect to AWS using Python.
In general you need to cover several items:
Authentication - You'll need to set up your credentials to be able to connect to AWS. Check the Documentation
Enable DBR or CUR reports in the billing console. This will export your monthly billing information to S3. Check the Documentation
Once ready, use Boto3 to download the reports and import them into a database, Elasticsearch, or whatever you're working with to process large CSV/Excel files. Check the Documentation. A rough sketch of the download step is below.
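A minimal sketch of that download step with boto3, written to run on Python 2.7 or 3. The bucket name and report prefix below are placeholders, not real values; credentials are picked up from the environment or ~/.aws/credentials:

    import boto3  # pip install boto3

    # Placeholder values -- replace with your own billing bucket and report prefix.
    BILLING_BUCKET = "my-billing-bucket"
    REPORT_PREFIX = "cur-reports/"

    def download_billing_reports(dest_dir="."):
        """Download every report object under REPORT_PREFIX into a local directory."""
        s3 = boto3.client("s3")
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=BILLING_BUCKET, Prefix=REPORT_PREFIX):
            for obj in page.get("Contents", []):
                key = obj["Key"]
                local_name = key.split("/")[-1]
                if local_name:  # skip "directory" placeholder keys
                    s3.download_file(BILLING_BUCKET, key, dest_dir + "/" + local_name)
                    print("downloaded " + key)

    if __name__ == "__main__":
        download_billing_reports()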
Good luck!
EDIT:
For previous months you can just go to Console -> Bills and download the reports directly, then process them in your application; a rough Flask sketch of that processing step follows.
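For the graphing half of the original question, a hedged Flask sketch that aggregates the last 3 months from a downloaded report. The file name billing.csv and the column names UsageStartDate and UnblendedCost are assumptions for illustration; adjust them to whatever your report actually contains:

    import csv
    from collections import defaultdict

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/billing/last-3-months")
    def last_three_months():
        """Aggregate cost per month from a locally downloaded billing CSV."""
        totals = defaultdict(float)
        with open("billing.csv") as f:              # placeholder file name
            for row in csv.DictReader(f):
                month = row["UsageStartDate"][:7]   # e.g. "2017-06"
                totals[month] += float(row["UnblendedCost"] or 0)
        months = sorted(totals)[-3:]                # the three most recent months
        # Labels/values are ready to feed a chart library on the frontend.
        return jsonify(labels=months, values=[totals[m] for m in months])

    if __name__ == "__main__":
        app.run(debug=True)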

Related

Sync postgres database via git

We are 3 students working on a Django project with a PostgreSQL database, and we sync the project with each other via a Git repository (e.g. GitLab). We use different operating systems (Windows 10 and Ubuntu 20.04) and VS Code as our IDE.
How can we sync our entire database (the actual data) via Git, the way we could with SQLite?
Is there any way to handle it, for example by converting our DB to a file?
It's very complicated to keep your three local databases in sync. The best approach is to host your database on a cloud platform and have all three of you connect to it.
There are cloud platforms you can use for free for a year, such as Amazon Web Services and Google Cloud Platform. You just need an active debit/credit card; they won't charge you, except that Amazon deducts 1 dollar for account verification.
I would definitely go with this answer.
If you do not want to do that, use pg_dump to export the data from the database as a text file and sync that file using Git. But you may still get a lot of merge conflicts.
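A minimal sketch of that dump/restore round trip, driven from Python via subprocess. The database name, user, and dump file name are placeholder assumptions; it only works if pg_dump and psql are on your PATH:

    import subprocess

    DB_NAME = "myproject"      # placeholder: your local database name
    DB_USER = "postgres"       # placeholder: your local database user
    DUMP_FILE = "db_dump.sql"  # plain-text dump that gets committed to Git

    def export_db():
        """Write a plain-text SQL dump that can be committed to the repository."""
        with open(DUMP_FILE, "w") as out:
            subprocess.check_call(
                ["pg_dump", "-U", DB_USER, "--no-owner", DB_NAME],
                stdout=out,
            )

    def import_db():
        """Recreate local data from the dump pulled from the repository."""
        with open(DUMP_FILE) as src:
            subprocess.check_call(["psql", "-U", DB_USER, DB_NAME], stdin=src)

    if __name__ == "__main__":
        export_db()  # run import_db() on the other machines after pulling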
I use https://www.elephantsql.com/ for it and it works! thank you

Running Python from Amazon Web Service Ec2 Instance?

I was hoping I could get some direction about creating a website using AWS that will run a Python script. I created an EC2 instance running Ubuntu and made it talk to a relational database created under the same account.
In a nutshell, the site I am creating is a YouTube library of captions. The user will input a title, and AWS will retrieve links to XML documents that contain the captions of the related YouTube videos. I would like to know where and how to run a Python script that scrapes the text from these XML documents every time a user makes a request.
My research has taken me in multiple directions, but I am not sure what is best for my purpose. For example, I am trying to run a remote script from GitHub, but I don't know if there's a better way to store the script.
It's my first time working with AWS so please keep explanations simple. Thanks!
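Whichever hosting route is chosen, the scraping step itself is small. A hedged sketch of fetching one caption XML document and pulling out its text; it assumes each caption line lives in a <text> element (as in YouTube's timed-text XML), and the URL is a placeholder:

    import xml.etree.ElementTree as ET
    from urllib.request import urlopen

    def caption_text(xml_url):
        """Download one caption XML document and return its plain text."""
        with urlopen(xml_url) as resp:
            root = ET.fromstring(resp.read())
        # Assumption: each caption line lives in a <text> element, as in
        # YouTube's timed-text XML; adjust the tag name if your documents differ.
        return "\n".join(el.text for el in root.iter("text") if el.text)

    if __name__ == "__main__":
        # Placeholder URL -- substitute one of the links your site retrieves.
        print(caption_text("https://example.com/captions.xml"))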

Export / Import tool with Google Spanner

I have several questions regarding the Google Spanner Export / Import tool. Apparently the tool creates a dataflow job.
1. Can an import/export Dataflow job be re-run after it has run successfully from the tool? If so, will it use the current timestamp?
2. How can I schedule a daily backup (export) of Spanner databases?
3. How can I get notified of new enhancements within the GCP platform? I was browsing the web for something else and noticed that the export / import tool for GCP Spanner had been released 4 days earlier.
I am still browsing through the documentation for dataflow jobs and templates, etc.. Any suggestions to the above would be greatly appreciated.
Thx
My response is based on limited experience with the Spanner Export tool.
1. I have not seen a way to do this. There is no option in the GCP console, though that does not mean it cannot be done.
2. There is no built-in scheduling capability. Perhaps this can be done via Google's managed Airflow service, Cloud Composer (https://console.cloud.google.com/composer)? I have yet to try this, but it is my next step as I have similar needs; a rough sketch of the idea follows this list.
3. I've made this request to Google several times and have yet to get a response. My best recommendation is to read the change logs when updating the gcloud CLI.
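A hedged sketch of what a scheduled export could look like: launching Google's Spanner-to-GCS-Avro Dataflow template from Python (for example from a cron job or a Composer/Airflow PythonOperator). The template path and its instanceId/databaseId/outputDir parameters come from the public Dataflow templates; the project, instance, database, and bucket names are placeholders:

    from googleapiclient.discovery import build  # pip install google-api-python-client

    PROJECT = "my-project"                       # placeholder project ID
    TEMPLATE = "gs://dataflow-templates/latest/Cloud_Spanner_to_GCS_Avro"

    def launch_spanner_export():
        """Kick off one export of a Spanner database to Avro files in GCS."""
        dataflow = build("dataflow", "v1b3")     # uses application default credentials
        body = {
            "jobName": "spanner-export-daily",
            "parameters": {
                "instanceId": "my-instance",     # placeholder Spanner instance
                "databaseId": "my-database",     # placeholder Spanner database
                "outputDir": "gs://my-backup-bucket/spanner-exports",
            },
        }
        request = dataflow.projects().templates().launch(
            projectId=PROJECT, gcsPath=TEMPLATE, body=body
        )
        return request.execute()["job"]["id"]

    if __name__ == "__main__":
        print(launch_spanner_export())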
Finally-- there is an outstanding issue with the Export tool that causes it to fail if you export a table with 0 rows. I have filed a case with Google (Case #16454353) and they confirmed this issue. Specifically:
After running into a similar error message during my reproduction of the issue, I drilled down into the error message and discovered that there is something odd with the file path for the Cloud Storage folder [1]. There seems to be an issue with the Java File class viewing ‘gs://’ as having a redundant ‘/’ and that causes the ‘No such file or directory’ error message.
Fortunately for us, there is an ongoing internal investigation on this issue, and it seems like there is a fix being worked on. I have indicated your interest in a fix as well; however, I do not have any ETAs or guarantees of when a working fix will be rolled out.

AWS Billing(Usage + rateCard)

I want to get an AWS usage report in .NET using the SDK or REST API. Is there any service available for it?
To get the rate card (pricing info) I have used
https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonCloudWatch/current/index.json
which returns a JSON object.
Please advise if there is any such service available to get the resources consumed in AWS, so that I can calculate the billing.
Regards,
Aparna
In general you need to cover several items:
Authentication - You'll need to set up your credentials to be able to connect to AWS. Check the Documentation
Enable DBR or CUR reports in the billing console. This will export your monthly billing information to S3. Check the Documentation
Once ready, use the SDK to download the reports and import them into a database, Elasticsearch, or whatever you're working with to process large CSV/Excel files. Check the Documentation. A short illustration of both pieces follows.
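For illustration only (the question is about .NET, but the AWS SDK for .NET exposes the same S3 and HTTP calls), a Python sketch of the two pieces: fetching the public rate-card JSON already mentioned in the question, and listing the billing report objects the DBR/CUR export writes to S3. The bucket and prefix names are placeholders:

    import json
    from urllib.request import urlopen

    import boto3  # pip install boto3

    RATE_CARD_URL = ("https://pricing.us-east-1.amazonaws.com/offers/v1.0/"
                     "aws/AmazonCloudWatch/current/index.json")
    BILLING_BUCKET = "my-billing-bucket"  # placeholder: bucket the billing export writes to
    REPORT_PREFIX = "reports/"            # placeholder: report path prefix

    def fetch_rate_card():
        """Download the public pricing (rate card) document as a dict."""
        with urlopen(RATE_CARD_URL) as resp:
            return json.load(resp)

    def list_usage_reports():
        """List the usage report files the billing export has written to S3."""
        s3 = boto3.client("s3")
        resp = s3.list_objects_v2(Bucket=BILLING_BUCKET, Prefix=REPORT_PREFIX)
        return [obj["Key"] for obj in resp.get("Contents", [])]

    if __name__ == "__main__":
        offers = fetch_rate_card()
        print("rate card version: " + offers.get("version", "unknown"))
        print("usage report files: " + ", ".join(list_usage_reports()))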
Good luck!

Using impdp/expdp with RDS Oracle on AWS

I'm very new to Amazon Web Services, especially their RDS offering. I have set up an Oracle database (11.2) and I now want to import a dump we made locally from our server using expdp. Apparently, the ability to use expdp/impdp on AWS is quite new. From what I understand, when creating an Oracle database on RDS, a DATA_PUMP_DIR is automatically created. What is less obvious is how to access this directory and make our local dump available to RDS. I've tried to read the following information on their website: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html, but there are a lot of things I don't understand:
Why do I have to set up an EC2 instance when the dump file is actually on my local computer (and I can access the RDS database remotely using SQL*Plus or SQL Developer)?
Their examples often use the 'sys' or 'system' user, but the Oracle security settings say these users are not available on RDS, i.e. you cannot connect to the database as SYSDBA.
Could someone please point me to a simple and clear tutorial on how to use impdp on AWS?
Thanks
It is possible to use Data Pump on RDS now.
duduklein's answer was correct when he wrote it, but the RDS docs now include details about using Oracle Data Pump. The doc page URL is unchanged from the link originally posted in the question (nice job, Amazon!), but it now covers using Data Pump.
It's not possible for now. I have just contacted Amazon (through premium support) about the same issue, and they told me that this is a feature request that has already been passed to the RDS team, but there is no estimate of when it will be available.
The only way you can import dump files is by using the 'exp' utility instead of 'expdp'. In that case, you can use the 'imp' utility to import the data into RDS.