Updating the parent via rally API - python-2.7

I have more than 1,000 projects in the Closed state under one of our workspaces.
I got that data from https://rally1.rallydev.com/slm/webservice/1.29/subscription?fetch=Workspaces,Name,Projects,State
We want to update the "Parent" of the projects that are marked as "Closed".
import sys
from pyral import Rally, rallyWorkset

options = [arg for arg in sys.argv[1:] if arg.startswith('--')]
args = [arg for arg in sys.argv[1:] if arg not in options]

server = <server>
apikey = <api_key>
workspace = <workspace>
project = <project_name>

rally = Rally(server, apikey=apikey, workspace=workspace, project=project)
rally.enableLogging('mypyral.log')
Here is how I check the status of the projects:
projects = rally.getProjects(workspace=workspace)
for proj in projects:
    print(" %12.12s %s %s" % (proj.oid, proj.Name, proj.State))
I didn't find any reference to updating a project's parent in the pyral REST API documentation here: http://pyral.readthedocs.io/en/latest/interface.html?highlight=post

I would do it in the following way:
# Get the object for the new parent project:
target_project = rally.getProject('NewParentForClosedProjects')

projects = rally.getProjects(workspace=workspace)
for proj in projects:
    # Get the children of the project
    children = proj.Children
    for child in children:
        # If the child project is closed, update its Parent to the new one:
        if child.State == 'Closed':
            project_fields = {
                "ObjectID": child.oid,
                "Parent": target_project.ref
            }
            try:
                result = rally.update('Project', project_fields)
                print "Project %s has been successfully updated with new %s parent" % (str(child.Name), str(child.Parent))
            except RallyRESTException, ex:
                print "Update failure for Project %s" % (str(child.Name))
                print ex
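If you prefer not to walk every project's Children, a possible refinement is to query for the closed projects directly. This is only a sketch using pyral's generic get() interface; the exact query and scoping arguments may need adjusting for your workspace setup:
# Hypothetical variant: fetch only Closed projects, then re-parent them.
closed_projects = rally.get('Project', fetch="Name,ObjectID,State",
                            query='State = "Closed"')
for child in closed_projects:
    rally.update('Project', {"ObjectID": child.oid,
                             "Parent": target_project.ref})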


How to get all the VM's information for all Projects in GCP

I have multiple projects in my GCP account, and I need the operating system, the OS version, and the OS build version for all the VMs in every project.
I didn't find an existing tool for that, so I wrote something you can use.
The code could be improved, but it shows one way to scan all projects and get information about the OS.
Let me know if it helps you.
Pip install:
!pip install google-cloud
!pip install google-api-python-client
!pip install oauth2client
Code:
import subprocess
import sys
import logging
import threading
import pprint

logger = logging.Logger('catch_all')

def execute_bash(parameters):
    try:
        return subprocess.check_output(parameters)
    except Exception as e:
        logger.error(e)
        logger.error('ERROR: Looking in jupyter console for more information')

def scan_gce(project, results_scan):
    print('Scanning project: "{}"'.format(project))
    ex = execute_bash(['gcloud', 'compute', 'instances', 'list',
                       '--project', project, '--format=value(name,zone,status)'])
    list_result_vms = []
    if ex:
        list_vms = ex.decode("utf-8").split('\n')
        for vm in list_vms:
            if vm:
                vm_info = vm.split('\t')
                print('Scanning Instance: "{}" in project "{}"'.format(vm_info[0], project))
                results_bytes = execute_bash(['gcloud', 'compute', '--project', project,
                                              'ssh', '--zone', vm_info[1], vm_info[0],
                                              '--command', 'cat /etc/*-release'])
                if results_bytes:
                    results = results_bytes.decode("utf-8").split('\n')
                    list_result_vms.append({'instance_name': vm_info[0], 'result': results})
    results_scan.append({'project': project, 'vms': list_result_vms})

list_projects = execute_bash(['gcloud', 'projects', 'list',
                              '--format=value(projectId)']).decode("utf-8").split('\n')

threads_project = []
results_scan = []
for project in list_projects:
    t = threading.Thread(target=scan_gce, args=(project, results_scan))
    threads_project.append(t)
    t.start()

for t in threads_project:
    t.join()

for result in results_scan:
    pprint.pprint(result)
You can find the full code here:
Quick and dirty:
gcloud projects list --format 'value(PROJECT_ID)' >> proj_list
cat proj_list | while read pj; do gcloud compute instances list --project $pj; done
You can use the following command in the Cloud Shell to fetch all projects and then show the instances for each of them:
for i in $(gcloud projects list | sed 1d | cut -f1 -d$' '); do
  gcloud compute instances list --project $i;
done
Note: make sure you have the compute.instances.list permission on all of the projects.
Here is how to do it with the google-api-python-client (pip3 install -U google-api-python-client), without using bash. Note that this is meant to be run with keyless auth; using service account keys is bad practice.
https://github.com/googleapis/google-api-python-client/blob/main/docs/start.md
https://github.com/googleapis/google-api-python-client/blob/main/docs/dyn/index.md
https://googleapis.github.io/google-api-python-client/docs/dyn/compute_v1.html
from googleapiclient import discovery
from googleapiclient.errors import HttpError
import yaml
import structlog

logger = structlog.stdlib.get_logger()

def get_projects() -> list:
    projects: list = []
    service = discovery.build('cloudresourcemanager', 'v1', cache_discovery=False)
    request = service.projects().list()
    response = request.execute()
    for project in response.get('projects'):
        projects.append(project.get("projectId"))
    logger.debug('got projects', projects=projects)
    return projects

def get_zones(project: str) -> list:
    zones: list = []
    service = discovery.build('compute', 'v1', cache_discovery=False)
    request = service.zones().list(project=project)
    while request is not None:
        response = request.execute()
        if 'items' not in response:
            logger.warn('no zones found')
            return []
        for zone in response.get('items'):
            zones.append(zone.get('name'))
        request = service.zones().list_next(previous_request=request, previous_response=response)
    logger.debug('got zones', zones=zones)
    return zones

def get_vms() -> list:
    vms: list = []
    projects: list = get_projects()
    service = discovery.build('compute', 'v1', cache_discovery=False)
    for project in projects:
        try:
            zones: list = get_zones(project)
            for zone in zones:
                request = service.instances().list(project=project, zone=zone)
                response = request.execute()
                if 'items' in response:
                    for vm in response.get('items'):
                        ips: list = []
                        for interface in vm.get('networkInterfaces'):
                            ips.append(interface.get('networkIP'))
                        vms.append({vm.get('name'): {'self_link': vm.get('selfLink'), 'ips': ips}})
        except HttpError:
            pass
    logger.debug('got vms', vms=vms)
    return vms

if __name__ == '__main__':
    data = get_vms()
    with open('output.yaml', 'w') as fh:
        yaml.dump(data, fh)
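The question also asks for the operating system of each VM. The instances.list response usually carries that information on the boot disk's licenses field, so a small, hedged extension of the loop above could collect it as well (field availability depends on the image used):
# Inside the "for vm in response.get('items')" loop above:
os_licenses = []
for disk in vm.get('disks', []):
    # e.g. '.../global/licenses/ubuntu-2204-lts' encodes the OS image family
    os_licenses.extend(disk.get('licenses', []))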

Celery task queuing

I have created a Flask application and it consists of two Celery tasks.
Task 1: generate a file through a process
Task 2: email the generated file
Normally task 1 needs more time than task 2. I want to execute task 1 and then task 2, but the problem is that both start executing at the same time inside Celery.
How can I resolve this issue?
@celery.task(name='celery_example.process')
def process(a, b, c, d, e, f):
    command = 'rnx2rtkp -p ' + a + ' -f ' + b + ' -m ' + c + ' -n -o oout.pos ' + d + ' ' + e + ' ' + f
    os.system(command)
    return 'Successfully created POS file'

@celery.task(name='celery_example.emailfile')
def emailfile(recipientemail):
    email_user = ''
    email_password = ''
    subject = 'subject'
    msg = MIMEMultipart()
    msg['From'] = email_user
    msg['To'] = recipientemail
    msg['Subject'] = subject
    body = 'This is your Post-Processed position file'
    msg.attach(MIMEText(body, 'plain'))
    filename = 'oout.pos'
    attachment = open(filename, 'rb')
    part = MIMEBase('application', 'octet-stream')
    part.set_payload(attachment.read())
    encoders.encode_base64(part)
    part.add_header('Content-Disposition', "attachment; filename= " + filename)
    msg.attach(part)
    text = msg.as_string()
    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.starttls()
    server.login(email_user, email_password)
    server.sendmail(email_user, recipientemail, text)
    server.quit()
    return 'Email has been successfully sent'
This is the app route:
@app.route('/pp.php', methods=['GET', 'POST'])
def pp():
    pp = My1Form()
    target = os.path.join(APP_ROOT)
    print(target)
    for fileBase in request.files.getlist("fileBase"):
        print(fileBase)
        filename = fileBase.filename
        destination = "/".join([target, filename])
        print(destination)
        fileBase.save(destination)
    for fileObsRover in request.files.getlist("fileObsRover"):
        print(fileObsRover)
        filename = fileObsRover.filename
        destination = "/".join([target, filename])
        print(destination)
        fileObsRover.save(destination)
    for fileNavRover in request.files.getlist("fileNavRover"):
        print(fileNavRover)
        filename = fileNavRover.filename
        destination = "/".join([target, filename])
        print(destination)
        fileNavRover.save(destination)
    a = fileObsRover.filename
    b = fileBase.filename
    c = fileNavRover.filename
    elevation = pp.ema.data
    Freq = pp.frq.data
    posMode = pp.pmode.data
    emailAdd = pp.email.data
    process.delay(posMode, Freq, elevation, a, b, c)
    emailfile.delay(emailAdd)
    return render_template('results.html', email=pp.email.data, Name=pp.Name.data, ema=elevation, frq=Freq, pmode=posMode, fileBase=a)
    return render_template('pp.php', pp=pp)
As it currently stands your code does the following:
# schedule process to run asynchronously
process.delay(posMode,Freq,elevation,a,b,c)
# schedule emailfile to run asynchronously
emailfile.delay(emailAdd)
Both of these will immediately be picked up by workers and executed. You have provided nothing to inform Celery that emailfile should wait until process is complete.
Instead you should:
alter the signature of emailfile to take another parameter that will receive the output of a successful process call; then
call process using link.
For example:
deferred = process.apply_async(
    (posMode, Freq, elevation, a, b, c),
    link=emailfile.s(emailAdd))
deferred.get()
An alternative to using link, but semantically identical in this case, would be to use a chain.
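A minimal sketch of the chain alternative, reusing the names from the question (and assuming emailfile has been changed as described above to accept the output of process as its first argument):
from celery import chain

# process runs first; its return value is prepended to emailfile's
# arguments, so the callback is effectively emailfile(result, emailAdd).
chain(process.s(posMode, Freq, elevation, a, b, c),
      emailfile.s(emailAdd)).apply_async()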

Is there any faster way to download multiple files from S3 to a local folder?

I am trying to download 12,000 files from an S3 bucket using a Jupyter notebook, and the estimated time to complete the download is 21 hours. This is because each file is downloaded one at a time. Can we run multiple downloads in parallel so I can speed up the process?
Currently, I am using the following code to download all files
### Get unique full-resolution image basenames
images = df['full_resolution_image_basename'].unique()
print(f'No. of unique full-resolution images: {len(images)}')

### Create a folder for full-resolution images
images_dir = './images/'
os.makedirs(images_dir, exist_ok=True)

### Download images
images_str = "','".join(images)
limiting_clause = f"CONTAINS(ARRAY['{images_str}'], full_resolution_image_basename)"
_ = download_full_resolution_images(images_dir, limiting_clause=limiting_clause)
See the code below. This only works with Python 3.6+ because of the f-strings (PEP 498); use a different method of string formatting for older versions of Python.
Provide the relative_path, bucket_name and s3_object_keys. In addition, max_workers is optional; if not provided, it defaults to 5 times the number of processors on the machine.
Most of the code for this answer came from an answer to How to create an async generator in Python?, which in turn is based on this example documented in the library.
import boto3
import os
from concurrent import futures

relative_path = './images'
bucket_name = 'bucket_name'
s3_object_keys = []  # List of S3 object keys
max_workers = 5

abs_path = os.path.abspath(relative_path)
s3 = boto3.client('s3')

def fetch(key):
    file = f'{abs_path}/{key}'
    os.makedirs(file, exist_ok=True)
    with open(file, 'wb') as data:
        s3.download_fileobj(bucket_name, key, data)
    return file

def fetch_all(keys):
    with futures.ThreadPoolExecutor(max_workers=5) as executor:
        future_to_key = {executor.submit(fetch, key): key for key in keys}
        print("All URLs submitted.")
        for future in futures.as_completed(future_to_key):
            key = future_to_key[future]
            exception = future.exception()
            if not exception:
                yield key, future.result()
            else:
                yield key, exception

for key, result in fetch_all(s3_object_keys):
    print(f'key: {key} result: {result}')
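If you still need to build s3_object_keys, a minimal sketch using the same boto3 client and a list_objects_v2 paginator (the 'images/' prefix is just a placeholder) is:
paginator = s3.get_paginator('list_objects_v2')
s3_object_keys = []
for page in paginator.paginate(Bucket=bucket_name, Prefix='images/'):
    for obj in page.get('Contents', []):
        s3_object_keys.append(obj['Key'])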
Thank you for this. I had over 9,000 JPEG images that I needed to download from my S3 bucket. I tried to incorporate this directly into my Colab Pro notebook but couldn't get it to work; I kept getting an "Errno 21: Is a directory" error.
I had to add two things: 1) a makedirs call to create the directory I want, and 2) os.mknod on the file path instead of os.makedirs.
fetch_all is almost the same, except for a small edit so that max_workers actually takes effect. s3c is just my boto3 client with my keys and all.
My download time went from 30+ minutes to 5 minutes with 1,000 workers.
os.makedirs('/*some dir you want*/*prefix*')

def fetch(key):
    file = f'{abs_path}/{key}'
    os.mknod(file, mode=384)
    with open(file, 'wb') as data:
        s3c.download_fileobj(bucket_name, key, data)
    return file

def fetch_all(keys):
    with futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        future_to_key = {executor.submit(fetch, key): key for key in keys}
        print("All URLs submitted.")
        for future in futures.as_completed(future_to_key):
            key = future_to_key[future]
            exception = future.exception()
            if not exception:
                yield key, future.result()
            else:
                yield key, exception
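An alternative sketch that avoids mknod entirely is to create each key's parent directory and let open() create the file itself (this reuses abs_path, bucket_name and the s3 client from the first answer):
def fetch(key):
    file = f'{abs_path}/{key}'
    # Create the parent directories, not a directory at the file path itself.
    os.makedirs(os.path.dirname(file), exist_ok=True)
    with open(file, 'wb') as data:
        s3.download_fileobj(bucket_name, key, data)
    return file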
You can try this out; it's fast.
import os
import boto3
from datetime import datetime
from multiprocessing import Pool

bucket_name = 'BUCKET_NAME'
prefix = 'PREFIX'
local_dir = './downloads/'  # PUT YOUR LOCAL DIR
max_process = 20  # CAN BE CHANGED
debug_en = True

# pass your credentials and region name
s3_client = boto3.client('s3', aws_access_key_id=' ',
                         aws_secret_access_key=' ', region_name=' ')

def downfiles(bucket_name, src_obj, dest_path):
    try:
        s3_client.download_file(bucket_name, src_obj, dest_path)
        if debug_en:
            print("[debug] downloading object: %s to %s" % (src_obj, dest_path))
    except:
        pass

def download_dir(bucket_name, sub_prefix):
    paginator = s3_client.get_paginator('list_objects_v2')
    pages = paginator.paginate(Bucket=bucket_name, Prefix=sub_prefix)
    pool = Pool(max_process)
    print(pool)
    mp_data = []
    for page in pages:
        if 'Contents' in page:
            for obj in page['Contents']:
                src_obj = obj['Key']
                dest_path = local_dir + src_obj
                mp_data.append((bucket_name, src_obj, dest_path))
                os.path.dirname(dest_path) and os.makedirs(os.path.dirname(dest_path), exist_ok=True)
    pool.starmap(downfiles, mp_data)
    return len(mp_data)

if __name__ == '__main__':
    print("starting script...")
    start_time = datetime.now()
    s3_dirs = [prefix]  # list of prefixes to download
    total_files = 0
    for s3_dir in s3_dirs:
        print("[Information] %s directory is downloading" % s3_dir)
        no_files = download_dir(bucket_name, s3_dir)
        total_files = total_files + no_files
    end_time = datetime.now()
    print('Duration: {}'.format(end_time - start_time))
    print('Total File numbers: %d' % total_files)
    print("ended")

Why is the ExactTarget Fuel SDK not validating my API keys?

I am working with the Fuel SDK for the ExactTarget API. I have set up my environment variables, but the app keeps denying me data and throws the following error message:
raise Exception('Unable to validate App Keys(ClientID/ClientSecret) provided: ' + repr(r.json()))
Exception: Unable to validate App Keys(ClientID/ClientSecret) provided: {u'errorcode': 1, u'message': u'Unauthorized', u'documentation': u''}
I have looked in the client but do not see why the authentication would be failing. Here is my code:
import os

os.environ["FUELSDK_CLIENT_ID"] = ""
os.environ["FUELSDK_CLIENT_SECRET"] = ""
os.environ["FUELSDK_DEFAULT_WSDL"] = "https://webservice.exacttarget.com/etframework.wsdl"
os.environ["FUELSDK_AUTH_URL"] = "https://auth.exacttargetapis.com/v1/requestToken?legacy=1"
#os.environ["FUELSDK_WSDL_FILE_LOCAL_LOC"] = "C:\Users\Aditya.Sharma\AppData\Local\Temp\ExactTargetWSDL.s6.xml"

# Add a require statement to reference the Fuel SDK's functionality:
import FuelSDK

# Next, create an instance of the ET_Client class:
myClient = FuelSDK.ET_Client()

# Create an instance of the object type we want to work with:
list = FuelSDK.ET_List()

# Associate the ET_Client to the object using the auth_stub property:
list.auth_stub = myClient

# Utilize one of the ET_List methods:
response = list.get()

# Print out the results for viewing
print 'Post Status: ' + str(response.status)
print 'Code: ' + str(response.code)
print 'Message: ' + str(response.message)
print 'Result Count: ' + str(len(response.results))
print 'Results: ' + str(response.results)
Could someone please tell me why I am being shut out?
Here is the Git repository I am using: https://github.com/salesforce-marketingcloud/FuelSDK-Python
Thank you in advance.
Update the client key/password (ClientID/ClientSecret) in client.py.
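If you would rather not edit client.py, another common approach is to pass the keys directly to ET_Client as a params dict. This is only a sketch; the exact dict keys ('clientid', 'clientsecret', etc.) depend on the FuelSDK-Python version, so verify them against the repository's README:
import FuelSDK

# Hypothetical values; replace with the keys of your App Center package.
myClient = FuelSDK.ET_Client(False, False, {
    'clientid': 'YOUR_CLIENT_ID',
    'clientsecret': 'YOUR_CLIENT_SECRET',
    'defaultwsdl': 'https://webservice.exacttarget.com/etframework.wsdl',
    'authenticationurl': 'https://auth.exacttargetapis.com/v1/requestToken?legacy=1',
})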

In Python 2.7 I am getting an error that I cannot resolve: TypeError: __init__() takes exactly 3 arguments (1 given)

I am trying to create a GUI that will move files from one directory to another. If a file was created/modified more than 24 hours ago, the GUI should instead list in a text box when the file was created/modified.
import os
import wx, DB_FILE
import shutil
import time

wildcard = "All files (*.txt)|*.*"

class File_Transfer(wx.Frame):
    def __init__(self, parent, id):
        wx.Frame.__init__(self, parent, id, 'File Transfer', size=(420, 200))
        # Add data to the list control
        self.fillListCtrl()
        panel = wx.Panel(self, -1)
        self.Directory = os.getcwd()
        panel.SetBackgroundColour("White")
        openButton = wx.Button(panel, -1, "Pull Files From:", pos=(10, 0))
        self.Bind(wx.EVT_BUTTON, self.onOpenFile, openButton)
        placeButton = wx.Button(panel, -1, "New File Transfer to:", pos=(127, 0))
        self.Bind(wx.EVT_BUTTON, self.placeFile, placeButton)
        trsnfrButton = wx.Button(panel, -1, "Transfer File(s)", pos=(280, 0))
        self.Bind(wx.EVT_BUTTON, self.trnsfrFile, trsnfrButton)
        # Set up the table UI as a ListCtrl
        self.listCtrl = wx.ListCtrl(panel, size=(100, 100), pos=(100, 40),
                                    style=wx.LC_REPORT | wx.BORDER_SUNKEN)
        # Add columns to listCtrl
        self.listCtrl.InsertColumn(0, "ID")
        self.listCtrl.InsertColumn(1, "File_DESCRIPTION")
        self.listCtrl.InsertColumn(1, "Age")

    # Get data from the database
    def fillListCtrl(self):
        self.allData = DB_FILE.viewAll()
        # Delete old data before adding new data
        self.listCtrl.DeleteAllItems()
        for row in AllData:
            # Loop through and append data
            self.listCtrl.Append(row)

    def addAge(self, event):
        name = dst
        age = mtime
        # Adding record to database
        DB_FILE.newfILE(name)
        DB_FILE.newfILE(age)
        print DB_FILE.viewAll()
        # Update list control
        self.fillListCtrl()

    def onOpenFile(self, event):
        dlg = wx.FileDialog(
            self, message="Select FILE",
            defaultDir=self.Directory,
            defaultFile="",
            wildcard=wildcard,
            style=wx.OPEN | wx.MULTIPLE | wx.CHANGE_DIR
        )
        if dlg.ShowModal() == wx.ID_OK:
            paths = dlg.GetPaths()
            print "You chose the following file(s):"
            for path in paths:
                print path
                global filePath
                filePath = path
        dlg.Destroy()

    def placeFile(self, event):
        # Get directory where the files will go.
        dlg = wx.DirDialog(self, "Choose a directory:")
        if dlg.ShowModal() == wx.ID_OK:
            paths = dlg.GetPath()
            print "You chose %s" % dlg.GetPath()
            for path in paths:
                global savePath
                savePath = dlg.GetPath()
            fileage_program.deleteCharacter(self.selectedId)
            # Refresh the table
            self.fillListCtrl()
        dlg.Destroy()

    # Transfer files from initial location to final location, then state the time
    def trnsfrFile(src, dst):
        src = filePath  # Original file address
        dst = savePath  # Destination directory
        st = os.stat(src)
        ctime = st.st_ctime  # Creation time
        global mtime
        mtime = (time.time() - ctime) / 3600  # Hours since the file was last touched
        if mtime < 24:  # If less than 24 hours since creation, move the file.
            shutil.move(src, dst)
            newFile()
            print dst
        else:
            print "file are: '%d' hours, since creation/modification" % mtime

# Run the program
if __name__ == "__main__":
    app = wx.App()
    frame = File_Transfer()
    frame.Show()
    app.MainLoop()
Here is the database module I am connecting to:
import sqlite3

# Connect to the FILE_INFO database
conn = sqlite3.connect('FILE_INFO.db')

def createTable():
    conn.execute("CREATE TABLE if not exists \
        FILE_INFO( \
        ID INTEGER PRIMARY KEY AUTOINCREMENT, \
        FILE_DESCRIPTION TEXT, \
        TIME INT);")

def newfILE(name, age):
    # Create values part of sql command
    val_str = "'{}', {}".format(name, age)
    sql_str = "INSERT INTO FILE_INFO \
        (FILE_DESCRIPTION, TIME) \
        VALUES ({});".format(val_str)
    print sql_str
    conn.execute(sql_str)
    conn.commit()
    return conn.total_changes

def viewAll():
    # Create sql string
    sql_str = "SELECT * from FILE_INFO"
    cursor = conn.execute(sql_str)
    # Get data from cursor in array
    rows = cursor.fetchall()
    return rows

createTable()
1- You don't really need to import rows as rows (if you import rows you will automatically be able to refer to the module as rows).
2- Where is this .py file (rows.py) saved? Can you access the directory where it lives from the directory you are running in?
Just posting this answer to summarize the comments.
3- This error, TypeError: __init__() takes exactly 3 arguments (1 given), usually happens when you leave out required arguments. An argument, just to remember, is what you put between the parentheses when you define or call a function. For example, in __init__(self), self is an argument of the __init__() function. In your case, the definition takes three of them: self, parent and id. When you call the function you need to supply all of these arguments; they could be variables set in another function or class (which you would need to make global so the program identifies them as the same variables used above), or you can simply pass literal values (e.g. an int, float or str). The important thing is: you need to use the complete argument list for the function. If it has three arguments in its definition, you need to call it with three arguments.
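Concretely, for the code in the question, the TypeError comes from frame = File_Transfer(), which supplies neither the parent nor the id argument that __init__ declares. A minimal sketch of the fix (None as the parent and wx.ID_ANY for the id are common choices) is:
if __name__ == "__main__":
    app = wx.App()
    # Supply the two required constructor arguments declared by
    # File_Transfer.__init__(self, parent, id).
    frame = File_Transfer(None, wx.ID_ANY)
    frame.Show()
    app.MainLoop()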