Need python interface for moving a machine to another folder - python-2.7

I am trying to find Python support for moving a machine between a datacenter's folders, without success. I saw that in pysphere you can only set the folder at the clone stage, not after the machine has already been cloned.
This seems like a solution to my problem, but it is in PowerShell. Does anybody know of a Python wrapper that supports it?

You can do this with pyVmomi. I would avoid pysphere, because pyVmomi is maintained by VMware and pysphere hasn't been updated in four years or more.
That said, here is some sample code that uses pyVmomi:
from pyVim import connect

# Connect to vCenter (args here comes from argparse or similar)
service_instance = connect.SmartConnect(host=args.host,
                                        user=args.user,
                                        pwd=args.password,
                                        port=int(args.port))

# The SearchIndex lets us look up managed objects by inventory path
search_index = service_instance.content.searchIndex
folder = search_index.FindByInventoryPath("LivingRoom/vm/new_folder")
vm_to_move = search_index.FindByInventoryPath("LivingRoom/vm/test-vm")

# MoveInto takes a list of objects to move and returns a Task
move_task = folder.MoveInto([vm_to_move])
In this example I create a ServiceInstance by connecting to a vCenter, then grab an instance of the SearchIndex. The SearchIndex has several methods that can be used to locate your managed objects; here I chose the FindByInventoryPath method, but you could use any that works for you. First I find the instance of the Folder named new_folder that I want to move my VirtualMachine into. Next I find the VirtualMachine I want to move. Finally I execute the Task that will move the VM for me. That task takes as a parameter the list of objects to be moved into the folder; in this case it's a single-item list containing only the one VM I want to move. From here you can monitor the task if you want.
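For example, a minimal way to block until the move finishes is the WaitForTask helper that ships with pyVmomi (a sketch, assuming the service_instance connection from the code above):

from pyVim.task import WaitForTask

# blocks until the task completes and raises if it fails
WaitForTask(move_task)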
Keep in mind that if you use FindByInventoryPath, there are many hidden folders that are not visible from the GUI. I find that using the ManagedObjectBrowser is very helpful at times.
Helpful doc links:
https://github.com/vmware/pyvmomi/blob/master/docs/vim/SearchIndex.rst
https://github.com/vmware/pyvmomi/blob/master/docs/vim/Folder.rst
https://github.com/vmware/pyvmomi/blob/master/docs/vim/Task.rst

Related

Spring XD Dynamic Module ClassLoader Issue

According to the documentation, we can load jars dynamically at module creation time by using the module.classloader attribute in the .properties file:
http://docs.spring.io/spring-xd/docs/1.3.1.RELEASE/reference/html/#module-class-loading
I spent two days trying to test this feature. It does not work; the module.classloader option seems to be simply ignored.
I did not find any string named module.classloader in the XD code, but I found one called module.classpath in this class:
https://github.com/spring-projects/spring-xd/blob/master/spring-xd-module/src/main/java/org/springframework/xd/module/options/ModuleUtils.java
The code in the above class seems to match the documentation, but unfortunately it does not work either: my classes are not found and I get java.lang.ClassNotFoundException.
I have a module option named dir4jars where I put the jars to load at creation time (when I issue job create --name xx --definition ..). It's a directory, and I have tested the following possibilities, with both module.classpath and module.classloader:
module.classpath=${dir4jars}/*.jar
module.classloader=${dir4jars}/*.jar
...
job create --name jobName --definition "myJobModuleName --dir4jars=C:/ELS/Flash/libxd" --deploy
and
job create --name jobName --definition "myJobModuleName --dir4jars=file:C:/ELS/Flash/libxd" --deploy
I need the dir4jars to be absolute and outside XD home.
So my questions:
What's the right option to use for this dynamic load: module.classpath or module.classloader?
How can I set an absolute directory as I mentioned above?
Many thanks.
I think it has to be module.classpath; module.classloader looks like a mistake in the documentation. Does this work when you explicitly use module.classpath=file:C:/ELS/Flash/libxd?
As a side note: Please consider using Spring Cloud Data Flow which is the successor of Spring XD.

How to create separate python script for uploading data into ndb

Can anyone guide me in the right direction as to where I should place a script solely for loading data into ndb? I wish to upload all the data into the GAE ndb so that the application can query it.
Right now, the loading of data happens inside my application. I wish to place it separately from the main application.
Should it be set up in the yaml file?
EDITED
This is a snippet of the entity and the handler to upload the data into GAE ndb.
I wish to place this chunk of code separately from my main application .py, since the uploading of this data won't be done frequently and I want to keep the code in the main application "cleaner".
import webapp2
from google.appengine.ext import ndb

class TagTrend_refine(ndb.Model):
    tag = ndb.StringProperty()
    trendData = ndb.BlobProperty(compressed=True)

class MigrateData(webapp2.RequestHandler):
    def get(self):
        listOfEntities = []
        f = open("tagTrend_refine.txt")
        lines = f.readlines()
        f.close()
        for line in lines:
            temp = line.strip().split("\t")
            data = TagTrend_refine(
                tag=temp[0],
                trendData=temp[1]
            )
            listOfEntities.append(data)
        ndb.put_multi(listOfEntities)
For example, if I placed the above code in a file called dataLoader.py, where should I invoke this script from?
In app.yaml, alongside my main application (knowledgeGraph.application)?
- url: /.*
  script: knowledgeGraph.application
You don't show us the application object (no doubt a WSGI app) in your knowledgeGraph.py module, so I can't know what URL you want to serve with the MigrateData handler -- I'll just guess it's something like /migratedata.
So the class TagTrend_refine should be in a separate file (usually called models.py) so that both your dataloader.py and your knowledgeGraph.py can import models to access it (and models.py will need its own import of ndb, of course). (Then of course access to the entity class will be as models.TagTrend_refine -- very basic Python.)
Next, you'll complete dataloader.py by defining a WSGI app, e.g., at the end of the file:
app = webapp2.WSGIApplication(routes=[('/migratedata', MigrateData)])
(of course this means this module will need to import webapp2 as well -- can I take for granted a knowledge of super-elementary Python?).
In app.yaml, as the first URL, before that /.*, you'll have:
- url: /migratedata
  script: dataloader.app
Given all this, when you visit /migratedata, your handler will read the tagTrend_refine.txt file that you uploaded together with your .py, .yaml, and so on, files in your overall GAE application, and unconditionally create one entity per line of that file (assuming the indentation in your actual code is correct -- mixed tabs and spaces may look fine in your editor but break when posted to SO, so I recommend you use strictly, only spaces, never tabs, in Python code).
However, this does seem to be a peculiar task. If /migratedata gets visited twice, it will create duplicates of all entities. If you change tagTrend_refine.txt and deploy a changed variation, then visit /migratedata... all the old entities will stick around and all the new entities will join them. And so forth.
Moreover -- /migratedata is NOT idempotent (if visited more than once it does not produce the same state as running it just once) so it shouldn't be a GET (and now we're on to super-elementary HTTP for a change!-) -- it should be a POST.
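In dataloader.py that concretely just means putting the logic in the handler's post method instead of get (a sketch; the body is the same loading code shown earlier):

class MigrateData(webapp2.RequestHandler):
    def post(self):
        # same loading logic as in the get() version shown earlier
        pass

You'd then trigger it with a POST request (a tiny form, or curl -X POST) rather than by just visiting the URL in a browser.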
In fact I suspect (but I'm really flying blind here, since you see fit to give such tiny amounts of information) that you in fact want to upload a .txt file to a POST handler and do the updates that way (perhaps avoiding duplicates...?). However, I'm no mind reader, so this is about as far as I can go.
I believe I have fully answered the question you posted (though perhaps not the one you meant but didn't express:-) and by SO's etiquette it would be nice to upvote and accept this answer, then, if needed, post another question, expressing MUCH more clearly and completely what you're trying to achieve, your current .py and .yaml (ideally with correct indentation), what they actually do and why you'd like to do something different. For POST vs GET in particular, just study When should I use GET or POST method? What's the difference between them? ...
Alex's solution will work, as long as all your data can be loaded in under 1 minute, as that's the timeout for an App Engine request.
For larger data, consider calling the datastore API directly from your own computer where you have the source. It's a bit of a hassle because it's a different API; it's not ndb. But it's still a pretty simple API. Here's some code that calls the API:
https://github.com/GoogleCloudPlatform/getting-started-python/blob/master/2-structured-data/bookshelf/model_datastore.py
Again, this code can run anywhere. It doesn't need to be uploaded to app engine to run.
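For illustration, here's a rough sketch of that approach using the google-cloud-datastore client library (the project id and file format are assumptions, mirroring the question's tab-separated file; run it on your own machine with credentials configured):

from google.cloud import datastore

client = datastore.Client('your-project-id')  # hypothetical project id

with open("tagTrend_refine.txt") as f:
    for line in f:
        tag, trend = line.strip().split("\t")
        entity = datastore.Entity(client.key('TagTrend_refine'))
        entity.update({'tag': tag, 'trendData': trend})
        client.put(entity)  # writes one entity per line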

Delete row from database when date passes

In a database, I have a field called date. Is there a way to delete a row when the date passes, so that it doesn't show up anymore? I've tried comparing it to today's date in the view, but this wouldn't happen every day, and people would still see it on the first page load. Any ideas?
Removing something from your database is not safe, for many reasons ranging from permissions to on_delete logic. If you are not sure that deleting is absolutely required, just mark the row with active=False instead.
I would not recommend cron, since it is hard to maintain: you have to set up different tasks on different environments manually, copy these files somewhere in your VCS, and work with bash instead of Python.
Also, I would not recommend storing something like scheduled events in your database, since that is not controlled by VCS and is hard to maintain.
If your app is pretty simple, the schedule library is an option, for example:
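A minimal sketch with the schedule library (the Events model and its import path are assumptions borrowed from the last answer below; you'd run this loop in its own process, with Django settings configured):

import time
from datetime import date

import schedule

from myapp.models import Events  # hypothetical app and model

def delete_expired():
    # remove every event whose date has already passed
    Events.objects.filter(event_date__lt=date.today()).delete()

schedule.every().day.at("00:05").do(delete_expired)

while True:
    schedule.run_pending()
    time.sleep(60)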
But if you are looking for some extra info like:
What rows were deleted?
Were there any exceptions?
You can move to the more complex Celery with Beat turned on. Extra dependencies (like Redis or RabbitMQ) are the main disadvantage.
Docs:
celery beat
Related:
How do I get a Cron like scheduler in Python?
I believe the best way would be to use a cron job, or to add a condition in the view so that only rows whose date hasn't yet passed are shown.
I would recommend you use a MySQL event, since it runs on a schedule regardless of application activity, unlike triggers, which fire only on database operations. You want this to occur outside of anything happening in the application, based purely on time, so a MySQL event will work for this scenario. See the full tutorial here: http://www.sitepoint.com/working-with-mysql-events/
I had an easier approach; I guess you could call it "hard-coded". I made a function called deleteevent with the following code:
from datetime import date, timedelta

def deleteevent():
    yesterday = date.today() - timedelta(1)
    # only issue the delete if there is anything dated yesterday
    if Events.objects.filter(event_date=yesterday).count():
        Events.objects.filter(event_date=yesterday).delete()
Then, in every other view function I had, I called this at the beginning, so the event would be deleted before the page loaded.

Google App Engine - Add & Reflect pattern, working around eventual consistency

I'm building a web application on app engine.
In my case, that's built on django-nonrel, but the key point is that it's using Google's datastore.
I love the fact that I don't need to deal with replication, sharding, backups and such, but one thing that always trips me up is eventual consistency, which seems to get in the way of implementing a common web app pattern I'm calling "Add & Reflect".
Let's say I have a project management app. The Project is its central model.
Now there's a web page where I see a list of all projects, can add a project, and then get the list of all projects reflected back, which should include the project I just added (assuming no errors).
So the pattern goes like this:
1. Get and display the list of existing projects
2. User adds a new project (using a form on that page)
3. The new project is created
4. In response, get and display the list of existing projects again (now including the new project)
Now the thing is that, due to eventual consistency, there is no guarantee whatsoever that I will get the new project when I query for the list of all projects right after adding it.
That would be fine if this momentary inconsistency appeared when another user (user B) requested the list of projects a second after the project was added by the first user (user A), but it's a real problem when user A performs an operation, does not see the result of his own action, and therefore gets no feedback.
I have gotten used to doing something like this to work around this problem:
def create_project(request):
    response_context = {}
    new_project = Project(name=request.POST['name'])
    new_project.save()
    response_context['projects'] = Project.get_serialized_projects()
    # on GAE, eventual consistency means we are not guaranteed to see the
    # new project while querying for all projects, therefore we might need
    # to add it manually...
    if new_project.serialize() not in response_context['projects']:
        response_context['projects'].append(new_project.serialize())
    return render('projects.html', response_context)
The problem is that this happens in many places in my code, so I'm thinking maybe I'm missing something there, since this pattern is such a basic web app pattern.
Any suggestions for other ways to handle this?
Yes, it's a common issue. No, there's no magic fix.
From the client side, once you know the commit succeeded, you can save the item locally (globals or storage) and then merge your saved data in when querying the datastore. Put an expiration on it so it's temporary. It's not trivial to make this work in all cases (say you added an item, then removed or renamed it, so the cache has to be updated too).
From the server side, it's common to cache recent saves in memcache and likewise merge them with your queries.
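A rough sketch of that server-side idea, using GAE's memcache with the Project/serialize names from the question (the helper functions and cache key are made up):

from google.appengine.api import memcache

RECENT_KEY = 'recent_projects'  # hypothetical cache key

def remember_recent(project):
    # call this right after a successful save
    recent = memcache.get(RECENT_KEY) or []
    recent.append(project.serialize())
    memcache.set(RECENT_KEY, recent, time=60)  # short expiry; only a consistency bridge

def get_projects_consistent():
    projects = Project.get_serialized_projects()
    # merge in anything the eventually consistent query hasn't caught up with yet
    for item in memcache.get(RECENT_KEY) or []:
        if item not in projects:
            projects.append(item)
    return projects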

How do you make a django field reference to a python class?

I am working on a project to expand a testing suite that my company uses. One of the things asked of me was to link the website to our GitHub source so that the dev team could keep tracking issues there instead of looking in two places. I was able to do this, but the problem is that every time a bug is reported, a new issue is opened.
I want to add a field to my Django model that tracks an Issue object (from the github3.py wrapper) that is sent to GitHub. I want to use this to check whether an Issue has already been created on GitHub by that instance of the BugReport; if it has, edit the Issue instead of creating a duplicate on GitHub. Does Django have something that can handle this sort of reference?
I am using Django 1.3.1 and Python 2.7.1
EDIT
I was able to figure out my specific problem using esauro's suggestions. However, as mkoistinen asked: if this problem came up in a program where the workaround was not as easy as this one, should an object reference be created as I originally asked, or is that bad practice? And if such an object reference is okay, how would you do it with Django models?
I'm the creator of github3.py.
If you want to get the issue itself via its number, there are a couple of different ways to do it. I'm not sure how you're interacting with the API, but you can do:
import github3
i = github3.issue('repo_owner', 'repo_name', issue_number)
or
import github3
r = github3.repository('repo_owner', 'repo_name')
i = r.issue(issue_number)
or
import github3
g = github3.login(client_key='client_key', client_secret='client_secret')
i = g.issue('repo_owner', 'repo_name', issue_number)
# or
r = g.repository('repo_owner', 'repo_name')
i = r.issue(issue_number)
Otherwise, if you're looking for something that's in an issue without knowing the number:
import github3
r = github3.repository('repo_owner', 'repo_name')
for i in r.iter_issues():
    if 'text to search for' in i.body_text:
        i.edit('...')
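As for keeping the reference on the Django side, one simple approach (a sketch, not part of the original answer; the model and field names are hypothetical) is to store just the issue number and fetch the Issue on demand:

from django.db import models

class BugReport(models.Model):
    title = models.CharField(max_length=200)
    # number of the GitHub issue opened for this report, if any
    github_issue_number = models.IntegerField(null=True, blank=True)

Then, when a bug is re-reported, check report.github_issue_number: if it's set, fetch the issue with github3.issue(...) as above and edit it instead of creating a duplicate.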