First of all, sorry for my bad English; I am from Germany and currently learning English, so please don't be harsh.
Now my question: is there a way for a bot to execute a response when an equation turns out correct/equal? So, something like an if-else statement.
UPDATE - After writing the original answer below, I realized that regular commands would be able to handle many cases if there were a suitable parameter, like $if('expr', 'true-response', 'false-response'). So I wrote one. You can find it at https://github.com/madelsberger/if_StreamlabsParameter
The parameter is itself implemented in a Python script, so you'd still have to set your bot up to use Python scripts. But then if you install this parameter script, you can use it in regular commands instead of implementing each command with yet more custom scripting.
Note that this script is not approved by the Streamlabs Chatbot support (nor am I familiar with a procedure for submitting it to be approved); and I provide it under the MIT license (i.e. "AS IS"). It should work fine, and you're free to review the code to verify I haven't done anything malicious. I'll try to make a reasonable effort to answer reasonable questions as time permits.
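For example, once the parameter is installed, a command's response text could contain something like the following (a hypothetical usage; see the repository's README for the exact expression syntax it supports):
$if('2 + 2 == 4', 'math still works', 'something is very wrong')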
So what you want is for the user to issue a command and the bot to do a calculation; if a particular equation is satisfied, it sends a particular response, and if the equation is not satisfied it sends a different response (or doesn't respond at all). Is that right?
I don't think basic commands (i.e. the Commands tab) can do this. You can do it with custom scripting. The script you need may not be very complicated, but you would need to install Python 2.7.13 and write a little Python code that satisfies the requirements for a Streamlabs Chatbot script.
As a very basic example, say we want to create a "guess my number" game, where the bot announces when someone's guess is correct.
In your chatbot's .../Services/Scripts directory you would create a subdirectory for your script (let's call it guess), and in that folder you would create guess_StreamlabsSystem.py as follows:
# chatbot won't load a script that doesn't define these variables
ScriptName = "guess"
Website = None
Description = "number guessing game"
Creator = "madelsberger"  # or for a script you write, your name here
Version = "1.0.0"

# global data used by the script
correctNumber = None

# called when your script starts up
def Init():
    global correctNumber
    correctNumber = Parent.GetRandom(1, 11)

# called whenever the chatbot has data - chat messages, whispers, etc. - that
# your script might care about
def Execute(data):
    global correctNumber
    if data.IsChatMessage() and data.GetParam(0).lower() == '!guess':
        try:  # avoid problems if a non-numeric guess is given
            if int(data.GetParam(1)) == correctNumber:
                Parent.SendTwitchMessage("#" + data.User
                                         + " guessed correctly!")
                correctNumber = Parent.GetRandom(1, 11)
            else:
                Parent.SendTwitchMessage(data.GetParam(1) + " is not correct")
        except:
            pass

# called frequently to mark the passage of time; if there's some condition
# your bot wants to act on *other* than data from the bot, and if you can
# write code that watches for that condition, it can go here; but beware this
# gets run frequently, so if the check is resource-intensive then you don't
# want to run it on *every* tick
def Tick():
    pass
Note that in practice you might want to add cooldown controls, or permission controls. Your code can see an object called Parent which provides methods to handle these things. You also might want things like the trigger command to be configurable; you can set that up using a UI_Config.json file. See the chatbot docs for more details on scripting. https://cdn.streamlabs.com/chatbot/Documentation.pdf
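For instance, here is a rough sketch of what a UI_Config.json making the trigger command configurable might look like; the exact schema here is my assumption, so check the documentation linked above for the authoritative format:

{
  "output_file": "settings.json",
  "command": {
    "type": "textbox",
    "value": "!guess",
    "label": "Command",
    "tooltip": "The chat command that triggers a guess"
  }
}

Your script would then typically read the configured value from the output file (settings.json here) during Init().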
My Django app uses update_or_create() to update a bunch of records. In some cases, updates are really few within a ton of records, and it would be nice to know what got updated within those records. Is it possible to know what got updated (i.e. fields whose values got changed)? If not, does anyone have ideas for workarounds to achieve that?
This will be invoked from the shell, so ideally it would be nice to be prompted for confirmation just before a value is being changed within update_or_create(); but failing that, knowing what got changed will also help.
Update (more context): Thought I'd give more context here. The data in this Django app gets updated through various means (users on the web site, the admin page, scripts run from the shell that populate data from a CSV, etc.). The above question is important mostly for the shell scripts that update data from CSVs, hence a solution at the database/trigger/signal level may not be helpful here (I guess).
This is what I ended up doing:
for row in reader:
    school_obj0, created = Org.objects.get_or_create(school_id=row[0])
    if school_obj0.name != row[1]:
        print(school_obj0.name, '==>', row[1])
        confirmation = input('proceed? [y/n]: ')
        if confirmation == 'y':
            school_obj1, created = Org.objects.update_or_create(
                school_id=row[0], defaults={"name": row[1]})
Happy to know about improvements to this approach (please see the update in the question with more context)
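One possible refinement, sketched here with a hypothetical FIELDS mapping from model fields to CSV columns, is to collect the changed fields into a dict so the comparison generalizes to more than one field:

FIELDS = {'name': 1}  # model field -> csv column index (hypothetical)

for row in reader:
    obj, created = Org.objects.get_or_create(school_id=row[0])
    # collect only the fields whose csv value differs from the db value
    changes = {f: row[i] for f, i in FIELDS.items()
               if getattr(obj, f) != row[i]}
    if changes:
        print(obj.school_id, changes)
        if input('proceed? [y/n]: ') == 'y':
            Org.objects.update_or_create(school_id=row[0], defaults=changes)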
This will be invoked from the shell, so ideally it would be nice to be
prompted for confirmation just before a value is being changed
Unfortunately, databases don't work like that. It's the responsibility of applications to provide this functionality, and Django isn't an application. You can, however, use Django to write an application that provides this functionality.
As for finding out whether an object was updated or created, that's what the return value gives you: a tuple whose second value is a flag that is True if the object was created and False if an existing one was updated.
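For example, a minimal sketch reusing the Org model and csv row from the question:

obj, created = Org.objects.update_or_create(
    school_id=row[0], defaults={"name": row[1]})
if created:
    print('created new Org:', obj.school_id)
else:
    print('matched existing Org and applied the defaults:', obj.school_id)

Note that created being False only tells you an existing row was matched, not whether any field value actually changed; detecting that still requires a comparison like the one in your loop.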
I have a Django 1.9 application that is running a section of code where changes are made to the database based on the results of queries to certain remote APIs. For instance, this might be data about commits, file changes, reviewers, pull requests, etc. that I want to save as entities in my database.
commit_data = commit_API_client.get_commit_info(argument1, argument2)
new_commit = models.Commit.objects.create(**commit_data)

# if the last API call failed, this will fail
# I will need to run this again to get these files, so I need
# to get the commit all over again, too
files = file_API_client.get_file_info(new_commit.id)
new_files = models.Files.objects.create(**files)

# do some more stuff here
It's very possible that one of the few APIs I'm calling will return some error instead of valid data. I essentially need to make this section into a single atomic transaction so that if there are no errors returned from the HTTP requests, I commit all the changes to the database. Otherwise, I might lose some data if 2 APIs return correctly, but the 3rd doesn't.
I saw that Django supports commit hooks for database transactions, but I was wondering whether that would be applicable to this situation and how I would go about implementing it.
If you want to make that into an atomic operation, just wrap it in transaction.atomic(). If any of your code raises an exception the whole block will roll back. If you're only doing reads from the remote API that should work fine. A typical pattern is:
from django.db import transaction

def my_function():
    try:
        with transaction.atomic():
            # Do your work here
            pass
    except Exception:
        # Do some error handling
        pass
Django does have an on_commit hook, but that isn't really applicable here. Its purpose is to run some code after a transaction has completed successfully. For example, you might use it to write some log data to a remote API if the transaction was successful.
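For completeness, here is a minimal sketch of on_commit (available since Django 1.9); the callback runs only if the enclosing transaction commits successfully:

from django.db import transaction

def notify_success():
    # hypothetical callback: e.g. write log data to a remote API
    pass

with transaction.atomic():
    # ... create your Commit/Files objects here ...
    transaction.on_commit(notify_success)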
Can anyone guide me in the right direction as to where I should place a script solely for loading data into ndb? I wish to upload all the data into GAE ndb so that the application can perform queries on it.
Right now, the loading of data is in my application. I wish to place it separately from the main application.
Should it be edited in the yaml file?
EDITED
This is a snippet of the entity and the handler to upload the data into GAE ndb.
I wish to place this chunk of code separately from my main application .py. The reason being that the uploading of this data won't be done frequently, and I want to keep the code in the main application "cleaner".
class TagTrend_refine(ndb.Model):
    tag = ndb.StringProperty()
    trendData = ndb.BlobProperty(compressed=True)

class MigrateData(webapp2.RequestHandler):
    def get(self):
        listOfEntities = []
        f = open("tagTrend_refine.txt")
        lines = f.readlines()
        f.close()
        for line in lines:
            temp = line.strip().split("\t")
            data = TagTrend_refine(
                tag=temp[0],
                trendData=temp[1]
            )
            listOfEntities.append(data)
        ndb.put_multi(listOfEntities)
For example, if I placed the above code in a file called dataLoader.py, where and how should I invoke this script?
In app.yaml, alongside my main application (knowledgeGraph.application)?
- url: /.*
  script: knowledgeGraph.application
You don't show us the application object (no doubt a WSGI app) in your knowledge.py module, so I can't know what URL you want to serve with the MigrateData handler -- I'll just guess it's something like /migratedata.
So the class TagTrend_refine should be in a separate file (usually called models.py) so that both your dataloader.py, and your knowledge.py, can import models to access it (and models.py will need its own import of ndb of course). (Then of course access to the entity class will be as models.TagTrend_refine -- very basic Python).
Next, you'll complete dataloader.py by defining a WSGI app, e.g., at the end of the file:
app = webapp2.WSGIApplication(routes=[('/migratedata', MigrateData)])
(of course this means this module will need to import webapp2 as well -- can I take for granted a knowledge of super-elementary Python?).
In app.yaml, as the first URL, before that /.*, you'll have:
- url: /migratedata
  script: dataloader.app
Given all this, when you visit '/migratedata', your handler will read the "tagTrend_refine.txt" file that you uploaded together with your .py, .yaml, and so on, files in your overall GAE application, and unconditionally create one entity per line of that file (assuming you fix the multiple indentation problems in your code as displayed above, but, again, this is just super-elementary Python -- presumably you've used both tabs and spaces and they show up OK in your editor, but not here on SO... I recommend you use strictly, only spaces, never tabs, in Python code).
However this does seem to be a peculiar task. If /migratedata gets visited twice, it will create duplicates of all entities. If you change the tagTrend_refine.txt and deploy a changed variation, then visit /migratedata... all old entities will stick around and all the new entities will join them. And so forth.
Moreover -- /migratedata is NOT idempotent (if visited more than once it does not produce the same state as running it just once) so it shouldn't be a GET (and now we're on to super-elementary HTTP for a change!-) -- it should be a POST.
In fact I suspect (but I'm really flying blind here, since you see fit to give such tiny amounts of information) that you in fact want to upload a .txt file to a POST handler and do the updates that way (perhaps avoiding duplicates...?). However, I'm no mind reader, so this is about as far as I can go.
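If that is the goal, a hedged sketch of such a POST handler might look like this (it assumes the file's content arrives as the raw request body; the class name is hypothetical). Keying the entities by tag makes the upload idempotent, so re-running it overwrites rather than duplicates:

class UploadData(webapp2.RequestHandler):
    def post(self):
        entities = []
        # assumption: the uploaded file's content is the raw POST body
        for line in self.request.body.splitlines():
            tag, trend_data = line.strip().split("\t")
            # setting id=tag keys the entity by tag, so a second run
            # overwrites existing entities instead of duplicating them
            entities.append(TagTrend_refine(id=tag, tag=tag,
                                            trendData=trend_data))
        ndb.put_multi(entities)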
I believe I have fully answered the question you posted (though perhaps not the one you meant but didn't express:-) and by SO's etiquette it would be nice to upvote and accept this answer, then, if needed, post another question, expressing MUCH more clearly and completely what you're trying to achieve, your current .py and .yaml (ideally with correct indentation), what they actually do and why you'd like to do something different. For POST vs GET in particular, just study When should I use GET or POST method? What's the difference between them? ...
Alex's solution will work, as long as all your data can be loaded in under 1 minute, since that's the timeout for an App Engine request.
For larger data, consider calling the datastore API directly from your own computer where you have the source. It's a bit of a hassle because it's a different API; it's not ndb. But it's still a pretty simple API. Here's some code that calls the API:
https://github.com/GoogleCloudPlatform/getting-started-python/blob/master/2-structured-data/bookshelf/model_datastore.py
Again, this code can run anywhere. It doesn't need to be uploaded to app engine to run.
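For a rough idea of what that looks like, here is a hedged sketch using the google-cloud-datastore client library (an assumption on my part; the linked sample uses an equivalent API), run from your own machine with credentials configured:

from google.cloud import datastore

client = datastore.Client()  # picks up your project and credentials

entities = []
with open("tagTrend_refine.txt") as f:
    for line in f:
        tag, trend_data = line.strip().split("\t")
        entity = datastore.Entity(key=client.key("TagTrend_refine"))
        entity["tag"] = tag
        entity["trendData"] = trend_data
        entities.append(entity)

# put_multi is limited to 500 entities per call, so write in batches
for i in range(0, len(entities), 500):
    client.put_multi(entities[i:i + 500])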
Like many others, I've been learning web development on Django through building a test app. I've got the basic models set up. I've populated a few of the tables with the absolute minimum data needed for further testing, through fixtures.
Now for a different table, I want to create data tuples through a custom management command which takes the required arguments. If this works as expected, I'll save the created data to the database by adding the --save option.
The syntax of the command is like this
create_raw_data owner_id temperature [--save]
where owner_id is required and temperature (in C) is optional. Within the Handle method, I'm using factory boy to create the raw_data with the given arguments etc.
I did have some issues, but after searching on SO, Google, the Django docs, etc., I got the command working fine.
EXCEPT when I input a negative temperature...
Then I get the following error
Usage: C:\test\manage.py create_raw_data [options]
Creates a RawData object. Usage: create_raw_data owner_id temperature [--save]
C:\test\manage.py: error: no such option: -5
The code I have for parsing the args is like this
for index, item in enumerate(args):
    if index == 0:
        owner_id = int(item)
    elif index == 1:
        temp = int(item)
I put a print(args) as the first line inside Handle, but it seems control is not even reaching there.
I'm not sure what is wrong... please help...
Thanks a lot
I got the issue fixed, so I'm providing an answer for others who may come across this.
The issue was with the parse_args method of optparse. I've read in a number of places that although optparse is deprecated and argparse is recommended instead, Django recommends using optparse since that is what it uses. Long story short, the linked answer suggested a few alternatives, and using create_raw_data 1 -- -5 works as expected. So I did get a workaround. Thanks.
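For anyone on a newer Django (1.8+), the argparse-based add_arguments API avoids the problem in most cases, since argparse only treats -5 as an option when the parser defines options that look like negative numbers. A hedged sketch of what the command might look like:

from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Creates a RawData object. Usage: create_raw_data owner_id temperature [--save]'

    def add_arguments(self, parser):
        parser.add_argument('owner_id', type=int)
        parser.add_argument('temperature', type=int, nargs='?', default=None)
        parser.add_argument('--save', action='store_true')

    def handle(self, *args, **options):
        owner_id = options['owner_id']
        temp = options['temperature']
        # ... create the RawData object here (e.g. via factory boy) ...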
I have SoapUI 4.5.1, and I have made a load test; it is working fine. My problem is that I run the same request every time, and I need to change the values of the SOAP request I am sending.
For example, here is a block of my SOAP request:
<ns:Assessment>
    <ns:Project>
        <ns:ProviderId>SHL</ns:ProviderId>
        <ns:ProjectId>SampleAssessment</ns:ProjectId>
    </ns:Project>
</ns:Assessment>
Provider ID: SHL
Project ID: SampleAssessment
Is there a way to make those values change, drawing from some kind of list?
For example: Provider IDs [SHL, SLH, LHS]
Project IDs [SampleAssessment, TestAssessment, AnotherAssessment]
And with a load test I am making three requests, so the values for the first request look like this:
<ns:Assessment>
    <ns:Project>
        <ns:ProviderId>SHL</ns:ProviderId>
        <ns:ProjectId>SampleAssessment</ns:ProjectId>
    </ns:Project>
</ns:Assessment>
for the second like this:
<ns:Assessment>
    <ns:Project>
        <ns:ProviderId>SLH</ns:ProviderId>
        <ns:ProjectId>TestAssessment</ns:ProjectId>
    </ns:Project>
</ns:Assessment>
and so on...
Is there a way to make this happen with SOAP UI?
From my experience, you will need to use a Groovy Script step.
For example, if you have a step before your request that is a script, you can use something like:
context.setProperty("ProviderId", "SHL")
Then in your request, use:
<ns:ProviderId>${ProviderId}</ns:ProviderId>
Of course, this doesn't buy you much by itself. There are a few ways to vary what the context.setProperty("ProviderId", "SHL") line will set. You can create a collection and iterate over it using something like:
def providers = ['ABC', 'DEF', 'GHI', 'JKL']
providers.each() {
    context.setProperty("ProviderId", it)
    testRunner.runTestStepByName("nameofteststep")
}
Where "nameofteststep" is the name of the Soap Request test step. This might sound odd, but if you right click the test step and disable it, the groovy script will still be able to execute it but it will not run sequentially. By that I mean that the groovy script will run it 4 times, but it won't run a fifth time when the script is complete because it is after the script. Then you just need to keep in mind that each load test thread makes four requests, but I am pretty sure that the SoapUI statistics will take this into account for you... might want to keep an eye out for it, though.
Alternatively, you could check the 'threadIndex' and set the context variable based on that, a bit like this: Log ThreadCount.
You could also use a collection without a loop and increment an index that you save as a testcase property and send the string corresponding to the index.
Personally, I think the first way is the most straightforward but I can provide an example of the other ones if you like.
There is a simple way of doing this without writing a Groovy script.
After creating a test case you should include the below test steps:
1-Data source
2-Request
3-Loop
The DataSource step will read an Excel file (there are other data source methods, such as XML, Groovy, JDBC, and grid, but Excel is the simplest one).
In it you should include the data that you need to change within the request.
Within the test request, right-click and select "Get Data". Note that your test request should look like this:
<ns:ProviderId>${ProviderId}</ns:ProviderId>
Then the last step is the Loop. This returns to the first step until the data ends.
I hope this helps.