MTurk: transfer HIT from Sandbox to Production site

If I create a HIT in the Sandbox via MTurk's GUI, is it possible to transfer it to the Production site, or do I have to re-create the HIT manually on the Production site?
In particular, is it possible to download the .input, .question and .properties files for a HIT created via the GUI in the sandbox, in order to use them to generate the same HIT on the Production site via the CLT?
The obvious way seems to be using MTurk HIT layouts. However, reading the docs, I don't see how, or whether, it is possible to do this using the CLT. The doc on HITLayoutParameter requires using CreateHIT, but this is not an available command in the CLT (it only has loadHITs).
I have seen other questions, "Creating mTurk HIT from Layout with parameters using boto and python" and "Create a MTurk HIT from an existing template", about ways to do it with boto, but I am still wondering whether it's doable with the CLT.

The live and sandbox modes are completely separate and no transfer is possible from one to the other.
You will need to implement this programmatically by storing the specs of the sandbox HIT and creating a live HIT.
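For the programmatic route, here is a minimal sketch using boto3 (my assumption; the CLT itself can't do this, and the HIT ID and lifetime below are placeholders):

import boto3

SANDBOX = "https://mturk-requester-sandbox.us-east-1.amazonaws.com"

# Fetch the HIT you built through the sandbox GUI.
sandbox = boto3.client("mturk", region_name="us-east-1", endpoint_url=SANDBOX)
hit = sandbox.get_hit(HITId="YOUR_SANDBOX_HIT_ID")["HIT"]

# Re-create it against the live endpoint (the client default).
live = boto3.client("mturk", region_name="us-east-1")
live.create_hit(
    Title=hit["Title"],
    Description=hit["Description"],
    Keywords=hit["Keywords"],
    Reward=hit["Reward"],
    MaxAssignments=hit["MaxAssignments"],
    AssignmentDurationInSeconds=hit["AssignmentDurationInSeconds"],
    LifetimeInSeconds=7 * 24 * 3600,  # not returned by get_hit; set explicitly
    Question=hit["Question"],         # the .question XML comes back verbatim
)

Note that create_hit also accepts HITLayoutId and HITLayoutParameters, if you would rather go the layout route mentioned in the question.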
Another option is to use a service like TurkPrime.com, which allows you to copy HITs from sandbox to live mode.

`gsutil equivalent` missing in GCP User Interface

I am doing a tutorial for the Google Certified Associate Cloud Engineer 2020 exam, which used to be on Udemy and is now on Cloud Guru. I am watching a video on GCS: Google Cloud Storage.
At one point the tutor, while using the GCP user interface, renames a file. In the Rename Object window, a great feature shows the gsutil equivalent.
This gsutil equivalent is not showing in my GCP user interface. Is there an option to turn this on, or is this a feature that no longer exists?
I have tried looking at different options in the user interface, but I cannot find the one I am looking for. I have tried to Google this, but most of what comes up relates to gsutil itself rather than the user interface.
Regarding whether you have to activate something to get this feature: you don't, and there is no way to, because the GCP UI has changed since the video you used as a reference was released.
If you want the same gsutil command, you can still get it by clicking the Move option instead of Rename. This opens another window where you will find the same gsutil command as in the image you shared.
The same command appears for Move because a rename is, in the end, the same as a move, which is in fact a two-step process: a copy and a delete, as can be seen in the steps to rename an object using the REST API described in the docs.
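To illustrate the copy-then-delete behaviour, here is a minimal sketch with the google-cloud-storage Python client (the bucket and object names are placeholders):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")      # placeholder bucket name
blob = bucket.blob("old-name.txt")

# A "rename" is really a copy followed by a delete, which is also
# what the UI's gsutil equivalent (gsutil mv) does under the hood.
bucket.copy_blob(blob, bucket, "new-name.txt")
blob.delete()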
If you want this feature to be available again in the GCP UI, you can always open a Feature Request in the Issue Tracker asking for it.
The rename feature is also available in the GCP Console; check the following screenshot:
https://i.stack.imgur.com/b8pyW.png

Why do we need to set up AWS and a Postgres DB when we deploy our app using Heroku?

I'm building a web API by following the YouTube video below, and up until the AWS S3 bucket setup I understood everything fine. But he first deploys everything locally, and then, after making sure everything works, he transfers all the static files to AWS, and for the DB he switches from SQLite3 to Postgres.
django portfolio
I still don't understand this part: why do we need to put our static files on AWS and create a PostgreSQL database when Django ships with SQLite3 as the default database? I'm thinking that if I'm the only admin, just connecting my GitHub repo to Heroku should be enough, and any time I change something in the API I just need to push those changes to the GitHub master branch, and that should be it.
Why do we need to use AWS to set up the static file location, set up an RDS (relational database), and do all these things from the beginning? I'm still not getting it!
Can anybody help explain this?
Thanks
Databases
There are several reasons a video guide would encourage you to switch from SQLite to a database server such as MySQL or PostgreSQL:
SQLite is great but doesn't scale well if you're expecting a lot of traffic
SQLite doesn't work if you want to distribute your app across multiple servers. Going back to Heroku, if you serve your app with multiple Dynos, you'll have a problem because each Dyno will use a distinct SQLite database. If you edit something through the admin, it will happen on one of these databases, at random, leading to inconsistencies
Some Django features aren't available on SQLite
SQLite is the default database in Django because it works out of the box, and is extremely fast and easy to use in local/development environments for prototyping.
However, it is usually not suited for production websites. Additionally, while it can be tempting to store your sqlite.db file along with your code, for instance in a git repository, it is considered a bad practice because your database can contain sensitive data (such as passwords, usernames, emails, etc.). Hence, a strict separation between your code and data is a good practice.
Another way to put it is that your code and your data have different lifecycles. You want to be able to edit data in your database without redeploying your code, and update your code without touching your database.
Even if you can remove public access to some files through GitHub, this is not a good practice, because when you work in a team with multiple developers, developers may have access to the code but not to the production data, which is usually sensitive. If you work with 5 people and each one of them has a copy of your database, the risk of losing it or having it stolen is 5x higher ;)
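To make the switch concrete, here is a minimal settings.py sketch of the kind of setup Heroku deployments commonly use (the dj-database-url package is my assumption, not necessarily what the video uses):

import os
import dj_database_url

# DATABASE_URL is set by Heroku's Postgres add-on; fall back to
# SQLite locally so development keeps working out of the box.
# (BASE_DIR is the variable Django's default settings.py defines.)
DATABASES = {
    "default": dj_database_url.config(
        default="sqlite:///" + os.path.join(BASE_DIR, "db.sqlite3"),
        conn_max_age=600,
    )
}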
Static files
When you work locally, Django's built-in runserver command handles the serving of static assets such as CSS, Javascript and images for you.
However, this server is not designed for production use either. It works great in development, but will start to fail very fast on a production website, which has to handle far more requests than your local version.
Because of that, you need to host these static files somewhere else, and AWS is one place where you can do that. AWS will serve those files for you, in a very efficient way. There are other options available, for instance configuring a reverse proxy with Nginx to serve the files for you, if you're using a dedicated server.
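As an illustration, here is a minimal settings.py sketch using the django-storages package to serve static files from S3 (django-storages is my assumption rather than something named in the question, and the bucket name and region are placeholders):

INSTALLED_APPS += ["storages"]

AWS_STORAGE_BUCKET_NAME = "my-app-static"   # placeholder bucket
AWS_S3_REGION_NAME = "us-east-1"

# Have collectstatic upload to, and templates reference, the S3 bucket.
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
STATIC_URL = "https://%s.s3.amazonaws.com/" % AWS_STORAGE_BUCKET_NAME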
As far as I can tell, the progression you describe from the video takes you from a local development environment to a more efficient and scalable production setup. That is to be expected, because it's less daunting to start with something really simple (SQLite, Django's built-in runserver) and move on to more complex and abstract topics and tools later on.

Saving Interactive Bokeh Chart

I have created an interactive Bokeh chart with various widgets which allow manipulation of the data. I now want to understand what is the standard way of sharing such a plot or how do I save it for sharing.
The plot is created with the curdoc method and then output to the Bokeh server using session.show().
# imports needed for this bokeh.client workflow
from bokeh.client import push_session
from bokeh.io import curdoc
from bokeh.models import HBox  # legacy layout model from Bokeh 0.x

# create the current visualization using plot p and widget inputs
# (p and inputs are defined earlier in the script)
curdoc().add_root(HBox(inputs, p, width=1100))

# run the session
session = push_session(curdoc())
session.show()  # open the document in a browser
session.loop_until_closed()  # run forever
Does the app trigger actual Python code?
If not, you might consider reworking it as a non-server standalone document (using CustomJS callbacks, for instance). That would just generate a self-contained static HTML file that you could publish or send anywhere, and have it be immediately accessible.
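For illustration, here is a minimal standalone-document sketch (it uses the newer js_on_change API rather than the bokeh.client style above, and all names are placeholders):

from bokeh.layouts import column
from bokeh.models import ColumnDataSource, CustomJS, Slider
from bokeh.plotting import figure, output_file, save

source = ColumnDataSource(data=dict(x=[1, 2, 3], y=[1, 2, 3]))
p = figure()
p.line("x", "y", source=source)

slider = Slider(start=1, end=10, value=1, title="scale")
slider.js_on_change("value", CustomJS(args=dict(source=source), code="""
    // runs entirely in the browser, so no Bokeh server is needed
    source.data = {x: source.data.x,
                   y: source.data.x.map((v) => v * cb_obj.value)};
"""))

output_file("standalone.html")  # one self-contained HTML file to share
save(column(slider, p))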
If your app does rely on executing actual Python code to do the work, then it needs to actually be running somewhere for users to interact with it. First off, I would suggest you make a real app that runs on the server, like the ones in the demo app gallery (see also Use Case Scenarios in the User's Guide). A real server app, i.e. one you run like bokeh serve myapp.py, is definitely preferred over using bokeh.client, especially for "publishing" scenarios (it will also be simpler, less code, and more performant). Then, distributing the app could mean a few things:
You give them the script and they run bokeh serve app.py locally themselves
You "deploy" the app by leaving it running on a server with a URL that is accessible to users who you want to be able to see it
Depending on how much compute the app does, and how many users you expect at a given time, the second option could be as simple as running bokeh serve app.py somewhere. But if there is heavy compute or you expect a lot of traffic, you may need more sophisticated "scale out" deployments behind a load balancer. More information is in Deployment Scenarios in the User's Guide, and of course we are happy to help with more extended discussions on the public mailing list. Finally, I should mention that in the near future, automated scalable publishing of Bokeh applications will be available as a feature on https://anaconda.org/
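For concreteness, here is a minimal sketch of a real server app of the bokeh serve app.py kind described above (all names are placeholders):

# app.py, run with: bokeh serve app.py
from bokeh.io import curdoc
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, Slider
from bokeh.plotting import figure

source = ColumnDataSource(data=dict(x=[1, 2, 3], y=[1, 2, 3]))
p = figure()
p.line("x", "y", source=source)

slider = Slider(start=1, end=10, value=1, title="scale")

def update(attr, old, new):
    # Real Python runs here on every slider move; this is the part
    # that requires a live server process.
    source.data = dict(x=[1, 2, 3], y=[v * new for v in [1, 2, 3]])

slider.on_change("value", update)
curdoc().add_root(column(slider, p))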

MTurk external website example

I am looking for an example where a fully completed web app can be embedded into Amazon Mechanical Turk. I am working on a "game-like" activity that does not really fit a form structure.
Here is my game/activity:
http://52.91.100.69:3030/
I would like to embed such tasks inside Mechanical Turk. My code accepts URL parameters such as assignmentId, workerId, etc. (which I found in the AWS MTurk docs).
For example:
http://52.91.100.69:3030/?assignmentId=23423&workerId=34&hitId=455
Basically, I am handling all the data logging etc.; I plan to generate codes for users to enter upon completion of a number of tasks.
I would like to know how I can accomplish this, preferably in Python (boto)?
I looked at this tutorial: http://kaflurbaleen.blogspot.com/2014/06/in-which-i-battle-mturk-external-hits.html
Using this I made this boto file: https://gist.github.com/arendu/631a416e4cb17decb9dd
When I run it I don't see any errors, but I can't tell whether the HIT is available. I checked my AWS MTurk requester console (looked at Manage HITs individually), but no HITs are present.
What am I doing wrong?
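For what it's worth, here is a minimal sketch of an external-question HIT using boto3 (boto3 rather than the boto 2 used in the gist; the URL and all HIT properties are placeholders):

import boto3

EXTERNAL_URL = "https://example.com/my-game"  # your activity's URL

question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>%s</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>""" % EXTERNAL_URL

# Point at the sandbox endpoint while testing; drop endpoint_url to go live.
client = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)
client.create_hit(
    Title="Play a short game",
    Description="Interactive task served from an external site",
    Keywords="game, interactive",
    Reward="0.50",
    MaxAssignments=10,
    LifetimeInSeconds=3600,
    AssignmentDurationInSeconds=1800,
    Question=question_xml,
)

MTurk appends assignmentId, hitId and workerId to the ExternalURL when a worker views the task, which matches the parameters the app already reads. Also, if the script targets the sandbox host, the HIT will appear in the sandbox requester console (requestersandbox.mturk.com) rather than the production one.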

Missing SQL tab when using AD provider

I've deployed a copy of Opserver, and it is working perfectly when using alladmin as the security setting. However, once I switch it to ad and configure the groups, the SQL tab goes away and I get an access denied message if I try browsing directly to it. The dashboard still displays all SolarWinds data as expected.
The build I'm using is actually from November. I tried a more recent build, but I lose the network information from SolarWinds (the CPU and Mem graphs show, but Net is all blank).
Is there a separate place to configure the SQL permissions that I'm missing?
I think perhaps there was some caching going on for the hub that wasn't happening for the provider, because they are both working now. Since it was a new security group, perhaps it hadn't replicated yet (causing the SQL auth to fail), while the dashboard provider was still using the previous authentication?
I also discovered a neat option while researching this, though: the GitHub page mentions that you can also specify security at the provider level in the JSON using the AdminGroups and ViewGroups properties!