Can I use AWS as a virtual computer?

I know zilch about AWS, and everything I read about it is at a level of generality beyond my poor understanding.
So, to be specific, say I plan a peripatetic lifestyle. Or say I cross national borders frequently with an ever-present danger of having my laptop confiscated.
Can I keep only a barebones laptop computer locally, put compilers/interpreters (say Perl, Python, etc.), editors, browsers, and my own programs and data on AWS, edit my code and run it on AWS, and then view the output on my laptop from wherever I may be?
Does AWS provide any of these programs as a service, so I don't have to upload them?

Amazon WorkSpaces may have the functionality you are looking for.
https://aws.amazon.com/workspaces/?nc2=h_m1
There is a free trial version to test it out.
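If you end up scripting around it, WorkSpaces also has an API. Below is a minimal, untested sketch using the AWS SDK for JavaScript v3 to list your WorkSpaces and their state; the region is a placeholder, and the @aws-sdk/client-workspaces package plus configured AWS credentials are assumed:

    // Hypothetical sketch: list your WorkSpaces and their state with the
    // AWS SDK for JavaScript v3. Assumes @aws-sdk/client-workspaces is
    // installed and credentials are configured; the region is a placeholder.
    const { WorkSpacesClient, DescribeWorkspacesCommand } = require("@aws-sdk/client-workspaces");

    const client = new WorkSpacesClient({ region: "us-east-1" });

    async function listWorkspaces() {
      const { Workspaces = [] } = await client.send(new DescribeWorkspacesCommand({}));
      for (const ws of Workspaces) {
        // Each entry includes the assigned user and the running state.
        console.log(`${ws.UserName}: ${ws.WorkspaceId} (${ws.State})`);
      }
    }

    listWorkspaces().catch(console.error);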

Related

Emacs Tramp access to AWS SageMaker instance

I do machine learning code development on an AWS SageMaker instance, and would like to make use of Emacs/Tramp†. My question: how, if at all, could that be done? I strongly suspect that some knowledge about security protocols / IAM roles, etc., would be important, but I am a mere SageMaker end-user.
A co-worker may be able to assist with some of these questions, but he is way over-subscribed, and it is a big ask of him to indulge my personal preferences (however strong). So I start by asking: has anybody else already solved this problem, or are there good starting points for consideration?
Already seen:
Martin Baillie's Emacs TRAMP over AWS SSM APIs, which seems light on key details, and may not be applicable for SageMaker environments
AWS's own Tutorial: Set Up PyCharm Professional with a Development Endpoint, which seems to be pretty specific to PyCharm, and which again may not be suitable for SageMaker environments.
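For concreteness, my reading of the Baillie post above is that it boils down to tunnelling SSH through a Session Manager session via an ssh_config ProxyCommand and then pointing TRAMP at that host. A rough, untested sketch (the instance id and user are placeholders; whether a SageMaker environment exposes SSM at all is exactly the part I don't know):

    # ~/.ssh/config -- hypothetical host entry
    Host sagemaker-dev
        HostName i-0123456789abcdef0
        User ec2-user
        ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

    # Then, in Emacs, open files with a TRAMP path such as:
    #   /ssh:sagemaker-dev:/home/ec2-user/project/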
†Why? Because I have a long lifetime of using emacs key bindings and macros, etc., that improve my efficiency greatly. Why not emacs within a terminal running on a SageMaker instance? That's what I'm doing now, but it leaves out important flexibility compared with a local windowed emacs client; the latter can be as tall or wide as my pixels permit, can have multiple frames simultaneously, wouldn't have as many networking latencies, etc.

Google API Speeds Slow in Cloud Run / Functions?

Bottom Line: Cloud Run and Cloud Functions seem to have bizarrely limited bandwidth to the Google Drive API endpoints. Looking for advice on how to work around this, or, ideally, for #Google support to fix the underlying issue(s), as I will not be the only one with this use case.
Background: I have what I think is a really simple use case. We're trying to let Google Drive users on our private domain take existing audio recordings, send them off to the Speech API to generate a transcript on an ad hoc basis, and dump the transcript back into the same Drive folder with an email notification to the submitter. Easy, right? The only hard part is that the Speech API will only read from Google Cloud Storage, so the 'hard part' should be moving the file over. 'Hard' doesn't really cover it...
Problem: Writing in nodejs and using the latest version of the official modules for Drive and GCS, the file copying was going extremely slow. When we broke things down, it became apparent that the GCS speed was acceptable (mostly -- honestly it didn't get a robust test, but was fast enough in limited testing); it was the Drive ingress which was causing the real problem. Using even the sample Google Drive Download app from the repo was slow as can be. Thinking the issue might be either my code or the library, though, I ran the same thing from the Cloud Console, and it was fast as lightning. Same with GCE. Same locally. But in Cloud Functions or Cloud Run, it's like molasses.
Request:
Has anyone in the community run into this or a similar issue and found a workaround?
#Google -- Any chance that whatever the underlying performance bottleneck is, you can fix it? This is a quintessentially 'serverless' use case, and it's hard to believe that the folks who've been doing this the longest can't crack it.
Thank you all in advance!
Updated 1/4/19 -- GCS is also slow following more robust testing. Image base also makes no difference (tried nodejs10-alpine, nodejs12-slim, nodejs12-alpine without impact), and memory limits equally do not impact results locally or on GCP (256m works fine locally; 2Gi fails in GCP).
Google Issue at: https://issuetracker.google.com/147139116
Self-inflicted wound. The Google-provided code seeks to be asynchronous and do its work in the background. Cloud Run and Cloud Functions do not support that model (for now at least). Move to promise-chaining and all of a sudden it works like it should -- so long as the work keeps the CPU attention it needs. This limits what we can do with CR / CF, but hopefully that too will evolve.
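For anyone hitting the same wall, here is a rough sketch of the shape of the fix (not our actual code; the bucket name, route, and request fields are placeholders): the Drive-to-GCS copy is awaited inside the request handler, so all the work happens while the request still has CPU, instead of being kicked off in the background:

    // Hypothetical sketch: await the Drive -> GCS copy inside the request
    // handler (promise-chained) rather than doing it in the background,
    // since Cloud Run / Cloud Functions throttle CPU outside request handling.
    const express = require("express");
    const { google } = require("googleapis");
    const { Storage } = require("@google-cloud/storage");

    const app = express();
    app.use(express.json());
    const storage = new Storage();

    app.post("/copy", async (req, res) => {
      try {
        const auth = await google.auth.getClient({
          scopes: ["https://www.googleapis.com/auth/drive.readonly"],
        });
        const drive = google.drive({ version: "v3", auth });

        // Stream the Drive file; fileId and bucket name are placeholders.
        const driveRes = await drive.files.get(
          { fileId: req.body.fileId, alt: "media" },
          { responseType: "stream" }
        );
        const dest = storage.bucket("my-transcripts-bucket").file(req.body.fileId);

        // The key part: resolve only when the pipe has finished, and await it
        // before sending the response.
        await new Promise((resolve, reject) => {
          driveRes.data
            .pipe(dest.createWriteStream())
            .on("finish", resolve)
            .on("error", reject);
        });

        res.status(200).send("copied");
      } catch (err) {
        console.error(err);
        res.status(500).send(String(err));
      }
    });

    app.listen(process.env.PORT || 8080);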

Railo Express for a portable web app on a USB stick

Here's my scenario. I am writing a web app for a client that needs to be portable, i.e. they need to plug it into different PCs (Windows) and have it simply work. Life would have been easier if they could just put it up on a domain, but no can do in this case, because internet access might not always be available. So, I am trying out Railo Express with Jetty (http://www.getrailo.org/index.cfm/download/), which has everything I need. I actually managed to install (well, copy and configure really) the package on a USB stick, created a new site in the "/webapps" folder and wired that up, then downloaded the drivers for SQLite and got that connected and working just fine.
This is not going to be a very intense web app at all, nor does it need many users connected to it (max 2-3 at a time). I use Bootstrap, and other than a dashboard with a couple of graphs, all the pages are basically forms that read/write to the SQLite db.
So, while everything seems to work, do you think this is a viable solution? Will I run into any issues, like perhaps performance or compatibility issues with the different PCs the client might be using? And is there a better way of doing this?
EDIT:
Thanks for replying guys. Here's some more info to hopefully clear things up. I should have been more specific as to why a portable web app is needed. The app is for a car wash business to log the business going through. There is basically one computer at the counter where things will be accessed from (and the USB stick will be attached here), and possibly one iPod at the entrance where cars going in will be logged by the attendant (which will connect to the local computer via wireless). The reason for portability? They want to take the stick home with them and review stats, so it's either a full installation on the computer plus a backup on the stick (extra work), or just everything on the stick. The reason for not simply going online and making things easier for everyone: tricky internet reception, which would mean downtime for the app.
From your descriptions it looks like a simple and not very intensive application. Based on my experience with Railo Express, I think you have the power needed to run this.
What I would do is install the application on the computer at the counter, since that is the main hub (you mention the iPod connecting wirelessly). Use the stick as a backup, and before they take it home, make sure the stick is updated with data. You might also consider designing the app so that there is separation between writing data and consuming it (e.g. people at home running reports).
Will the app on the stick run at home? Most likely it will work, and if you run into some problems they will not be hard to fix.

Host for Wt C++ web framework, deployment issue

I was wondering if justhost.com would be good enough to host a Wt C++ website/app on. It does allow FTP and SSH access, as http://richelbilderbeek.nl/CppWtDeployGlobalHosted.htm tells me a host should, but I am just looking to get more input, or to hear if you know of a better host.
I'd also ask them whether you can install libraries on there; if not, you'd have to compile yourself a giant static app, which could be a bit of an annoying restriction.
It looks to me like their site is basically designed to host standard PHP-style apps more than anything.
I use slicehost and Rackspace Cloud Servers.
The thing is they are full VPS's and give you full root access.
I would go with a true VPS plan, rather than a chroot style shared hosting plan, with ssh access added on top. The main problem would be neighbouring bloated applications using all the shared resources and giving you inconsistent performance.
Also with full root access, you can set up your app to start on boot, and sort out your own DB backup plan etc..
You still can get neighbours slowing you down on VPS accounts, but it's much reduced.
One thing I like with Wt is that my app runs with 100 threads, and even on the cheapest VPS plan it runs consistently and smoothly with up to 50 concurrent users (tested using Load Impact), with hardly any load on the machine at all.
My general pro-C++ statement: some C# and Java people say C++ is only really useful for embedded, low-powered hardware. I'd like to add that it's also useful for VPSes. Although hardware power is always growing, with virtualization there are always cheaper, lower-powered plans coming out that C++ is perfect for.
I used to run PHP, Perl and Python web servers on VPSes, but my C++ Wt app really does leave them all in the dust performance-wise. The idea is that you can pay less per month to host a C++ web site that scales really well, rather than Rails or other interpreted or byte-compiled languages.
Also, I used to use a larger, 4 GB Slice to do my compiling until I bought myself a decent 6 core home box. The 256 MB (the smallest plan) is no good for compiling, but excellent for running.

Development Environment in a VM against an isolated development/test network [closed]

I currently work in an organization that forces all software development to be done inside a VM. This is for a variety of risk/governance/security/compliance reasons.
The standard setup is something like:
VMWare image given to devs with tools installed
VM is customized to suit project/stream needs
VM sits in a network & domain that is isolated from the live/production network
SCM connectivity is only possible through dev/test network
Email and office tools need to be on live network so this means having two separate desktops going at once
Heavyweight dev tools in use on VMs so they are very resource hungry
Some problems that people complain about are:
Development environment runs slower than normal (host OS is windows XP so memory is limited)
Switching between DEV machine and Email/Office machine is a pain, simple things like cut and paste are made harder. This is less efficient from a usability perspective.
Mouse in particular doesn't seem to work properly using VMWare player or RDP.
Need a separate login to Dev/Test network/domain
Has anyone seen or worked in other (hopefully better) setups to this that have similar constraints (as mentioned at the top)?
In particular are there viable options that would remove the need for running stuff in a VM altogether?
In particular are there viable options that would remove the need for running stuff in a VM altogether?
Given that you said there are unspecified risk/governance/security/compliance reasons for your organization's use of VMs, I doubt any option we could provide could negate those. Ultimately it sounds like they just need their development team as sandboxed as possible.
(And even so, the question/answers would probably be better off at serverfault since it's more networking/security oriented.)
It sounds like a big problem is not having enough horsepower on the host OS. WinXP should be fine, but you need to have adequate hardware. i.e. at least 3 GB RAM, dual core CPU, and hardware that supports virtualization. Clipboard sync should be working with the VM.
I am not currently doing this, but I've thought about it, and we're kind of kicking this idea around with the idea of making it easier to standardize the dev environment, and to avoid wasting a day when you get a new PC. I'm dismayed to hear that it's not the utopia that I had dreamed...
I've been using VMs as a development environment for a long time. There's nothing inherently wrong with it, and it presents lots of benefits.
Ensuring a consistent environment
Separating file systems for different backup scenarios
Added security
Potentially gives developers access to more raw computing power.
There is a lot of innovation in the VM world, as evidenced by the growing popularity of VM farms, hardware support for virtualization, and controlled "turnkey" solutions, like MS's VirtualPC images for testing browser compatibility and the TurnKey set of appliances.
As others have said, your issues are probably due to insufficient hardware or sub-optimal configurations.
Development environment runs slower than normal (host OS is windows XP so memory is limited)
This should not be noticeable. XP vs. Windows Vista or Win7 is a marginal comparison. I would check the amount of physical RAM allocated to the VM.
Switching between DEV machine and Email/Office machine is a pain, simple things like cut and paste are made harder. This is less efficient from a usability perspective.
There are VM-specific optimizations/configurations that can make these tasks seamless. I would consult your VM maintenance staff.
Mouse in particular doesn't seem to work properly using VMWare player or RDP.
Again, should be seamless, but consult VM staff.
Need a separate login to Dev/Test network/domain
I would see this as a business decision: your company could obviously set up virtual machines with the same domain policies as your own personal workstation, but may have other (big brother?) purposes for forcing you to log in separately.
As far as using VMs as an agent of control, I think there are better solutions, like well-designed authorization controls around the production machines. There's nothing like paper trails to make people behave themselves.