AWS S3: Get the CLI command line for each operation done via the GUI - amazon-web-services

When I do an operation such as an S3 upload using the AWS console in the browser, is it possible to retrieve the equivalent CLI command that would perform the same operation?
Thanks.

There is an extension for Chrome and Firefox (as far as I know) that records the changes made in the AWS console and translates them into CLI commands.
Plugin Link
Although not all services and actions are supported, it does a pretty good job. Here you can check the service coverage of the plugin:
Service Coverage
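For reference, a console upload of a single file corresponds roughly to the following commands; the bucket and file names here are placeholders, not taken from the question:

aws s3 cp ./report.pdf s3://my-example-bucket/report.pdf

or, at the lower-level API layer:

aws s3api put-object --bucket my-example-bucket --key report.pdf --body ./report.pdf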

Related

Selling Partner API using command line interface or tool

We're new to the Amazon Selling Partner API (SP-API). We need to invoke certain SP-APIs for an integration workflow. For some internal reasons, using the Amazon SDKs is a secondary option. With our conventional approach we're able to interact with most APIs; in this case, the AWS request signing and signature generation is where we're stuck.
As per Amazon, using an SDK handles it all internally. Is it possible to use a command line utility like the AWS CLI to interact with SP-APIs? Not sure if this is feasible. Found this - amazon-sp-api - but not sure if it is stable / reliable.
I believe there should be ways to interact with SP-API from the command line. If not, at least there should be a tool that is able to produce the AWS request signature (given the request info, key, etc.).
Kindly share your experience and expertise. We're new to AWS, so if I'm confusing AWS with SP-API (especially for request signing - I believe both use the same mechanism), please point it out.
The link you shared to amz.tools does not look like a command line interface. It is just an SDK generated in NodeJS. There is no way to connect to the API via the command line. You can use Postman if you want to avoid SDKs.
And yes, AWS is not the same thing as SP-API.
You can search GitHub for SDKs generated in other languages; some seem to have a lot of use.
We generated our own SDK in C# because the others didn't fit our criteria.
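If the blocker is only the request signature, the Signature Version 4 key derivation is small enough to implement by hand. Below is a minimal C# sketch of just that step, assuming the canonical request and string-to-sign have already been built as described in the SigV4 documentation; all names are illustrative, not part of any official tool:

using System;
using System.Security.Cryptography;
using System.Text;

static class SigV4
{
    static byte[] HmacSha256(byte[] key, string data)
    {
        using (var hmac = new HMACSHA256(key))
        {
            return hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
        }
    }

    // Derives the SigV4 signing key and signs the string-to-sign.
    public static string Sign(string secretKey, string dateStamp,
                              string region, string service, string stringToSign)
    {
        var kDate    = HmacSha256(Encoding.UTF8.GetBytes("AWS4" + secretKey), dateStamp);
        var kRegion  = HmacSha256(kDate, region);
        var kService = HmacSha256(kRegion, service);
        var kSigning = HmacSha256(kService, "aws4_request");
        // The final signature is a lowercase hex digest.
        return BitConverter.ToString(HmacSha256(kSigning, stringToSign))
                           .Replace("-", "").ToLowerInvariant();
    }
}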

Google Cloud Run service deployment, is it the best direction in my situation?

I have some experience with Google Cloud Functions (CF). I recently tried to deploy a CF function with a Python app, but it uses an NLP model, so the 8 GB memory limit is exceeded when the model is triggered. The function is triggered when a JSON file is uploaded to a bucket.
So, I plan to try Google Cloud Run, but I have no experience with it. Also, I am not completely sure if it is the best course of action.
If it is, what is the best way of implementing it, given that the Cloud Run service will be triggered by a file uploaded to a bucket? In CF you can select the triggering event; in Cloud Run I didn't see anything like that. I could use some starting points, as I couldn't find my case in the GCP documentation.
Any help will be appreciated.
You can use at least these two things:
The legacy one: create a GCS notification to Pub/Sub, then create a push subscription and set the Cloud Run URL as the HTTP push destination (see the sketch after this list).
A more recent way is to use Eventarc to invoke a Cloud Run endpoint directly from an event (it roughly creates the same thing, a Pub/Sub topic and push subscription, but it's fully configured for you).
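A minimal sketch of the legacy route using the gsutil and gcloud CLIs; the bucket, topic, subscription, and service URL below are placeholders:

gsutil notification create -t my-topic -f json gs://my-bucket
gcloud pubsub subscriptions create my-sub \
    --topic=my-topic \
    --push-endpoint=https://my-run-service-xxxxx-uc.a.run.app/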
EDIT 1
When you use push notifications, you will receive a standard Pub/Sub message. The format is described in the documentation, both for the attributes and for the body content; keep in mind that the raw content is base64-encoded, and you have to decode it to get the final format.
I personally have a Cloud Run service that logs the contents of every request, so that I can find in the logs all the data I need during development. When there is a new message format, I point the push subscription at that Cloud Run endpoint and automatically capture the format.
For Eventarc, the format will be added to the UI soon (I saw that feature in preview, but it's not yet available). The best solution is to log the content to see what you get, and decide what to do from there!
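For illustration, a minimal C# sketch of decoding such a push message, assuming the standard Pub/Sub push envelope (the property names follow the documented push format; everything else is a placeholder):

using System;
using System.Collections.Generic;
using System.Text;
using System.Text.Json;

// Shape of the Pub/Sub push envelope POSTed to the Cloud Run endpoint.
public class PushEnvelope
{
    public PubSubMessage message { get; set; }
    public string subscription { get; set; }
}

public class PubSubMessage
{
    public Dictionary<string, string> attributes { get; set; }
    public string data { get; set; }      // base64-encoded payload
    public string messageId { get; set; }
}

public static class PushDecoder
{
    // Returns the decoded payload, e.g. the GCS object-change notification JSON.
    public static string Decode(string requestBody)
    {
        var envelope = JsonSerializer.Deserialize<PushEnvelope>(requestBody);
        var raw = Convert.FromBase64String(envelope.message.data);
        return Encoding.UTF8.GetString(raw);
    }
}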

Where is the visual config view in AWS Lambda?

I have been trying to look at the code that is deployed in an AWS Lambda.
There is an existing Go function running in the Lambda.
However, I am not able to. The AWS docs say we can look at the code through the visual config view; where is this view? This is the screen that I see, so where is the view to see the code?
Please help.
Or is it because we are using a Go server that only the executable, which is a binary, is running in the Lambda, and hence we are not able to see the code?
Inline code editing is supported only for interpreted languages (JavaScript, for example), not for compiled languages.
Besides the Lambda limits quoted below, it seems the console's Lambda editor does not support Go.
However, the documentation suggests using AWS CodeStar:
You can also get started with AWS Lambda Go support through AWS CodeStar. AWS CodeStar lets you quickly launch development projects that include a sample application, source control and release automation. With this announcement, AWS CodeStar introduced new project templates for Go running on AWS Lambda. Select one of the CodeStar Go project templates to get started. CodeStar makes it easy to begin editing your Go project code in AWS Cloud9, an online IDE, with just a few clicks.
announcing-go-support-for-aws-lambda
Q: How do I create an AWS Lambda function using the Lambda console?
If you are using Node.js or Python, you can author the code for your function using the code editor in the AWS Lambda console, which lets you author and test your functions, and view the results of function executions in a robust, IDE-like environment.
lambda-faqs
Deployment package size
50 MB (zipped, for direct upload)
250 MB (unzipped, including layers)
3 MB (console editor)
lambda limits
lambda-go-how-to-create-deployment-package
Whether the AWS code editor can display the code depends on the code size. Since the size of this code/package is large, the AWS code editor can't display it.
But you can download the package from the AWS Lambda function using an export.
Kindly follow the steps below:
Go to Lambda
Select the Lambda function
Click on Actions
Select Export function
You will get a few options. Select Download deployment package.
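If you prefer the CLI, the same package can be fetched through a presigned URL; the function name below is a placeholder:

aws lambda get-function --function-name my-function --query 'Code.Location' --output text

The command prints a presigned S3 URL, which you can then download with any HTTP client.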

How to stream log files on a Windows box to S3 in real time

In AWS, how can I stream log files from a Windows box to S3 in real time?
I thought of having Kinesis Firehose configured to deliver to S3, but later realized there is no Windows agent for Kinesis.
But I feel we can achieve the same using API calls. Any pointers or code examples will help us.
Language: .NET
Platform: Windows Server 2012 R2
This is a product from SAP; it uses log4net. Is there any example of how to configure the XML?
Note: Currently we are using the aws s3 sync command to push log files to S3 via a scheduled task (which runs every minute). I feel there should be a better way to implement this scenario.
Thanks,
Vijay
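A minimal sketch of the API-call route with the AWS SDK for .NET (the Amazon.KinesisFirehose package), assuming a Firehose delivery stream named my-log-stream that is already configured to deliver to S3; all names are placeholders:

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Amazon.KinesisFirehose;
using Amazon.KinesisFirehose.Model;

public static class LogShipper
{
    private static readonly AmazonKinesisFirehoseClient Client =
        new AmazonKinesisFirehoseClient();

    // Sends one log line to the delivery stream; Firehose batches the
    // records and writes them to the configured S3 bucket.
    public static async Task ShipAsync(string logLine)
    {
        var request = new PutRecordRequest
        {
            DeliveryStreamName = "my-log-stream", // placeholder
            Record = new Record
            {
                Data = new MemoryStream(Encoding.UTF8.GetBytes(logLine + "\n"))
            }
        };
        await Client.PutRecordAsync(request);
    }
}

A custom log4net appender could call ShipAsync from its Append override, which would replace the scheduled aws s3 sync task with near-real-time delivery.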

Serverless compute on Windows in AWS

I've got a piece of code that I need to make available over the 'Net. It's a perfect fit for an AWS Lambda with an HTTP API on top: a stateless, side-effect-free, rather CPU-intensive function, blob in, blob out. It's written in C#/.NET, but it's not pure .NET; it makes use of the UWP API and therefore requires Windows Server 2016.
AWS Lambdas only run on Linux hosts, even C# ones. Is there any way to deploy this piece in the Amazon cloud in a serverless manner, maybe something other than a Lambda? I know I can go with an EC2 VM, but this is the very kind of thing serverless architecture was invented for.
Lambda is the only option for serverless computing on AWS, and Lambda functions run only on Linux machines.
If you need to run serverless functions on a Windows machine, try Azure Functions. That's the Lambda equivalent in the Microsoft cloud. I'm not sure if it runs on a Windows Server 2016 machine and couldn't find any reference to the platform, but I would expect that, as a brand new service, they are using their own edge tech.
To confirm if the platform is what you need, try this function:
using System.Linq;
using System.Management;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Get the OS friendly name via WMI
    // http://stackoverflow.com/questions/577634/how-to-get-the-friendly-os-version-name
    var caption = (from x in new ManagementObjectSearcher("SELECT Caption FROM Win32_OperatingSystem").Get().Cast<ManagementObject>()
                   select x.GetPropertyValue("Caption")).FirstOrDefault();
    string name = caption != null ? caption.ToString() : "Unknown";

    // The function response carries the OS name
    return req.CreateResponse(HttpStatusCode.OK, name);
}
I think you can achieve this via a combination of the CodeDeploy service and AWS CodePipeline.
Refer to this article:
http://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-windows.html
to learn how to deploy code via CodeDeploy. Then see this article:
http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-4.html
to learn how you can configure AWS CodePipeline to call CodeDeploy and then execute your batch job on the created Windows machine (note: you will probably want to use S3 instead of GitHub, which is possible with CodePipeline).
I would consider bootstrapping the whole configuration via a script, using the AWS CLI; this way you can easily clean up your resources like this:
aws codepipeline delete-pipeline --name "MyJob"
Of course, you can configure the pipeline via the AWS web console and leave it configured to run your code on a regular basis.
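On the creation side, a hedged sketch: the pipeline definition can be kept in a JSON file and created with the AWS CLI's generic --cli-input-json mechanism (the file name and pipeline contents below are placeholders):

aws codepipeline create-pipeline --cli-input-json file://MyJob.json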