View real-time logs from NAOqi application with SSH - pepper

Is it possible to view logs from my application without using Choregraphe?
At the moment I am limited to log files from '/var/log/naoqi/servicemanager/'.
I am implementing qi.logger() and would like to connect to the robot IP with SSH and get logs from a specific service.
qicli log-view
only shows system logs. I would like to attach the logger to my application, maybe using the service PID?

Did you try logging to a specific place, for instance when you start it from an independent Python script?
import logging

logging.basicConfig(filename='some_files.log',
                    level=logging.DEBUG,
                    format='%(levelname)s %(relativeCreated)6d %(threadName)s %(message)s (%(module)s.%(lineno)d)',
                    filemode='w')
then something like tail -f /var/log/naoqi/servicemanager/some_files.log
WRN: this is just a hint; I haven't tested this solution...
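If you would rather stay with qi.logger() than switch to file logging, another option is to subscribe to the robot's log stream remotely over the qi messaging port. A minimal sketch, assuming the NAOqi LogManager service is reachable on port 9559 (untested; the service and signal names follow the qi framework docs and may differ between NAOqi versions):

import time
import qi

# Open a remote session to the robot (replace ROBOT_IP with the robot's address).
session = qi.Session()
session.connect("tcp://ROBOT_IP:9559")

# LogManager aggregates qi.logger() output from the services running on the robot.
log_manager = session.service("LogManager")
listener = log_manager.getListener()

def on_message(msg):
    # msg carries fields such as category, level and message;
    # filter on your service's log category here if needed.
    print(msg)

listener.onLogMessage.connect(on_message)

while True:  # keep the script alive to receive log callbacks
    time.sleep(1)

Run this over SSH on the robot, or from any machine that can reach port 9559; it avoids tailing files entirely.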

Related

AWS Lambda PowerShell to create mailbox in Hybrid (run PowerShell commands in both Office 365 and on-prem)

Now that AWS Lambda supports PowerShell Core according to this blog, has anybody tried running PowerShell commands to create a mailbox in a hybrid environment (running PS cmdlets in both the on-prem and Office 365 environments) using Lambda? I couldn't find anything online that does this. Most of the Lambda PowerShell use cases seem to be about using PowerShell scripts to automate and manage AWS resources.
I'm working on a POC for a REST service which does all of the mailbox creation operations and was planning to use API Gateway to invoke the Lambda PowerShell.
I set up my environment following the AWS documentation, created a PowerShell script which performs the mailbox operations, and created and deployed the Lambda. Upon testing, I'm getting the following errors while creating a PowerShell session for the O365 environment.
Script snippet:
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $mycreds -Authentication Basic -AllowRedirection
Write-Host "Created session for PS"
Import-PSSession $Session
Write-Host "Imported Session"
Write-Host "Getting Mailbox"
Get-Mailbox -Identity 'mailbox'
CloudWatch logs:
[Error] - This parameter set requires WSMan, and no supported WSMan client library was found. WSMan is either not installed or unavailable for this system.
[Information] - Created session for PS
[Error] - Cannot validate argument on parameter 'Session'. The argument is null. Provide a valid value for the argument, and then try running the command again.
[Information] - Imported Session
[Information] - Getting Mailbox
[Error] - The term 'Get-Mailbox' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
Wondering if anyone has tried invoking Office 365/on-prem mailbox creation PS scripts using Lambda, or can point me in the right direction? Thanks.
I would also like to know whether, with AWS Lambda PowerShell Core, I can WinRM into another Windows box so that I can execute PowerShell mailbox commands. According to the answer dated 10/2018 we cannot, but I'm wondering if anyone knows anything more recent on this.
I am working on the same task: API -> Lambda -> C#/PowerShell -> Office 365 -> CreateMailbox.
However, I'm hung up on the same line as well, with a slightly different message.
What do you have for a Requires line in your ps1 file?
#Requires -Modules @{ModuleName='AWS.Tools.Common';ModuleVersion='4.0.5.0'}
I am assuming you are using ModuleVersion='3.3.618.0' per the linked blog post, but there is a '4.0.5.0' version available. ... It hasn't helped me yet, but perhaps it would help you. Here is a link with the upgrade information: https://docs.aws.amazon.com/powershell/latest/userguide/v4migration.html

How to schedule task to call gRPC method?

I have a .NET server running in Google Kubernetes Engine. It is configured to use gRPC through Google Cloud Endpoints. Now I need to schedule a task to call my gRPC method once per day.
The first thing I tried was to use Google Cloud Scheduler to call the HTTP methods directly. For that I have:
Set up HTTP-to-gRPC transcoding on my server to call my gRPC method through HTTP.
Created and enabled an SSL certificate as described here.
Created a service account in the IAM & admin console with the Service Account Token Creator and Service Account User permissions.
Created a Cloud Scheduler job with my URL and an Auth header as an OIDC token using the service account created above.
Deployed a Google Cloud Endpoints configuration with the following parameters (among others):
authentication:
  providers:
  - id: google_service_account
    issuer: MY_SERVICE_ACCOUNT_EMAIL
    jwks_uri: https://www.googleapis.com/robot/v1/metadata/x509/MY_SERVICE_ACCOUNT_EMAIL
  rules:
  - selector: "*"
    requirements:
    - provider_id: google_service_account
After that, when I run the scheduler job it returns the result "Failed". In the logs it writes an ERROR with status UNKNOWN.
The second thing I tried was to use Google Cloud Scheduler to publish a message to a Pub/Sub topic with my server as a subscriber.
That was unsuccessful too, because I can't verify ownership of the Google Cloud Endpoints domain. I asked a related question here: How to verify ownership of Google Cloud Endpoints service URL?
Now the question: what is the best way to schedule a task that calls a gRPC method, given the following environment:
.NET server running on GKE
gRPC
Automated periodic calls of that task (I can call it manually, but that defeats the purpose)
So you were able to make the HTTP call manually, but not automatically via Google Cloud Scheduler, is that correct?
If so, check whether the request reaches the Cloud Endpoints proxy in the endpoint logs in the Cloud Console; that may give you some hints.
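One way to narrow it down is to reproduce what Cloud Scheduler does yourself: mint an OIDC token for the same service account and call the transcoded HTTP endpoint manually. A rough sketch using the google-auth library (the audience and URL are placeholders for your own Endpoints host and route; credentials come from GOOGLE_APPLICATION_CREDENTIALS pointing at the service account key):

import urllib.request

import google.auth.transport.requests
from google.oauth2 import id_token

AUDIENCE = "https://my-endpoints-host.example.com"  # placeholder: your service URL
URL = AUDIENCE + "/v1/my/transcoded/method"         # placeholder: your HTTP route

# Fetch an OIDC identity token for the service account, as Scheduler would.
request = google.auth.transport.requests.Request()
token = id_token.fetch_id_token(request, AUDIENCE)

req = urllib.request.Request(URL, headers={"Authorization": "Bearer " + token})
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read())

If this call succeeds where the Scheduler job fails, the problem is likely in the job's audience or service account configuration rather than in the Endpoints setup itself.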
Distributed scheduler
For more details, refer to the source code of the Distributed scheduler project.
This application can be run on different hosts and offers functionality to
schedule execution of an arbitrary command at a particular time or periodically.
There are two ways to communicate with the application: gRPC and REST. Remote
interfaces are specified in the dsched.proto file.
The corresponding REST API can also be found there in the form of API
annotations. We also provide generated Swagger files.
To specify task execution timing, we use the notation adopted by cron.
Scheduled tasks are stored in a file and loaded automatically during startup.
Building
Install gRPC
Install gRPC gateway
To parse crontab statements and schedule task execution, we use the gopkg.in/robfig/cron.v2 library,
so it should also be installed: go get -u gopkg.in/robfig/cron.v2. Its documentation can be found here.
Get the dsched package: go get -u gitlab.com/andreynech/dsched
Now it is possible to run the standard go build command in the dscheduler and
gateway directories to generate binaries for the scheduler and the REST/JSON API
gateway. It might also be helpful to examine our
CI configuration file to see how we
set up the build environment.
Running
All the scheduling functionality is implemented by the dscheduler executable, so
it can be run on system startup or on demand. As described by dscheduler --help,
there are two command-line parameters:
-i string - File name to store task list (default "/var/run/dscheduler.db")
-p string - Endpoint to listen (default ":50051")
If there is a need to offer the REST/JSON API, the gateway application located in the
gateway directory should be run. It can reside on the same host as
dscheduler, but typically it would be another host which is accessible over
HTTP from outside and can in turn talk to dscheduler running on the
internal network. This setup was also the reason to split the scheduler and
gateway into two executables. gateway is a mostly generated application and
supports several command-line parameters, described by running gateway --help.
An important parameter is -sched_endpoint string, which is the endpoint of the Scheduler
service (default "localhost:50051"). It specifies the host name and port
where dscheduler is listening for requests.
Scheduling tasks (testing)
There are three ways to control the scheduler server:
Using the Go client implemented in the cli/ directory
Using the Python client implemented in the py_cli directory
Using the REST/JSON API gateway and curl
The Go and Python clients have a similar set of command-line parameters.
$ ./cli --help
Usage of cli:
  -a string
        The command to execute at time specified by -c parameter
  -c string
        Statement in crontab format describes when to execute the command
  -e string
        Host:port to connect (default "localhost:50051")
  -l    List scheduled tasks
  -p    Purge all scheduled tasks
  -r int
        Remove the task with specified id from schedule
  -s    Schedule task. -c and -a arguments are required in this case
They use the gRPC protocol to talk to the scheduler server. Here are several
example invocations:
$ ./cli -l                                  # list currently scheduled tasks
$ ./cli -s -c "#every 0h00m10s" -a "df"     # schedule the df command for execution every 10 seconds
$ ./cli -s -c "0 30 * * * *" -a "ls -l"     # schedule the ls -l command to run every 30 minutes
$ ./cli -r 3                                # remove the task with ID 3
$ ./cli -p                                  # remove all scheduled tasks
It is also possible to use curl to invoke dscheduler functionality over the
REST/JSON API gateway. Assuming that the dscheduler and gateway applications
are running, here are some invocations to list, add and remove scheduling
entries from the same host (localhost):
curl 'http://localhost:8080/v1/scheduler/list'   # list currently scheduled tasks
curl -d '{"id":0, "cron":"#every 0h00m10s", "action":"ls"}' -X POST 'http://localhost:8080/v1/scheduler/add'   # schedule the ls command for execution every 10 seconds
curl -d '{"id":0, "cron":"0 30 * * * *", "action":"ls -l"}' -X POST 'http://localhost:8080/v1/scheduler/add'   # schedule ls -l to run every 30 minutes
curl -d '{"id":2}' -X POST 'http://localhost:8080/v1/scheduler/remove'   # remove the task with ID 2
curl -X POST 'http://localhost:8080/v1/scheduler/removeall'   # remove all scheduled tasks
All changes are automatically saved to the task file.
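If you would rather talk to the scheduler from your own code than through the bundled clients, a rough Python sketch along the lines of the py_cli client could look like the following. The module, service, and message names here are hypothetical; check dsched.proto for the real definitions and regenerate the stubs with grpcio-tools:

import grpc

# Hypothetical modules generated from dsched.proto, e.g. with:
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. dsched.proto
import dsched_pb2
import dsched_pb2_grpc

channel = grpc.insecure_channel("localhost:50051")
stub = dsched_pb2_grpc.SchedulerStub(channel)  # hypothetical service name

# Mirrors the REST payload {"id":0, "cron":"#every 0h00m10s", "action":"ls"}.
task = dsched_pb2.Task(id=0, cron="#every 0h00m10s", action="ls")  # hypothetical message
stub.Add(task)  # hypothetical RPC corresponding to /v1/scheduler/add

for t in stub.List(dsched_pb2.ListRequest()).tasks:  # hypothetical RPC and fields
    print(t.id, t.cron, t.action)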
Thoughts on scheduler service discovery
In large deployment scenarios (like hundreds of hosts) it might be a
challenging problem to find out all the IP addresses and ports where the scheduler
service is started. It would be pretty easy to add support for Zeroconf
(Bonjour/Avahi) technology to simplify service discovery. As an alternative, it
might be possible to implement something similar to the CORBA Naming Service,
where running services register themselves and the location of the naming service is
well known. We decided to collect feedback before settling on a particular
service discovery implementation. So your input is very welcome!

Unknown reason for Google Cloud Compute Engine VM shutdown?

One of the VM instances on Google Cloud Compute Engine was shut down, with an event log in Stackdriver that has no IP or actor (user, service, or system) that initiated the event. The instance has onHostMaintenance set to migrate and automaticRestart set to true. This particular instance has migrated on maintenance without error before. The Stackdriver event log looks like:
{
  actor: {
    user: ""
  },
  event_subtype: "compute.instances.stop",
  event_timestamp_us: "1531781734907624",
  event_type: "GCE_API_CALL",
  ip_address: "",
}
The user and ip_address fields are NOT redacted; they have empty values in the actual log.
Is this common? How does one identify the cause of the shutdown in these peculiar cases?
Based on your event type, I dug into the Activity logs documentation:
Compute Engine API calls - GCE_API_CALL events are API calls that change the state of a resource.
It seems like someone may have used an API call to shut down your VM. Looking at the settings and logs of the VM instance, it can't be a hostError or a maintenance event: as long as you have onHostMaintenance set to migrate and automaticRestart set to true, your VM will always be migrated to other hardware.
Out of curiosity: did you restart your VM? Did it shut down again? How often does it happen?
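If you want to dig further, you can pull the surrounding activity log entries programmatically with the Cloud Logging client library. A small sketch, assuming application default credentials are set up (the project ID and timestamp are placeholders; the event_subtype filter matches the log excerpt above):

from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")  # placeholder project ID

# Activity log entries for VM stop events around the incident time.
log_filter = (
    'logName:"activity" '
    'AND jsonPayload.event_subtype="compute.instances.stop" '
    'AND timestamp>="2018-07-16T00:00:00Z"'  # placeholder: shortly before the incident
)

for entry in client.list_entries(filter_=log_filter):
    print(entry.timestamp, entry.payload)

Comparing neighbouring entries (for example a preceding operation or system event) sometimes reveals the initiator even when the stop event itself has empty actor fields.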
Seems like App Engine had an outage during that time; see the incident.

Amazon EC2 custom AMI not running bootstrap (user-data)

I have encountered an issue when creating custom AMIs (images) from EC2 instances. If I start up a default Windows Server 2012 instance with a custom bootstrap/user-data script such as:
<powershell>
PowerShell "(New-Object System.Net.WebClient).DownloadFile('http://download.microsoft.com/download/3/2/2/3224B87F-CFA0-4E70-BDA3-3DE650EFEBA5/vcredist_x64.exe','C:\vcredist_x64.exe')"
</powershell>
It will work as intended: it goes to the URL, downloads the file, and stores it on the C: drive.
But if I set up a Windows Server instance, create an image from it, store it as a custom AMI, and then deploy it with the exact same custom user-data script, it will not work. If I go to the instance metadata URL (http://169.254.169.254/latest/user-data) it shows that the script has been imported successfully, but it has not been executed.
After checking the error logs I have noticed this on a regular basis:
Failed to fetch instance metadata http://169.254.169.254/latest/user-data with exception The remote server returned an error: (404) Not Found.
Update 4/15/2017: For EC2Launch and Windows Server 2016 AMIs
Per AWS documentation for EC2Launch, Windows Server 2016 users can continue using the persist tags introduced in EC2Config 2.1.10:
For EC2Config version 2.1.10 and later, or for EC2Launch, you can use
<persist>true</persist> in the user data to enable the plug-in after
user data execution.
User data example:
<powershell>
insert script here
</powershell>
<persist>true</persist>
For subsequent boots:
Windows Server 2016 users must additionally configure and enable EC2Launch instead of EC2Config. EC2Config was deprecated on Windows Server 2016 AMIs in favor of EC2Launch.
Run the following PowerShell to schedule a Windows task that will run the user data on the next boot:
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1 -Schedule
By design, this task is disabled after it is run for the first time. However, using the persist tag causes Invoke-UserData to schedule a separate task via Register-FunctionScheduler, to persist your user data on subsequent boots. You can see this for yourself at C:\ProgramData\Amazon\EC2-Windows\Launch\Module\Scripts\Invoke-Userdata.ps1.
Further troubleshooting:
If you're having additional issues with your user data scripts, you can find the user data execution logs at C:\ProgramData\Amazon\EC2-Windows\Launch\Log\UserdataExecution.log for instances sourced from the WS 2016 base AMI.
Original Answer: For EC2Config and older versions of Windows Server
User data execution is automatically disabled after the initial boot. When you created your image, it is probable that execution had already been disabled. This is configurable manually within C:\Program Files\Amazon\Ec2ConfigService\Settings\Config.xml.
The documentation for "Configuring a Windows Instance Using the EC2Config Service" suggests several options:
Programmatically create a scheduled task to run at system start using schtasks.exe /Create, pointing the scheduled task to the user data script (or another script) at C:\Program Files\Amazon\Ec2ConfigService\Scripts\UserScript.ps1.
Programmatically enable the user data plug-in in Config.xml.
Example, from the documentation:
<powershell>
$EC2SettingsFile = "C:\Program Files\Amazon\Ec2ConfigService\Settings\Config.xml"
$xml = [xml](Get-Content $EC2SettingsFile)
$xmlElement = $xml.get_DocumentElement()
$xmlElementToModify = $xmlElement.Plugins

foreach ($element in $xmlElementToModify.Plugin)
{
    if ($element.name -eq "Ec2SetPassword")
    {
        $element.State = "Enabled"
    }
    elseif ($element.name -eq "Ec2HandleUserData")
    {
        $element.State = "Enabled"
    }
}
$xml.Save($EC2SettingsFile)
</powershell>
Starting with EC2Config version 2.1.10, you can use <persist>true</persist> to enable the plug-in after user data execution.
Example, from the documentation:
<powershell>
insert script here
</powershell>
<persist>true</persist>
Another solution that worked for me is to run Sysprep with EC2Launch.
The issue is that AWS doesn't reestablish the route to the profile service (169.254.169.254) in your custom AMI. See the response by SanjitPatel in this post. So when I tried to use my custom AMI to create spot requests, my new instances were failing to find the user data.
Shutting down with Sysprep essentially forces AWS to redo all the setup work on the instance, as if it were running for the first time. So when you create your instance, shut it down with Sysprep, and then create your custom AMI, AWS will set up the profile service route correctly for the new instances and execute your user data. This also avoids manually changing Windows tasks and executing user data on subsequent boots, as the persist tag does.
Here is a quick step-by-step:
Create an instance using one of the AWS Windows AMIs (Windows Server 2016 Nano Server doesn't support Sysprep), passing your desired user data (this may be optional, but it is good to make sure AWS wires up the setup scripts correctly to handle user data).
Customize your instance as needed.
Shut down your instance with Sysprep: simply open the EC2LaunchSettings application and click "Shutdown with Sysprep". Full instructions here.
Create your custom AMI from the instance you just shut down.
Use your custom AMI to create other instances, passing user data on instance creation. The user data will be executed on instance launch. In my case, I used the Spot Request screen, which had a User Data text box.
Hope this helps!
At the end of the initial bootstrap (user data) script, just append the persist tag as shown below.
Works perfectly.
<powershell>
insert script here
</powershell>
<persist>true</persist>
For those who got here from Google and are running a Server 2016 instance, it seems that this is no longer possible.
Server 2016 doesn't have the EC2Config service, so you can't use the persist flag
<persist>true</persist>
described in Anthony Neace's post.
Server 2016 uses EC2Launch, and I haven't yet seen how it's possible to run a script at every boot. You can run a script on the first boot, but subsequent boots will not run it.
I added the PowerShell script below to run during the AMI bake process, which helped me fix this issue. This was on Windows Server 2019.
$EC2LaunchInitInstance = "C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1"
$EC2LaunchSysprep = "C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\SysprepInstance.ps1"
Invoke-Expression -Command "$EC2LaunchInitInstance -Schedule"
Invoke-Expression -Command "$EC2LaunchSysprep -NoShutdown"

How to determine if application/application container has started on AWS elastic beanstalk?

I am writing a generic application deployment tool. It takes an application from the user and deploys it to Elastic Beanstalk. That part is working. The issue is that the users want to compose the use of the deployment tool with other operations, and right now my tool reports success when it has told the Beanstalk APIs to start the application.
Unfortunately, it is thus returning before the application itself has started. So the user is forced to write polling logic themselves to await the starting of their application.
Looking at the AWS Elastic Beanstalk API, I cannot see any methods that return an indication of such a state. The closest I can find is DescribeEvents, which looks hopeful; however, it seems from the examples that the granularity of the application/application container starting within the environment is not part of that API:
<DescribeEventsResponse xmlns="https://elasticbeanstalk.amazonaws.com/docs/2010-12-01/">
  <DescribeEventsResult>
    <Events>
      <member>
        <Message>Successfully completed createEnvironment activity.</Message>
        <EventDate>2010-11-17T20:25:35.191Z</EventDate>
        <VersionLabel>New Version</VersionLabel>
        <RequestId>bb01fa74-f287-11df-8a78-9f77047e0d0c</RequestId>
        <ApplicationName>SampleApp</ApplicationName>
        <EnvironmentName>SampleAppVersion</EnvironmentName>
        <Severity>INFO</Severity>
      </member>
Note: the INFO-level event says that the environment was created; nothing at the lower level of the application container starting within the environment appears to be reported...
I could mandate that applications deployed with this tool expose a status REST endpoint, but that puts restrictions on the application.
Is there some API that I am missing that will report when the application container (e.g. Tomcat, Node, etc.) has started... or better yet, when the application deployed within the container has started? But I can live with the application container.
Every application is supposed to expose a health URL (otherwise Beanstalk/ELB will have problems in any case: it will think the instances are not responding, and might replace them). This is typically a HEAD request expecting a 200 OK.
Since this is expected to be there in all apps anyway, you can probably hit this URL and check that the deployment is OK. I guess the Beanstalk console itself uses this method.
You can also poll using the DescribeEnvironments API call, which will give you the environment CNAME (the URL to check), the health of the environment (RED, GREEN), and the status (Launching | Updating | Ready | Terminating | Terminated). This API takes the environment name as an argument, so you can get the description of just one environment.
API Documentation: http://docs.aws.amazon.com/elasticbeanstalk/latest/APIReference/API_DescribeEnvironments.html
Explanation of Environment Description in response: http://docs.aws.amazon.com/elasticbeanstalk/latest/APIReference/API_EnvironmentDescription.html
Sample Response Below:
<DescribeEnvironmentsResponse xmlns="https://elasticbeanstalk.amazonaws.com/docs/2010-12-01/">
  <DescribeEnvironmentsResult>
    <Environments>
      <member>
        <VersionLabel>Version1</VersionLabel>
        <Status>Available</Status>
        <ApplicationName>SampleApp</ApplicationName>
        <EndpointURL>elasticbeanstalk-SampleApp-1394386994.us-east-1.elb.amazonaws.com</EndpointURL>
        <CNAME>SampleApp-jxb293wg7n.elasticbeanstalk.amazonaws.com</CNAME>
        <Health>Green</Health>
        <EnvironmentId>e-icsgecu3wf</EnvironmentId>
        <DateUpdated>2010-11-17T04:01:40.668Z</DateUpdated>
        <SolutionStackName>32bit Amazon Linux running Tomcat 7</SolutionStackName>
        <Description>EnvDescrip</Description>
        <EnvironmentName>SampleApp</EnvironmentName>
        <DateCreated>2010-11-17T03:59:33.520Z</DateCreated>
      </member>
    </Environments>
  </DescribeEnvironmentsResult>
  <ResponseMetadata>
    <RequestId>44790c68-f260-11df-8a78-9f77047e0d0c</RequestId>
In your case you may want to read the following documentation:
Monitoring Application Health
You can also configure an Application Health Check URL for your environment. By default, AWS Elastic Beanstalk uses a TCP:80 check on your instances, but using the Application Health Check URL option you can override this health check to use HTTP:80, as described here.
Using the Status/Health from DescribeEnvironments you can check whether the application has been deployed.
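As a concrete illustration of that polling loop, here is a small sketch using boto3; the environment name is a placeholder, and waiting for Status "Ready" plus Health "Green" mirrors the fields described above:

import time

import boto3

eb = boto3.client("elasticbeanstalk")

def wait_until_ready(env_name, timeout=900, interval=15):
    """Poll DescribeEnvironments until the environment is Ready and Green."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        env = eb.describe_environments(EnvironmentNames=[env_name])["Environments"][0]
        if env["Status"] == "Ready" and env["Health"] == "Green":
            return env
        time.sleep(interval)
    raise TimeoutError("environment %s did not become ready in time" % env_name)

env = wait_until_ready("SampleApp")  # placeholder environment name
print(env["CNAME"], env["Status"], env["Health"])

A deployment tool could call something like this after triggering the deployment and only report success once the environment is Ready and Green, instead of returning as soon as the API call is accepted.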