Can I get the RabbitMQ connection ID from my Python/kombu client? - python-2.7

When I run this command:
rabbitmqctl list_connections pid
I get output like the following:
<rabbit@my_box.2.1234.0>
<rabbit@my_box.2.1235.0>
Is there a way to read this pid from my kombu client?

RabbitMQ does not expose this kind of internal detail through AMQP.
You can get a lot of information about connections using the management plugin and its REST API. Here is an example:
// JSON returned by http://localhost:15672/api/connections
// ...
{
  "connected_at": 1458292870638,
  "client_properties": {
    "product": "RabbitMQ",
    "copyright": "Copyright (c) 2007-2016 Pivotal Software, Inc.",
    "capabilities": {
      "exchange_exchange_bindings": true,
      "connection.blocked": true,
      "authentication_failure_close": true,
      "basic.nack": true,
      "publisher_confirms": true,
      "consumer_cancel_notify": true
    },
    "information": "Licensed under the MPL. See http://www.rabbitmq.com/",
    "version": "0.0.0",
    "platform": "Java"
  },
  "channel_max": 0,
  "frame_max": 131072,
  "timeout": 60,
  "vhost": "/",
  "user": "guest",
  "protocol": "AMQP 0-9-1",
  "ssl_hash": null,
  "ssl_cipher": null,
  "ssl_key_exchange": null,
  "ssl_protocol": null,
  "auth_mechanism": "PLAIN",
  "peer_cert_validity": null,
  "peer_cert_issuer": null,
  "peer_cert_subject": null,
  "ssl": false,
  "peer_host": "127.0.0.1",
  "host": "127.0.0.1",
  "peer_port": 54872,
  "port": 5672,
  "name": "127.0.0.1:54872 -> 127.0.0.1:5672",
  "node": "rabbit@localhost",
  "type": "network",
  "channels": 1,
  "state": "running",
  "send_pend": 0,
  "send_cnt": 108973,
  "recv_cnt": 99426,
  "recv_oct_details": {
    "rate": 288892.8
  },
  "recv_oct": 5540646,
  "send_oct_details": {
    "rate": 1912389.8
  },
  "send_oct": 36669998
},
// ...
However, PIDs are not exposed through this mechanism either.
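Still, you can often correlate your own client with an entry in that list. A minimal sketch, assuming the management plugin is enabled on localhost:15672 with the default guest/guest credentials:

# Lists every connection the broker knows about; you can match your own
# kombu connection by comparing "peer_port" with the local port of your
# client's socket. Illustrative only: this uses the management HTTP API,
# not anything in kombu itself.
import requests

resp = requests.get('http://localhost:15672/api/connections',
                    auth=('guest', 'guest'))
resp.raise_for_status()
for conn in resp.json():
    # "name" looks like "127.0.0.1:54872 -> 127.0.0.1:5672"
    print('%s peer_port=%s state=%s'
          % (conn['name'], conn['peer_port'], conn['state']))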

Related

Cannot connect to service inside ECS, health checks won't even pass

I want to run a Rust application in ECS. The application exposes a health API and a websocket API. I cannot connect to either of them from my machine, and ECS itself even fails to query the health API over localhost, so I am doing a lot of stuff wrong and I do not know what. It works on my machine.
I checked my security groups; they allow everything. The whole Rust application can be found here: https://github.com/Jasperav/ssh-dockerfile/blob/master/src/main.rs. The relevant code (main.rs) can be found below.
I uploaded the Rust image to ECR public as well: https://eu-central-1.console.aws.amazon.com/ecr/repositories/public/465984508737/hello?region=eu-central-1. This can be deployed on an ARM machine.
The steps I took:
Made a Rust application which serves 2 endpoints, websocket and HTTP
Created an image and uploaded to ECR
In ECS, I created a new cluster with 1 public subnet. I used Fargate as infrastructure.
In the task definition, I mapped ports 3014 and 3015 to TCP/HTTP and selected my image. For the health check, I added CMD-SHELL curl -f localhost:3015/health
I deployed the task as a service to the cluster.
I verified from the server's logs that it deployed successfully. However, I cannot connect through the public IPv4 address to either the websocket API or the health API.
What did I do wrong? This is the relevant code:
use std::net::SocketAddr;
use tokio::net::TcpListener;
use warp::Filter;
use warp::hyper::StatusCode;

#[tokio::main]
async fn main() {
    println!("Starting up...");
    let url = "0.0.0.0:3014";
    let listener = TcpListener::bind(url)
        .await
        .unwrap();
    println!("Listening on default URL");
    tokio::spawn(async move {
        run_health_check().await;
    });
    loop {
        match listener.accept().await {
            Ok((stream, _)) => {
                let addr = stream.peer_addr().expect("connected streams should have a peer address");
                println!("Peer address: {}", addr);
                let ws_stream = tokio_tungstenite::accept_async(stream)
                    .await
                    .expect("Error during the websocket handshake occurred");
                println!("New WebSocket connection: {}", addr);
                drop(ws_stream);
            }
            Err(e) => panic!("{:#?}", e),
        }
    }
}

async fn run_health_check() {
    let routes = warp::get()
        .and(warp::path("health"))
        .map(move || Ok(warp::reply::with_status("", StatusCode::OK)))
        .with(warp::cors().allow_any_origin());
    let socket_address: SocketAddr = "0.0.0.0:3015".to_string().parse().unwrap();
    warp::serve(routes).run(socket_address).await;
}
This is the ECS configuration:
{
  "taskDefinitionArn": "arn:aws:ecs:eu-central-1:XXXX:task-definition/websocket:3",
  "containerDefinitions": [
    {
      "name": "websocket",
      "image": "XXX.dkr.ecr.eu-central-1.amazonaws.com/hello3",
      "cpu": 0,
      "links": [],
      "portMappings": [
        {
          "name": "websocket-3015-tcp",
          "containerPort": 3015,
          "hostPort": 3015,
          "protocol": "tcp",
          "appProtocol": "http"
        },
        {
          "name": "websocket-3014-tcp",
          "containerPort": 3014,
          "hostPort": 3014,
          "protocol": "tcp",
          "appProtocol": "http"
        }
      ],
      "essential": true,
      "entryPoint": [],
      "command": [],
      "environment": [],
      "environmentFiles": [],
      "mountPoints": [],
      "volumesFrom": [],
      "secrets": [],
      "dnsServers": [],
      "dnsSearchDomains": [],
      "extraHosts": [],
      "dockerSecurityOptions": [],
      "dockerLabels": {},
      "ulimits": [],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "/ecs/websocket",
          "awslogs-region": "eu-central-1",
          "awslogs-stream-prefix": "ecs"
        },
        "secretOptions": []
      },
      "healthCheck": {
        "command": [
          "CMD-SHELL curl -f localhost:3015/health"
        ],
        "interval": 30,
        "timeout": 5,
        "retries": 3
      },
      "systemControls": []
    }
  ],
  "family": "websocket",
  "executionRoleArn": "arn:aws:iam::XXX:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "revision": 3,
  "volumes": [],
  "status": "ACTIVE",
  "requiresAttributes": [
    { "name": "com.amazonaws.ecs.capability.logging-driver.awslogs" },
    { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.24" },
    { "name": "ecs.capability.execution-role-awslogs" },
    { "name": "com.amazonaws.ecs.capability.ecr-auth" },
    { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19" },
    { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.17" },
    { "name": "ecs.capability.container-health-check" },
    { "name": "ecs.capability.execution-role-ecr-pull" },
    { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18" },
    { "name": "ecs.capability.task-eni" },
    { "name": "com.amazonaws.ecs.capability.docker-remote-api.1.29" }
  ],
  "placementConstraints": [],
  "compatibilities": [
    "EC2",
    "FARGATE"
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "1024",
  "memory": "3072",
  "runtimePlatform": {
    "cpuArchitecture": "ARM64",
    "operatingSystemFamily": "LINUX"
  },
  "registeredAt": "2022-12-11T11:19:54.622Z",
  "registeredBy": "arn:aws:iam::465984508737:root",
  "tags": [
    {
      "key": "ecs:taskDefinition:createdFrom",
      "value": "ecs-console-v2"
    },
    {
      "key": "ecs:taskDefinition:stackId",
      "value": "arn:aws:cloudformation:eu-central-1:XX:stack/ECS-Console-V2-TaskDefinition-c246faa1-4ba5-4b54-ac4a-49ae00ab2f0d/8fd98e10-7944-11ed-abb2-0a564ca58c2a"
    }
  ]
}
Dockerfile:
FROM rustlang/rust:nightly-bullseye AS builder
WORKDIR app
COPY . .
RUN cargo build --bin hello -Z sparse-registry
FROM debian:11.5
COPY --from=builder ./app/target/debug/hello .
CMD ["./hello"]

SSRS Subscription using REST API

Can we create/modify an SSRS subscription using the REST API, given that SQL Server Reporting Services 2017 supports REST API calls?
Here is an online Swagger example they built, but some properties are invalid or missing. For example, the Schedule property is missing, and ExtensionSettings.ParameterValues is an array instead of an object:
https://app.swaggerhub.com/apis/microsoft-rs/SSRS/2.0#/Subscriptions/GetSubscriptions
Here is an example of a body message that works for me when posting:
{ "DataQuery": null, "DeliveryExtension": "Report Server Email", "Description": "test3", "EventType": "TimedSubscription", "ExtensionSettings": { "Extension": "Report Server Email", "ParameterValues": [ { "IsValueFieldReference": false, "Name": "TO", "Value": "userName" }, { "IsValueFieldReference": false, "Name": "IncludeReport", "Value": "True" }, { "IsValueFieldReference": false, "Name": "RenderFormat", "Value": "PDF" }, { "IsValueFieldReference": false, "Name": "Subject", "Value": "#ReportName was executed at #ExecutionTime" }, { "IsValueFieldReference": false, "Name": "IncludeLink", "Value": "True" }, { "IsValueFieldReference": false, "Name": "Priority", "Value": "NORMAL" } ] }, "IsActive": true, "IsDataDriven": false, "LastRunTime": null, "LastStatus": "New Subscription", "LocalizedDeliveryExtensionName": "E-Mail", "ModifiedBy": "userName", "ModifiedDate": "2020-09-04T13:45:51.343-05:00", "Owner": "userName", "ParameterValues": [], "Report": "/Folder Name/Report Name", "Schedule": { "Definition": { "EndDate": "0001-01-01T00:00:00Z", "EndDateSpecified": false, "Recurrence": { "DailyRecurrence": { "DaysInterval": 1 }, "MinuteRecurrence": null, "MonthlyDOWRecurrence": null, "MonthlyRecurrence": null, "WeeklyRecurrence": null }, "StartDateTime": "2020-09-04T02:00:00-05:00" }, "ScheduleID": null }, "ScheduleDescription": "At 2:00 AM every day, starting 9/4/2020" }

Parsing complex JSON using Kinesis Analytics

I have the following JSON stream coming from Twitter.
{
  "created_at": "Thu Sep 27 21:02:00 +0000 2018",
  "id": 1045418301336244224,
  "id_str": "1045418301336244224",
  "text": "Conditional Branching Now Supported in AWS Systems Manager Automation - @awscloud #amazon #aws",
  "source": "Buffer",
  "truncated": false,
  "in_reply_to_status_id": null,
  "in_reply_to_status_id_str": null,
  "in_reply_to_user_id": null,
  "in_reply_to_user_id_str": null,
  "in_reply_to_screen_name": null,
  "user": {
    "id": 14687423,
    "id_str": "14687423",
    "name": "Casey Becking",
    "screen_name": "caseybecking",
    "location": "Huntington Beach, CA",
    "url": "http://caseybecking.com",
    "description": "I do stuff with computers for @rackspace , geek at heart! play and watch to much hockey, someday I'll make a personal website.",
    "translator_type": "none",
    "protected": false,
    "verified": false,
    "followers_count": 4191,
    "friends_count": 2412,
    "listed_count": 90,
    "favourites_count": 794,
    "statuses_count": 12995,
    "created_at": "Wed May 07 15:03:23 +0000 2008",
    "utc_offset": null,
    "time_zone": null,
    "geo_enabled": true,
    "lang": "en",
    "contributors_enabled": false,
    "is_translator": false,
    "profile_background_color": "000000",
    "profile_background_image_url": "http://abs.twimg.com/images/themes/theme15/bg.png",
    "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme15/bg.png",
    "profile_background_tile": false,
    "profile_link_color": "ABB8C2",
    "profile_sidebar_border_color": "000000",
    "profile_sidebar_fill_color": "000000",
    "profile_text_color": "000000",
    "profile_use_background_image": false,
    "profile_image_url": "http://pbs.twimg.com/profile_images/981617292546060289/RMX0GQFe_normal.jpg",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/981617292546060289/RMX0GQFe_normal.jpg",
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/14687423/1439137746",
    "default_profile": false,
    "default_profile_image": false,
    "following": null,
    "follow_request_sent": null,
    "notifications": null
  },
  "geo": null,
  "coordinates": null,
  "place": null,
  "contributors": null,
  "is_quote_status": false,
  "quote_count": 0,
  "reply_count": 0,
  "retweet_count": 0,
  "favorite_count": 0,
  "entities": {
    "hashtags": [
      { "text": "amazon", "indices": [106, 113] },
      { "text": "aws", "indices": [114, 118] }
    ],
    "urls": [
      {
        "url": "",
        "expanded_url": "https://buff.ly/2zwRyBx",
        "display_url": "buff.ly/2zwRyBx",
        "indices": [72, 95]
      }
    ],
    "user_mentions": [
      {
        "screen_name": "awscloud",
        "name": "Amazon Web Services",
        "id": 66780587,
        "id_str": "66780587",
        "indices": [96, 105]
      }
    ],
    "symbols": []
  },
  "favorited": false,
  "retweeted": false,
  "possibly_sensitive": false,
  "filter_level": "low",
  "lang": "en",
  "timestamp_ms": "1538082120628",
  "emoticons": [],
  "sentiments": "Neutral"
}
How do I parse, analyze, and process this JSON using Kinesis Analytics?
The arrays should be flattened; this is very doable in Hive, but I need to do the same in Kinesis Analytics.
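One common way to handle this (a sketch, not the only option) is to attach a preprocessing Lambda to the Kinesis Analytics application that flattens the nested JSON before the SQL code sees it. The flatten helper and the underscore-joined column names below are illustrative choices, not part of any Kinesis API:

# Kinesis Analytics preprocessing Lambda: receives base64-encoded records,
# returns flattened base64-encoded records. Nested dicts become
# underscore-joined keys; lists are kept as JSON strings.
import base64
import json

def flatten(obj, prefix=''):
    out = {}
    for key, value in obj.items():
        name = prefix + '_' + key if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, name))
        elif isinstance(value, list):
            out[name] = json.dumps(value)  # e.g. entities_hashtags
        else:
            out[name] = value
    return out

def handler(event, context):
    output = []
    for record in event['records']:
        tweet = json.loads(base64.b64decode(record['data']))
        flat = flatten(tweet)
        output.append({
            'recordId': record['recordId'],
            'result': 'Ok',
            'data': base64.b64encode(json.dumps(flat).encode('utf-8')).decode('utf-8'),
        })
    return {'records': output}

With this in place, the SQL side can refer to flat columns such as user_screen_name or retweet_count directly.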

Ansible Tower is not registering the variable correctly with the ec2-remote-facts module

The ec2-remote-facts module works correctly when I do not run it on Ansible Tower. The first example below (not using Tower) includes all of the block_device_mapping information that I use in subsequent tasks.
This is a big issue if I were to use Tower in the long run. My code is the same for both examples. Any thoughts that could lead me in the right direction would be appreciated.
My only thought is that since it is not a core module, Ansible Tower is not perfectly synced to the module's most recent code. But I am baffled. Thanks!
Ansible Version - ansible 2.2.0.0 (running on Ubuntu)
Ansible Tower Version - Tower Version 3.0.3 (running on Centos)
---examples below----
-Ansible (not using Tower)-
ok: [localhost -> localhost] => {
    "changed": false,
    "instances": [
        {
            "ami_launch_index": "0",
            "architecture": "x86_64",
            "block_device_mapping": [
                {
                    "attach_time": "2017-01-13T17:05:31.000Z",
                    "delete_on_termination": false,
                    "device_name": "/dev/sdb",
                    "status": "attached",
                    "volume_id": "vol-132312313212313"
                },
                {
                    "attach_time": "2017-01-13T17:05:31.000Z",
                    "delete_on_termination": true,
                    "device_name": "/dev/sda1",
                    "status": "attached",
                    "volume_id": "vol-123123123123"
                },
                {
                    "attach_time": "2017-01-13T17:05:31.000Z",
                    "delete_on_termination": false,
                    "device_name": "/dev/sdc",
                    "status": "attached",
                    "volume_id": "vol-123123123123"
                }
            ],
            "client_token": "",
            "ebs_optimized": false,
            "groups": [
                {
                    "id": "sg-12312313",
                    "name": "n123123123"
                }
            ],
            "hypervisor": "xen",
            "id": "i-123123123123",
            "image_id": "ami-123123123123",
            "instance_profile": null,
            "interfaces": [
                {
                    "id": "eni-123123123",
                    "mac_address": "123123123"
                }
            ],
            "kernel": null,
            "key_name": "my-v123123",
            "launch_time": "2017-01-13T17:05:30.000Z",
            "monitoring_state": "disabled",
            "persistent": false,
            "placement": {
                "tenancy": "default",
                "zone": "us-east-1b"
            },
            "private_dns_name": "ip-112312312",
            "private_ip_address": "10.1.1.4",
            "public_dns_name": "",
            "public_ip_address": null,
            "ramdisk": null,
            "region": "us-east-1",
            "requester_id": null,
            "root_device_type": "ebs",
            "source_destination_check": "true",
            "spot_instance_request_id": null,
            "state": "running",
            "tags": {
                "CurrentIP": "10.1.1.1.4",
                "Name": "d1",
                "Type": "d2"
            },
            "virtualization_type": "hvm",
            "vpc_id": "vpc-123123123"
        },
-Ansible Tower (notice that it's missing the block_device_mapping block)-
TASK [debug] **********************
ok: [localhost] => {
    "db_id.instances": [
        {
            "ami_launch_index": "0",
            "architecture": "x86_64",
            "client_token": "",
            "ebs_optimized": false,
            "groups": [
                {
                    "id": "sg-123123",
                    "name": "n123123123"
                }
            ],
            "hypervisor": "xen",
            "id": "i-123123123",
            "image_id": "ami-123123",
            "instance_profile": null,
            "interfaces": [
                {
                    "id": "eni-123123123",
                    "mac_address": "123123123"
                }
            ],
            "kernel": null,
            "key_name": "m123123",
            "launch_time": "2017-01-13T17:05:30.000Z",
            "monitoring_state": "disabled",
            "persistent": false,
            "placement": {
                "tenancy": "default",
                "zone": "us-east-1b"
            },
            "private_dns_name": "ip-1123123123123",
            "private_ip_address": "10.1.1.4",
            "public_dns_name": "",
            "ramdisk": null,
            "region": "us-east-1",
            "requester_id": null,
            "root_device_type": "ebs",
            "source_destination_check": "true",
            "spot_instance_request_id": null,
            "state": "running",
            "tags": {
                "Name": "123123",
                "Type": "123123"
            },
            "virtualization_type": "hvm",
            "vpc_id": "vpc-123123123"
        },
I guess you indeed have an old Ansible version on your Tower box.
As of today, the official Ansible Tower Vagrant box (ansible/tower (virtualbox, 3.0.3)) has version 2.1.2 inside:
[vagrant@ansible-tower ~]$ ansible --version
ansible 2.1.2.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides
And ec2_remote_facts has no block_device_mapping in this version.
So update Ansible on your Tower box or apply this patch.
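Once the Tower box runs Ansible 2.2 or later, a quick sanity check could look like the playbook below (a sketch: the region and implicit AWS credentials are assumptions, and db_id mirrors the register name from your debug output):

# Minimal playbook to verify block_device_mapping comes back from
# ec2_remote_facts on the Tower box.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - ec2_remote_facts:
        region: us-east-1
      register: db_id

    - debug:
        var: db_id.instances[0].block_device_mapping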

Retrieving specific fields from a nested object returned by Elasticsearch

I am new to Elasticsearch. I am trying to retrieve selected fields from a nested object returned by Elasticsearch. Below is the object stored in the Elasticsearch index:
{
  "_index": "xxx",
  "_type": "user",
  "_id": "2",
  "_version": 1,
  "exists": true,
  "_source": {
    "user": {
      "user_auth": {
        "username": "nickcoolster@gmail.com",
        "first_name": "",
        "last_name": "",
        "is_active": false,
        "_state": {
          "adding": false,
          "db": "default"
        },
        "id": 2,
        "is_superuser": false,
        "is_staff": false,
        "last_login": "2012-07-10 21:11:53",
        "password": "sha1$a6caa$cba2f821678ccddc4d70c8bf0c8e0655ab5c279b",
        "email": "nickcoolster@gmail.com",
        "date_joined": "2012-07-10 21:11:53"
      },
      "user_account": {},
      "user_profile": {
        "username": null,
        "user_id": 2,
        "following_count": 0,
        "sqwag_count": 0,
        "pwd_reset_key": null,
        "_state": {
          "adding": false,
          "db": "default"
        },
        "personal_message": null,
        "followed_by_count": 0,
        "displayname": "nikhil11",
        "fullname": "nikhil11",
        "sqwag_image_url": null,
        "id": 27,
        "sqwag_cover_image_url": null
      }
    }
  }
}
Now I want only certain fields to be returned from user.user_auth (fields like password, is_superuser, etc. should not be returned).
I am using django PyES, and below is the code that I tried:
fields = ["user.user_auth.username","user.user_auth.first_name","user.user_auth.last_name","user.user_auth.email"]
result = con.search(query=q,indices="xxx",doc_types="user",fields=fields)
but the result that I get contains only the email (i.e. only the last field is returned):
{
  "user.user_auth.email": "nikhiltyagi.eng@gmail.com"
}
I want this abstraction for both of the nested objects (i.e. user_auth and user_profile).
How do I do this?
What about using the latest django-haystack? It also covers Elasticsearch as a backend, so you can get Elasticsearch full-text search nicely integrated into your project.
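If you would rather stay with raw queries, newer Elasticsearch versions support _source filtering, which returns only the fields you list. A minimal sketch against the REST API (the host, port, and exact field list are assumptions; adapt to your index):

# Ask Elasticsearch to return only selected sub-fields of the nested
# "user" object via _source filtering. Uses requests instead of PyES.
import json
import requests

query = {
    "query": {"match_all": {}},
    "_source": [
        "user.user_auth.username",
        "user.user_auth.first_name",
        "user.user_auth.last_name",
        "user.user_auth.email",
        "user.user_profile.displayname",
    ],
}

resp = requests.post(
    "http://localhost:9200/xxx/_search",
    headers={"Content-Type": "application/json"},
    data=json.dumps(query),
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])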