I want to see some status messages from my drone using "ros2 topic echo /mavros/statustext/recv" in the terminal.
After subscribing, I just wait for messages (during this time my drone runs a mission, takes off, lands...), but nothing is received on that topic.
How can I get any messages on this topic?
The usual cause is that MAVROS identifies itself as an onboard computer in its heartbeat, and the flight controller (at least with ArduPilot) only routes STATUSTEXT messages to components it sees as a ground control station. Telling the sys plugin to send a GCS-type heartbeat fixes this:
ros2 param set /mavros/sys heartbeat_mav_type "GCS"
After setting this parameter, "ros2 topic echo /mavros/statustext/recv" should start showing messages.
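If you also want to consume these messages from your own node rather than the CLI, a minimal rclpy listener might look like the sketch below. The node name is arbitrary, and I subscribe with the best-effort sensor-data QoS profile so it matches mavros regardless of the publisher's reliability setting:

import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from mavros_msgs.msg import StatusText


class StatusTextListener(Node):
    def __init__(self):
        super().__init__("statustext_listener")
        # Best-effort QoS is compatible with both reliable and best-effort publishers.
        self.sub = self.create_subscription(
            StatusText, "/mavros/statustext/recv", self.on_statustext, qos_profile_sensor_data
        )

    def on_statustext(self, msg):
        self.get_logger().info(f"severity={msg.severity} text={msg.text}")


def main():
    rclpy.init()
    rclpy.spin(StatusTextListener())
    rclpy.shutdown()


if __name__ == "__main__":
    main()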
How do I peek Scheduled messages in an Azure Service bus topic?
According to this post:
https://github.com/MicrosoftDocs/azure-docs/issues/59641
Scheduled messages reside in the topic until the scheduled time, and users should peek into the topic if they want to see the scheduled messages.
So it seems I can't use 'PeekMessagesAsync' on a 'ServiceBusReceiver'.
Using a 'ServiceBusAdministrationClient' I am able to get the number of scheduled messages in the topic, but I'd really like to get hold of the messages themselves.
Any help would be greatly appreciated.
Short answer: you can't peek messages based on their status.
There's a now four-year-old issue asking for this feature.
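For completeness, the count mentioned in the question really is all the service exposes. A rough sketch of reading it with the Python SDK's management client (the question uses the .NET SDK; the connection string, topic name, and the scheduled_message_count property of the runtime properties are my assumptions here):

from azure.servicebus.management import ServiceBusAdministrationClient

# Placeholders - substitute your own namespace connection string and topic name.
CONNECTION_STR = "<service-bus-connection-string>"
TOPIC_NAME = "<topic-name>"

admin_client = ServiceBusAdministrationClient.from_connection_string(CONNECTION_STR)

# The topic runtime properties expose counts only - there is no API that
# returns the scheduled messages themselves or lets you peek by status.
runtime_props = admin_client.get_topic_runtime_properties(TOPIC_NAME)
print("Scheduled messages:", runtime_props.scheduled_message_count)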
I am trying to consume Google PubSub messages using the synchronous PULL API, which is available in the Apache Beam Google PubSub IO connector library.
I want to write the consumed messages to Kafka using KafkaIO. I want to use FlinkRunner to execute the job, since we run this application outside GCP.
The problem I am facing is that the consumed messages are not getting ACK'd in GCP PubSub. I have confirmed that the local Kafka instance has the messages consumed from GCP PubSub. The documentation in GCP DataFlow indicates that the data bundle gets finalized when the pipeline is terminated with a data sink, which is Kafka in my case.
But since the code is running on Apache Flink and not GCP DataFlow, I think some callback related to ACK'ing the committed messages is not getting fired.
What am I doing wrong here?
pipeline
    .apply("Read GCP PubSub Messages", PubsubIO.readStrings()
        .fromSubscription(subscription)
    )
    .apply(ParseJsons.of(User.class))
    .setCoder(SerializableCoder.of(User.class))
    .apply("Filter-1", ParDo.of(new FilterTextFn()))
    .apply(AsJsons.of(User.class).withMapper(new ObjectMapper()))
    .apply("Write to Local Kafka",
        KafkaIO.<Void,String>write()
            .withBootstrapServers("127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094")
            .withTopic("test-topic")
            .withValueSerializer((StringSerializer.class))
            .values()
    );
The Beam documentation on the PubSub IO class mentions this:
Checkpoints are used both to ACK received messages back to Pubsub (so that they may be retired on the Pubsub end), and to NACK already consumed messages should a checkpoint need to be restored (so that Pubsub will resend those messages promptly).
The ACKs are not specific to Dataflow; you should have the same behavior on Dataflow. The ACKs are sent on checkpoints. Usually the checkpoints correspond to the windows that you set on your streaming pipeline.
But you didn't set a window! By default the window is global, and it is closed only at the end, when you stop your job gracefully (and even then, I'm not sure about this). Anyway, a better solution is to use fixed windows (for example, of 5 minutes) so the messages are acked at the end of each of these windows.
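As an illustration of that suggestion (the asker's pipeline is Java, where the equivalent transform is Window.into(FixedWindows.of(Duration.standardMinutes(5)))), here is a minimal sketch of a fixed window in Beam's Python SDK; the subscription and the stand-in sink are placeholders:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

# Placeholders - the real pipeline would use FlinkRunner options and a Kafka sink.
options = PipelineOptions(streaming=True)
subscription = "projects/<project>/subscriptions/<subscription>"

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read GCP PubSub Messages" >> beam.io.ReadFromPubSub(subscription=subscription)
        | "Fixed 5-minute windows" >> beam.WindowInto(FixedWindows(5 * 60))
        # ... parsing, filtering, and the Kafka write would follow here ...
        | "Debug sink" >> beam.Map(print)
    )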
The way I fixed this was by following Guillaume Blaquiere's (https://stackoverflow.com/users/11372593/guillaume-blaquiere) suggestion of looking at checkpoints. Even after adding the Window.into() function to the pipeline, the source PubSub subscription endpoint did not receive ACKs.
The problem was in my Flink server configuration: I had not mentioned any checkpoint configuration. Without these parameters, checkpointing is disabled.
state.backend: rocksdb
state.checkpoints.dir: file:///tmp/flink-1.9.3/state/checkpoints/
These configs go in flink_home/conf/flink-conf.yaml.
After adding these entries and restarting Flink, all the backlogged (unacked) messages went to 0 in the GCP PubSub monitoring chart.
How can I bulk move messages from one topic to another in GCP Pub/Sub?
I am aware of the Dataflow templates that provide this, but unfortunately restrictions do not allow me to use the Dataflow API.
Any suggestions on ad-hoc movement of messages between topics (besides copying and pasting them one by one)?
Specifically, the use case is for moving messages in a deadletter topic back into the original topic for reprocessing.
You can't use snapshots, because snapshots can only be applied to subscriptions of the same topic (to avoid message ID overlapping).
The easiest way is to write a function that pulls from your subscription. Here is how I would do it:
Create a topic (named, for example, "transfer-topic") with a push subscription. Set the timeout to 10 minutes.
Create a Cloud Functions HTTP function triggered by the PubSub push subscription (or a Cloud Run service). When you deploy it, set the timeout to 9 minutes for Cloud Functions and to 10 minutes for Cloud Run. The processing is the following:
Read a chunk of messages (for example, 1000) from the dead-letter pull subscription
Publish the messages (in bulk mode) to the initial topic
Acknowledge the messages on the dead-letter subscription
Repeat until the pull subscription is empty
Return code 200
The global process:
Publish a message to the transfer-topic
The message triggers the function/Cloud Run service via an HTTP push
The process pulls the messages and republishes them into the initial topic
If the timeout is reached, the function crashes and PubSub retries the HTTP request (with exponential backoff)
If all the messages are processed, the HTTP 200 response code is returned and the process stops (and the message in the transfer-topic subscription is acked)
This process lets you move a very large number of messages without worrying about the timeout.
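A minimal sketch of that pull-and-republish loop with the Python client library (project, subscription, and topic names are placeholders, and it assumes the google-cloud-pubsub v2 request-style API rather than the older style used in the answer below):

from google.cloud import pubsub_v1

# Placeholders - substitute your own project, dead-letter subscription and target topic.
DEADLETTER_SUBSCRIPTION = "projects/<project-id>/subscriptions/<deadletter-subscription>"
ORIGINAL_TOPIC = "projects/<project-id>/topics/<original-topic>"

subscriber = pubsub_v1.SubscriberClient()
publisher = pubsub_v1.PublisherClient()

while True:
    # 1. Read a chunk of messages from the dead-letter pull subscription.
    response = subscriber.pull(
        request={"subscription": DEADLETTER_SUBSCRIPTION, "max_messages": 1000}
    )
    if not response.received_messages:
        break  # 4. The subscription is empty: stop (the function would return HTTP 200 here).

    # 2. Republish the messages into the initial topic (publish() batches internally);
    #    the original attributes are forwarded along with the payload.
    futures = [
        publisher.publish(ORIGINAL_TOPIC, msg.message.data, **dict(msg.message.attributes))
        for msg in response.received_messages
    ]
    for future in futures:
        future.result()  # wait until every message has been accepted by Pub/Sub

    # 3. Acknowledge the messages on the dead-letter subscription.
    subscriber.acknowledge(
        request={
            "subscription": DEADLETTER_SUBSCRIPTION,
            "ack_ids": [msg.ack_id for msg in response.received_messages],
        }
    )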
I suggest that you use a Python script for that.
You can use the Pub/Sub client library to read the messages and publish them to another topic, like below:
from google.cloud import pubsub
from google.cloud.pubsub import types

# Defining parameters
PROJECT = "<your_project_id>"
SUBSCRIPTION = "<your_current_subscription_name>"
NEW_TOPIC = "projects/<your_project_id>/topics/<your_new_topic_name>"

# Creating clients for publishing and subscribing. Adjust the max_messages for your purpose
subscriber = pubsub.SubscriberClient()
publisher = pubsub.PublisherClient(
    batch_settings=types.BatchSettings(max_messages=500),
)

# Get your messages. Adjust the max_messages for your purpose
subscription_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)
response = subscriber.pull(subscription_path, max_messages=500)

# Publish your messages to the new topic
for msg in response.received_messages:
    publisher.publish(NEW_TOPIC, msg.message.data)

# Ack the old subscription if necessary
ack_ids = [msg.ack_id for msg in response.received_messages]
subscriber.acknowledge(subscription_path, ack_ids)
Before running this code you will need to install the Pub/Sub client library in your Python environment. You can do that by running pip install google-cloud-pubsub.
An approach to executing this code is to use Cloud Functions. If you decide to use it, pay attention to two points:
The maximum time that your function can take to run is 9 minutes. If this timeout is exceeded, your function will terminate without finishing the job.
In Cloud Functions you can just put google-cloud-pubsub on a new line of your requirements file instead of running a pip command.
I would like to know if I can execute a script based on an SQS message.
My requirement is to execute a script on EC2 when I receive a queue message in SQS.
I am using AWS SDK to interact with SQS.
Thanks & Regards,
Srivignesh KN
An SQS message cannot automatically trigger a script to execute on your EC2 server. An SQS message does not automatically do anything. You have to create an application that will poll for SQS messages and then perform the necessary actions based on the messages it receives.
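For example, a minimal polling application of that kind could look like the sketch below, using boto3 (the region, queue name, and script path are placeholders):

import subprocess

import boto3

# Placeholders - substitute your own region, queue name and script.
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.get_queue_url(QueueName="MySqsQueue")["QueueUrl"]

while True:
    # Long poll for up to 20 seconds so we don't hammer the API with empty receives.
    response = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
    )
    for message in response.get("Messages", []):
        # Run the script, passing the message body as an argument.
        subprocess.run(["/path/to/my_script.sh", message["Body"]], check=True)
        # Delete the message only after the script has succeeded.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])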
@Mark, thank you for the suggestions.
I was able to find a solution and implement it in the following manner using the boto library.
import boto.sqs

# -------------- Establishing connection ------------------
conn = boto.sqs.connect_to_region(
    "myregion",
    aws_access_key_id='myaccesskey',
    aws_secret_access_key='mysecretaccesskey')
print conn.get_all_queues()

# ---------------- Processing the queue ------------------
my_queue = conn.get_queue('MySqsQueue')
print "My Queue is ", my_queue

rs = my_queue.get_messages()
length = len(rs)
print "Number of messages in the queue is", length

m = rs[0]
MsgBody = m.get_body()
print "Message Body is ", MsgBody

# --- My script execution/processing goes here ---

# -------------- Clearing the queue ---------------
conn.purge_queue(my_queue)
Thanks & Regards,
Srivignesh KN
I am new to the Google Compute/Google App Engine platform. I am currently migrating a Python Flask application that uses Celery for async tasks to Google Compute/Google App Engine. However, the docs say I should use Google Pub/Sub instead of Celery. In my application, whenever I run an async task I have a page to monitor the status of the job, following the same principle as http://blog.miguelgrinberg.com/post/using-celery-with-flask. I have checked the documentation for Google Pub/Sub, but I am at a loss as to how to implement the same thing with it. Can anybody help or point me in the right direction?
You might be able to use psq for this, which is designed to look like celery. From a general Cloud Pub/Sub perspective, you would follow these steps:
Create a topic for your status update messages.
In the async task whose status you want to monitor, periodically publish a message with the status. This message would be in a format of your choosing that indicates percentage completion or a specific message to display.
Create a subscription for your monitoring page that will receive messages on the topic.
In your monitoring page (or a background process that will supply the data to your monitoring page), pull messages for the subscription.
Process the messages and update the state of your jobs for your monitoring page.
Ack the messages you pulled and processed.
A couple of things to keep in mind in this workflow:
Cloud Pub/Sub guarantees at-least-once delivery. That means you could potentially receive the same message more than once.
Cloud Pub/Sub does not provide any guarantees on ordering. Therefore, if you are periodically publishing status updates, your subscriber could receive them out of order. For your case, you'll probably want to include some sort of timestamp or strictly increasing identifier in each message to sequence your status updates per task. If you keep track of the most recent status update received, you can disregard older messages and ack them immediately.
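Putting the steps and the ordering caveat together, a rough sketch with the Python client library could look like this (the topic and subscription names and the JSON payload format are placeholders of my choosing):

import json
import time

from google.cloud import pubsub_v1

# Placeholders - substitute your own topic and subscription names.
STATUS_TOPIC = "projects/<project-id>/topics/<job-status-topic>"
STATUS_SUBSCRIPTION = "projects/<project-id>/subscriptions/<monitoring-subscription>"

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()


def report_progress(job_id: str, percent: int) -> None:
    """Called periodically from the async task: publish a status update."""
    payload = json.dumps({"job_id": job_id, "percent": percent, "ts": time.time()})
    publisher.publish(STATUS_TOPIC, payload.encode("utf-8"))


def refresh_job_states(job_states: dict) -> None:
    """Called from the monitoring page (or a background worker): pull and apply updates."""
    response = subscriber.pull(
        request={"subscription": STATUS_SUBSCRIPTION, "max_messages": 100}, timeout=10.0
    )
    for received in response.received_messages:
        update = json.loads(received.message.data.decode("utf-8"))
        current = job_states.get(update["job_id"])
        # Messages may arrive out of order, so keep only the newest timestamp per job.
        if current is None or update["ts"] > current["ts"]:
            job_states[update["job_id"]] = update
    if response.received_messages:
        subscriber.acknowledge(
            request={
                "subscription": STATUS_SUBSCRIPTION,
                "ack_ids": [m.ack_id for m in response.received_messages],
            }
        )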