I'm developing an application with a microservice architecture running on Google Cloud Run (fully managed). I want to add event-based communication between my services. As far as I know, the only option is to use Eventarc. I'm curious what the best way is to reproduce the event-driven design when developing locally, and how to make deployment as seamless as possible.
I'm not familiar with Google Cloud specifically, but I assume these platforms all work similarly: as long as you can get your code running locally, you can still use the cloud-hosted message queue / Pub/Sub interface from your local code.
This way you can debug and try things out on your local machine while still using the real messaging/eventing infrastructure.
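For example, a local process can publish straight to a cloud-hosted Pub/Sub topic. Here is a minimal sketch using the google-cloud-pubsub Java client; the project and topic names are placeholders, and credentials come from Application Default Credentials (e.g. gcloud auth application-default login):

import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

public class LocalEventPublisher {
    public static void main(String[] args) throws Exception {
        // Placeholder project and topic -- substitute your own.
        TopicName topic = TopicName.of("my-project", "my-events");
        Publisher publisher = Publisher.newBuilder(topic).build();
        try {
            PubsubMessage message = PubsubMessage.newBuilder()
                    .setData(ByteString.copyFromUtf8("{\"type\":\"order.created\"}"))
                    .build();
            // publish() returns a future; get() blocks until the cloud topic acks.
            System.out.println("Published " + publisher.publish(message).get());
        } finally {
            publisher.shutdown();
        }
    }
}

Google also ships a Pub/Sub emulator if you would rather keep everything on your machine, but publishing to the real topic keeps local runs closest to what your deployed services will see.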
Related
I'm making a project where temperature and humidity are sensed by an Arduino and sent to AWS via an ESP8266-01. At the same time, the data should also be shown in a web application (possibly Node.js/Java, etc.).
So what I'm asking is what the architecture should look like. What is the best practice? Does AWS also provide a web app that I can use both for the cloud database and as the web application, or should I make a separate web app project that connects to AWS?
I searched on Google, but the only answers I can find cover just the two pieces, Arduino and AWS, without the third aspect connected to them: in my case, the web app.
Make use of the MQTT protocol.
Components required:
- The PubSubClient library on the ESP8266, used to publish temperature and humidity data to an MQTT broker on AWS.
- A Mosquitto MQTT broker set up on AWS, used to accept data from the ESP8266.
- A Python script that subscribes to data from the Mosquitto broker and writes it into a database (my suggestion is InfluxDB); a subscriber sketch follows below.
- A graphing platform to query the database and display time-series graphs (my suggestion is Grafana).
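The third component above is described as a Python script; purely as an illustration, here is the same subscriber role sketched in Java with the Eclipse Paho MQTT client (the broker address and topic filter are placeholders):

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class SensorSubscriber {
    public static void main(String[] args) throws Exception {
        // Placeholder address -- point this at your Mosquitto broker on AWS.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883",
                MqttClient.generateClientId());
        client.setCallback(new MqttCallback() {
            public void connectionLost(Throwable cause) { /* reconnect here */ }
            public void messageArrived(String topic, MqttMessage message) {
                // This is where the "write into InfluxDB" step would go.
                System.out.println(topic + ": " + new String(message.getPayload()));
            }
            public void deliveryComplete(IMqttDeliveryToken token) { }
        });
        client.connect();
        client.subscribe("sensors/+/telemetry"); // placeholder topic filter
    }
}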
Use AWS only for purchasing a virtual machine; the rest can be handled with open-source platforms.
Assuming you want to display graphs of temperature and humidity, Grafana is a good fit.
You will not find a silver bullet here. A proper architecture for your case depends on many things and there can be different approaches with their own pros and cons.
There are many aspects to cover, including connectivity, security, updates, availability, and costs.
Usually IoT devices are not connected directly to the cloud, because they may not have a constant connection, or any network connection at all. A hub (or middleware) collects data from the sensors/devices and sends it to the cloud for processing.
But many cloud vendors, including AWS, provide complex out-of-the-box solutions here; the points above are just examples.
I am a beginner at cloud computing, and I'm hoping to get some guidance or advice on how to set up a cloud connected to IoT devices, plus a running application to control the behavior of those devices.
Firstly, there are 5 devices that have to be connected via 3G or LTE because of the distance between them, so I am thinking of connecting them to the internet using dynamic public IP addresses and a dynamic DNS server. It seems like I should be using the AWS IoT service to manage these devices. How should I go about doing that, or is there a better approach? The devices all use MQTT and/or a REST API.
The next step is to write an application, and it was suggested I use AWS Lambda. Am I heading in the right direction? How do I link the devices connected to AWS IoT to AWS Lambda?
I know the question may sound vague but I am still new and exploring different solutions. Any guidance or recommendations for the right step forward is appreciated.
I assume your devices (or at least one of them) have a 64-bit CPU (x86 or Arm) that runs Linux.
It's a kind of 70:30 balance where:
- 70% of the work needs to focus on building and testing edge-logic.
- 30% of the work on the rest (IoT Cloud, Lambda etc).
Here is what I suggest.
1/ Code your edge logic first! (the piece of code that you ultimately want to run on your devices).
2/ Test it on the edge by logging on to the devices (if you can) via SSH and running it.
3/ Once you have that done, 70% of the job is over.
4/ The remaining 30% is completing the jigsaw in the cloud. The best places to start: Lambda and Greengrass.
5/ To summarize: you will create Greengrass components in the cloud, install the AWS IoT Greengrass Core software on your device, and then deploy your configuration to the device over the air (OTA).
Now you can use any MQTT client (or the built-in MQTT test client under AWS IoT -> Test) to send a message to your topic and trigger your edge logic on the device; see the sketch below.
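For example, a test publish from a generic MQTT client could look like the Java/Paho sketch below; the endpoint and topic are placeholders, and the X.509 certificate wiring that AWS IoT requires for mutual TLS is only hinted at in a comment:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class TriggerEdgeLogic {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint -- yours will look like xxxx-ats.iot.<region>.amazonaws.com.
        MqttClient client = new MqttClient("ssl://your-iot-endpoint:8883", "test-client");
        MqttConnectOptions options = new MqttConnectOptions();
        // Real AWS IoT connections need an SSLSocketFactory built from your device
        // certificate and private key: options.setSocketFactory(...);
        client.connect(options);
        // Publishing to the topic your Greengrass component listens on fires the edge logic.
        client.publish("devices/room1/command",
                new MqttMessage("{\"action\":\"read\"}".getBytes()));
        client.disconnect();
    }
}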
Good luck!
cheers,
ram
I am trying to understand the serverless architecture, which makes two distinct claims:
you as an app developer think about your function only and not about the server responsibilities. Well, the server still has to be somewhere. By "server" I understand both:
on the infrastructure side: a physical server, VM, or container,
on the software side: say, Tomcat.
Now, I have worked on Cloud Foundry and studied its Diego architecture, its buildpacks, and its Open Service Broker API facility. Effectively, Cloud Foundry already works on a "similar" model: the application developer focuses on the code, and the deployment process, with the help of a buildpack, prepares a droplet with the needed Java and Tomcat runtimes, then uses it to create a Garden container that serves user requests. So the developer does not have to worry about where the Tomcat server or the VM/container will come from. Aren't we already meeting that mandate in Cloud Foundry?
your code comes into existence for the duration of execution and then dies. This, I agree, is different from the apps/microservices we write in Cloud Foundry, which are long-running server processes. Now, if I were to develop a Java web app/microservice with three REST endpoints (myapp/resource1, myapp/resource2, myapp/resource3), possibly on a Tomcat web server, I would need:
a physical machine or a VM or a container,
the Java runtime
the Tomcat servlet container to run my WAR file.
Going by what serverless suggests, I infer that I am supposed to concentrate only on one very specific function, say handling requests to myapp/resource1. In such a scenario:
What is my corresponding Java class supposed to look like?
Where do I get access to the J2EE objects like HttpServletRequest or HttpServletResponse objects and other http or servlet or JAX-RS or Spring MVC provided objects that are created by the Tomcat runtime?
Is my Java class executed within a container that is created for the duration of execution and then destroyed after execution? If yes, who manages the creation/destruction of such a container?
Would Tomcat even be required? Is there an altogether different, generic way of handling requests to these three REST endpoints? Is it somewhat like httpd servers using Python/Java CGI scripts to handle HTTP requests?
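For concreteness, one common shape this takes on AWS Lambda behind API Gateway is sketched below, using the aws-lambda-java-core and aws-lambda-java-events libraries (class and route names are illustrative): there is no Tomcat and no HttpServletRequest; the platform hands your class an event object and manages the container's lifecycle for you.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class Resource1Handler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request,
                                                      Context context) {
        // API Gateway routes myapp/resource1 here; the runtime that executes this
        // class is created, reused, and destroyed by the platform, not by you.
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody("{\"resource\":\"resource1\"}");
    }
}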
So there is a set of applications that positions itself as a distributed cluster OS, called DCOS.
It has MPI and Spark running on top of it.
I am a developer, and I have a set of distributed services connected via sockets or ZeroMQ.
How can I port my existing services to DCOS?
Meaning, use its communication facilities instead of sockets/ZeroMQ.
Are there any APIs or docs on how to develop for it, not just run on it?
There are a number of ways to get your application to run on DCOS (and/or Mesos).
First, for legacy applications you can use the Marathon framework, which you can view as a kind of init system for DCOS/Mesos.
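For illustration, a minimal Marathon app definition is just a JSON document like the one below (the id, command, and resource figures are placeholders); you POST it to Marathon's /v2/apps endpoint and Marathon keeps the requested number of instances running:

{
  "id": "/my-zmq-service",
  "cmd": "./run-service --port $PORT0",
  "cpus": 0.5,
  "mem": 128,
  "instances": 2
}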
If you need more elaborate applications and want to really program against the APIs, you would write a Mesos framework; see the framework development guide for more details.
For deeper integration of your framework into DCOS, for example using the package repository or the command-line install option, check out/contact Mesosphere for more details.
Hope this helps!
Joerg
I have some code that needs to run as a result of a call to a service bus. This particular code is CPU-intensive, and it is possible that hundreds of these calls will need to run at the same time. Do Azure WebJobs use computing resources from one machine, or do they use any available computing resources from several machines?
WebJobs use your web app's resources, so why not try Azure Functions, which can scale on their own and whose pricing is close to zero? They are in technical preview; I have tried them, and Azure Functions are very cheap. If your service bus handler can run as an out-of-process service, meaning it does not need in-process results from your application, you can try Azure Functions. They are mostly used for maintenance and overnight jobs; I have used them to shrink my images to thumbnails, and it worked perfectly fine.
Azure WebJobs are designed to run on as many servers as you've scaled the web app out to. By default the SDK will run up to 16 tasks from a queue concurrently, but this is configurable, as shown below.
public class Program
{
    static void Main()
    {
        // Configure the WebJobs SDK host.
        JobHostConfiguration config = new JobHostConfiguration();

        // Process one queue message at a time instead of the default batch of 16.
        config.Queues.BatchSize = 1;

        // Start the host and block so triggered functions keep running.
        JobHost host = new JobHost(config);
        host.RunAndBlock();
    }
}
A WebJob deployed as part of an Azure Web Site will share resources with the web application. You have the option to scale the site out to 10 (I think?) instances if you need parallelism.
As I mentioned in the comment on Matthew's post, if you use the Azure WebJobs SDK to have functions triggered by Service Bus queue messages, we don't parallelize within the same host. This means that messages will be processed sequentially as long as you have a single host.