How to build an IoT system with mobile app using AWS

My project is to develop an IoT system that uses sensors to collect data, which can be monitored by the user through a mobile app. I want to use AWS for this project, but since I am a beginner, I am confused about where to start. Do you have some tips, or some chronological steps I should learn and follow to be able to create this project?

Not a criticism, but an observation: if you don't know the conceptual steps to achieve this, understand those before you jump into any technology. If you couldn't build this right now without AWS, jumping in with AWS is going to make your life 100x harder.
Break it down into key challenges.
IoT Sensors. Start with a single sensor. Build a POC with a Raspberry Pi that sends data from the device to www.example.com
Build a listening service on your localhost, i.e. HelloWorldIoTDevice simply responds with a 200 OK; don't worry about the data and payloads right now (see the sketch after this list)
Save this to a database that simply records a 200 OK message every time a successful message is received from the IoT device
Build a HelloWorldWebApp that reads this data from the database
Build an API that reads the data from the database
Build a HelloWorldMobileApp that reads data from the API
Build it in full - Add payloads, add authentication and authorisation, get a POC published on the web so it works end to end
Productionise it
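To make the listener/database/API steps concrete, here is a minimal sketch of the listening service in Python. Flask, SQLite, and all the route and table names are my own assumptions for illustration; the steps above don't prescribe a stack, and any HTTP framework would do.

    # Hypothetical minimal listening service: accepts POSTs from the IoT
    # device, records each successful message in SQLite, and exposes a read
    # API for the web/mobile apps. All names are illustrative.
    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)
    DB = "hello_iot.db"

    def init_db():
        with sqlite3.connect(DB) as conn:
            conn.execute("CREATE TABLE IF NOT EXISTS messages ("
                         "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                         "received_at TEXT DEFAULT CURRENT_TIMESTAMP, "
                         "status TEXT)")

    @app.route("/ingest", methods=["POST"])
    def ingest():
        # Ignore the payload for now -- just record that a message arrived.
        with sqlite3.connect(DB) as conn:
            conn.execute("INSERT INTO messages (status) VALUES ('200 OK')")
        return "OK", 200

    @app.route("/messages", methods=["GET"])
    def messages():
        # The HelloWorldWebApp / HelloWorldMobileApp read from this endpoint.
        with sqlite3.connect(DB) as conn:
            rows = conn.execute("SELECT id, received_at, status FROM messages").fetchall()
        return jsonify([{"id": r[0], "received_at": r[1], "status": r[2]} for r in rows])

    if __name__ == "__main__":
        init_db()
        app.run(port=5000)

On the Raspberry Pi side, the sender can then be as simple as requests.post("http://<your-host>:5000/ingest"), and ngrok (linked below) can expose port 5000 so the device can reach your localhost from the internet.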
Then you need to get actual data flowing. Conceptually this is relatively simple, but in reality it requires an in-depth understanding of technology layers across the entire stack, which is an enormous learning curve if you are a beginner.
Take a look at ngrok to help with building these types of POCs: https://www.contradodigital.com/2016/04/09/access-localhost-internet/
Hope that provides a bit of guidance. Take one step at a time.

Related

AWS IoT Device Onboarding

I'm working on a learning project for IoT with AWS IoT Things and ESP32 using Arduino/C (no micro-python). While I have shadows and messages working well, the part I'm not sure about is the best approach to onboard new devices.
Currently the onboarding process is:
1. I create the Thing in the AWS Console
2. I create the certs
3. I save the certs to my laptop
4. I copy the cert contents into Shadow.h, then upload the sketch to the ESP32
This feels incredibly manual :(
Hypothetically how would a reseller of ESP32-based IoT devices automate the onboarding process? How can the Things and certs be automated?
Many thanks in advance
Ant
We're talking about provisioning devices in the cloud.
If you (or your organization) are adding your own devices to your own cloud, then it's quite easy to automate. Steps 1 and 2 are the cloud-side part of provisioning: just install the required SDKs and write a script in your favourite supported scripting language to do the dirty work (a rough sketch follows below).
For steps 3 and 4 you just use the device's own flash to store the device certificates. Espressif has a useful non-volatile storage system called NVS; it's fairly easy to use and supports flash encryption (this bit could be more elegant, but it works). You can use their NVS Partition Generator to pre-create the required storage with the device's certs in it, then flash it into the device when setting it up. Device-side provisioning can be scripted together with cloud-side provisioning, so you can do the whole thing in a single step. The Arduino IDE is not the tool to use for this, though: you just need the final program binaries, and everything else you need to create on your own.
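As a rough sketch of what that cloud-side script might look like using the AWS SDK for Python (boto3): the thing name, policy name, and file layout here are my own assumptions, and a real pipeline would feed the saved certs into the NVS Partition Generator rather than into a header file.

    # Hypothetical cloud-side provisioning script (pip install boto3).
    # Assumes AWS credentials are configured and an IoT policy named
    # "esp32-device-policy" already exists; all names are illustrative.
    import boto3

    iot = boto3.client("iot")

    def provision_device(thing_name):
        # Step 1: create the Thing.
        iot.create_thing(thingName=thing_name)

        # Step 2: create and activate a certificate with a fresh key pair,
        # then bind it to the Thing and to an access policy.
        cert = iot.create_keys_and_certificate(setAsActive=True)
        iot.attach_thing_principal(thingName=thing_name,
                                   principal=cert["certificateArn"])
        iot.attach_policy(policyName="esp32-device-policy",
                          target=cert["certificateArn"])

        # Steps 3 & 4: save the certs; from here they could go into an NVS
        # partition image and be flashed, instead of pasted into Shadow.h.
        with open(thing_name + ".cert.pem", "w") as f:
            f.write(cert["certificatePem"])
        with open(thing_name + ".private.key", "w") as f:
            f.write(cert["keyPair"]["PrivateKey"])

    provision_device("my-esp32-001")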
If you're talking about a third party taking your device and provisioning it in their cloud, this is a bit more difficult (but not impossible). Presumably they need to do steps 1 & 2 on their own and you need to give them a way to configure their AWS endpoints and certificates on the device. So you need to build some interface which allows them to do it.

What are my technical requirements?

My goal is to build an application that can dynamically monitor my stock portfolio (stock options, actually). So I am building my business logic in a TDD approach using C# on .NET Core. I haven't thought much about the interface, because the following is true:
1) My broker is ETrade, so I will have to authenticate and use their API for my position information
2) I need this application to run from 9:30 AM - 4:00 PM EST Monday - Friday
As I am nearing completion of my first MVP's business logic, I am now starting to think about where I will deploy the final solution, and hence I am seeking feedback from the community.
I have heard of, but not worked much with, microservices (AWS, Azure, etc.), so I'm not sure if that is the direction I want to look. (Also, I have a tight timeline and don't want to have to learn too much to get this thing deployed, but I am open to any solution.) Excluding microservices and the cloud, I have considered the following:
a) "I could run the program from a Console application"?
(answer) I would have to either:
(a) get a dedicated server to do so, or
(b) try to ensure that I can leave a laptop running at home or something, blah, blah
(conclusion) Both are plausible decisions.
b) "I could run the program as a Windows Service"
(answer) I would have to either
(a) (same as above)
(b) (same as above)
(conclusion) Both are plausible decisions.
c) "I could run the program as a Web Site"
(answer) I would have to either
(a) (same as above)
(b) (same as above)
(conclusion) Both are plausible decisions.
c) "I could investigate The Cloud (Microservices)"
(answer) ???
(conclusion)
So, in closing: given the up-time requirement between those hours, I would also like to be able to access the app from any internet browser. I have logic that needs to ping various endpoints roughly every minute during market hours, so I am not sure how I would handle this with a web application, because if (by chance) the browser is closed, the web application stops running and thus would defeat my needs! Does the cloud help here? Maybe I should just use a Windows Service and make my logs accessible on the web. Or I could deploy the TraderBot as a Windows Service and also build a web application to receive real-time intel from the TraderBot Windows Service / logs / and-or DB? Not sure, but I appreciate any knee-jerk responses you all have!
I really like to connect pieces of tech to solve complex problems. Though it's not that complex.
Solution 1: Cloud-based, specifically on AWS
Use AWS Lambda (serverless compute) to hit the API to get prices or whatever info you are seeking, and then store it in DynamoDB (a NoSQL DB). Use CloudWatch Rules (a serverless cron job) to invoke your Lambda periodically (a sketch of this follows below).
Then build an SPA (single-page application) to view the values stored in DynamoDB. It can also be a static website hosted on S3.
Or
A mobile app can also serve the purpose of viewing the data from DynamoDB.
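A minimal sketch of the Lambda from Solution 1, in Python. The quote endpoint, JSON shape, and table schema are all my own assumptions for illustration; the once-a-minute trigger itself would live in a CloudWatch Events rule (e.g. a rate(1 minute) schedule, narrowed to market hours with a cron expression).

    # Hypothetical Lambda handler: fetch a quote and persist it to DynamoDB.
    # Assumes a table named "PortfolioQuotes" with partition key "symbol" and
    # sort key "timestamp"; the quote URL and JSON shape are illustrative.
    import json
    import time
    import urllib.request

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("PortfolioQuotes")

    def handler(event, context):
        # Invoked on a schedule by CloudWatch Events, not by a browser, so
        # nothing stops running when the user closes their web page.
        with urllib.request.urlopen("https://example.com/api/quote?symbol=AAPL") as resp:
            quote = json.loads(resp.read())

        table.put_item(Item={
            "symbol": quote["symbol"],
            "timestamp": int(time.time()),
            "price": str(quote["price"]),  # strings avoid DynamoDB float restrictions
        })
        return {"statusCode": 200}

The SPA or mobile app then only ever reads from DynamoDB, so it doesn't matter whether anyone has the dashboard open while the data is being collected.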
Solution 2: Mobile-Only
Why not build the app purely for mobile, i.e. iOS or Android? Check here: I've coded an app just to track the price of different alt-coins on different exchanges.
With the mobile-only app, your app will fetch the prices periodically (using the alarms API in the case of Android) and store them in its local database (SQLite in the case of Android), and then you can open the app any time to see the latest values.
More solutions can be thought of, but I think the above are a good approach to solving this problem, rather than buying a VPS or keeping your laptop running 24x7. #ThinkCloud
PS: Initial thoughts only; ask more to enhance the solution... :)

Getting data from a locally running Java app to a Google Cloud app and back

I wanted to dive into the world of distributed systems, cloud computing, IoT, etc., and I gotta be honest, I imagined everything being a little more intuitive than it finally turned out to be.
I had a tiny testing architecture in mind that I'd like to set up with Google Cloud and their services, but I am kinda stuck since I can't get my head around some concepts.
What I basically wanted to do (as a first step) is write a simple Java application that would run locally on my computer. This application should just generate random numbers and send those numbers somehow to the Google cloud. On the cloud I wanted to define another Java application that would manipulate those random numbers in some kind of way (it doesn't matter how, actually). Afterwards, the output should somehow get back to me, of course. And actually, at the moment, I don't even care how exactly. It could come back to my local app (with some kind of listener, would that be possible?). But it could also simply be stored somewhere on Google Cloud? Or maybe uploaded to my Google Drive?
I guess you already noticed that - at some points - I don't even know what I want exactly, since I'm not sure of what is possible and what is not.
Could you provide me some help to get this set up?
The most important questions for me right now are:
Do I need to use a pub/sub system, where my generated numbers are sent to, and which then forwards them to the cloud app that transforms my data?
How do I get my data from the local app to the cloud services?
Would my data-transforming app run on Google Dataflow?
Above I wrote "as a first step" because later I would also like to send config files (for example in JSON or XML format) to the cloud, and the cloud application should transform those config files. If I get the first scenario running, I guess this would also be no problem, right?
Those are just a few of the questions that are on my mind currently. The most important ones I guess.
It would be a big help. Sorry if the questions are not very precise, but I really need some pointers in the right direction.
Thank you in advance!
I think it would be good to read up on some of the technologies you mention here:
Google Cloud Pub/Sub: Pub/Sub enables you to publish messages to a topic and consume them in another place in the (Google) cloud. You can see some different examples of publishers and consumers in the link. In your case, you could for example write a Java application that writes random numbers to a Pub/Sub topic, where they will sit for up to 7 days waiting to be consumed by another component (for example, Google Cloud Dataflow). To get started developing, you can find the SDKs here (there is a Java SDK).
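For illustration, the local publisher might look like the sketch below. It's in Python rather than Java purely for brevity (the Java SDK mirrors these calls), and the project and topic names are placeholders.

    # Hypothetical publisher (pip install google-cloud-pubsub).
    import random
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "random-numbers")

    for _ in range(10):
        number = random.randint(0, 100)
        # Pub/Sub payloads are bytes; publish() returns a future whose
        # result is the server-assigned message ID.
        future = publisher.publish(topic_path, data=str(number).encode("utf-8"))
        print("Published", number, "as message", future.result())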
Google Cloud Dataflow is a managed service that runs Apache Beam pipelines to process your data at scale. You can learn about the different concepts here and get started designing your pipeline here. I suggest taking a look at some examples first, though, which will make it easier to grasp what is actually going on. Beam has a Pub/Sub connector, so in your pipeline you will be able to read from the topic you created before. In Dataflow you can, for example, multiply all your random numbers and write them to a certain sink (for example Google Cloud Storage, or even BigQuery or Pub/Sub again).
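A sketch of such a pipeline with the Beam Python SDK (again, Python only for brevity; the Java SDK is equivalent): it reads the numbers from Pub/Sub, transforms them, and publishes the results to a second topic. All resource names are placeholders, and actually running it on Dataflow would additionally require the DataflowRunner plus a project and region in the options.

    # Hypothetical streaming pipeline (pip install "apache-beam[gcp]").
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as p:
        (p
         | "Read" >> beam.io.ReadFromPubSub(
               topic="projects/my-project/topics/random-numbers")
         | "Parse" >> beam.Map(lambda msg: int(msg.decode("utf-8")))
         | "Transform" >> beam.Map(lambda n: n * 2)  # any manipulation goes here
         | "Encode" >> beam.Map(lambda n: str(n).encode("utf-8"))
         | "Write" >> beam.io.WriteToPubSub(
               topic="projects/my-project/topics/transformed-numbers"))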
Google Cloud Storage is object storage where you can put files, for example the output of your Dataflow pipeline. You will be able to manually download the files using the Cloud Console UI, or you can use one of the SDKs to download the output programmatically.
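The programmatic download could be as small as this sketch with the google-cloud-storage client; the bucket and object names are placeholders.

    # Hypothetical download of pipeline output (pip install google-cloud-storage).
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-pipeline-output")
    blob = bucket.blob("results/output-00000-of-00001.txt")
    blob.download_to_filename("local-output.txt")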
Hope this gives you an overview and some pointers to start. Whenever you are ready and have a more concrete use case in mind, you can start looking at some more components.

Suggestions for real-time charting with SignalR in C# application

I am building a dashboard that will monitor production data, and I am able to access this data via web services. The data changes every minute, so I would like to have a page with 4 charts/gauges (one for each of the systems I am monitoring) that get the new data pushed to them after each successive web service call.
Can anyone suggest a good charting kit that would work well with C#? And would SignalR be a good fit here, do you think? I have read that Node.js and Socket.IO are options, but I have no experience with Node yet. I would like something along the lines of DevExpress. Perhaps jQuery and something on the front end would work here as well? Thanks!
For such a dashboard, SignalR is definitely a good fit if you already work with .NET and ASP.NET. For a web dashboard in particular, a good graphics library is Raphael, which is open source and pure JavaScript. It's simple and straight to the point, but often less is more. You can build interesting kinds of charts with it.
This project may be interesting to you as a sample of those two technologies together. If you press the skulls to raise errors, they will be triggered on a backend simulator and pushed to the dashboard using SignalR. You will notice a pie chart there, which is done using Raphael and updates live when new errors are received.
The code of the project is here, it's a bit complex but maybe you want to have a look anyway. It's based on SignalR 1.x, but overall concepts are still the same.

What are the pros and cons of developing a web app using Parse vs. AWS?

From what I know, Parse offers convenient communication stacks for various platforms such as iOS, so it is easy to build clients that use your web app.
But Parse also seems to be tightly integrated with Facebook. If you were to build a web app that does not need Facebook, but that may integrate with Facebook in the long term, is Parse the clear winner over deploying directly to AWS, or are there important disadvantages to consider?
As far as I understand their page, Parse is a PaaS (platform as a service) provider like Heroku and others, while AWS is an IaaS (infrastructure as a service) provider.
Pros for PaaS:
They take care of the infrastructure
You build your app on an existing platform
For the start you don't need "ops-guys" as you don't do ops
You can use their knowledge and prebuilt tools to your advantage
Pros for IaaS:
You have full control over the underlying infrastructure
You can start with a greenfield and build whatever you want
You can use tools like Puppet / Chef / ... to control your servers
You don't have to pay for the additional stuff you get when using PaaS (but you have to pay your people for it instead)
So there is no winner of this "battle"; you have to decide whether you want to use prebuilt tooling and give up some independence for it, or whether you want absolute control over everything (nearly, as you can't touch the hardware) and invest time and manpower into building your own tooling.
"Better, Faster, Cheaper.."
If you are pursuing a mobile-first strategy, Parse is a great tool for bootstrapping a mature, full web presence from nothing more than an original beta app.
I don't have direct experience with AWS.
I have used Heroku/Parse to integrate (very quickly) a standalone mobile app with a back end that needs to cover the following:
DB/persistence/NoSQL
Workflow - async tasks
REST API interface over HTTP
Once the mobile app existed with only stubbed local data, Parse allowed a single engineer to build out ALL the infrastructure mentioned above very quickly, taking the app from single-user to multi-user with a full DB and workflow that backs client-side events with considerable server-side and cloud-side business logic and process. Scaling-related startup stuff that used to take weeks took only days.
The compression (time & money) when scaling up an app stack is really something. The Parse API did almost everything that I needed, with one small exception (remuxing UGC media).
Personally, I abandoned the Parse Android SDK in favor of the more robust REST API (threading on the client side and heavy HTTP activity).
Developers used to curl/REST dev stacks will take to Parse.