I am trying to build a smart-car solution and have the following situation:
Device 1 and Device 2 are both Raspberry Pi OS devices.
Device 1 captures an image using the camera attached to it; Device 2 discovers Device 1 and should be able to view the image that Device 1 captured.
Given that both devices run Raspberry Pi OS:
1. On which device should I install Postman and its associated collection and environment?
I tried to install Postman on one of the Raspberry Pi OS devices, but got an error as shown below.
2. Since the error mentioned a support/stability issue, is there another Pi image that is compatible with Postman?
Please check Postman's official documentation for their recommendation on which image is stable and supported.
Related
I've done some research on this and have already tried using IP Webcam and gstreamer to pipe the video stream from the IP-based source to /dev/videoX and read from that.
It works fine, but it isn't exactly what I am looking for. I have several old, otherwise unused phones that I would like to use on a robot as video sources from multiple angles for computer-vision tasks.
I've also read that DroidCam does this, but I have seen terrible reviews of the app, and especially of the USB-based part.
Most likely this robot will not be near any WiFi (or rather should not require it), and the phones will not have a network connection (no SIM card), so no USB tethering.
It seems like there should be a very simple way to do this. Is there a way to use libusb and maybe adb to capture the video stream from an Android phone connected to an Ubuntu box over USB?
I am doing a speech processing project with a Raspberry Pi 3 (running Raspbian) using a USB Microphone. I can see the Microphone show up as a selectable audio device for the Pi and it produces/captures sound perfectly.
I cannot figure out how to use this in my code; I have done a ton of research and found some tutorials, but nothing that makes sense to me. I come from more of a hardware background and have done something like this with microcontrollers, where I hook up an actual mic and convert the analog signal to digital on I/O pins. I am so frustrated with this that I am about to pump the data over from an Arduino using a mic and A/D conversion.
----- My questions -----
1) I want to know how to access a USB data stream or USB device in C or C++. My Linux abilities are not the best. Do I open a serial connection, or open a file stream on something like "/dev/USB/...."? Can you provide a code example?
2) Regardless of the fidelity of the USB mic input, I want to know how to access its input in C/C++. I have been looking at ALSA but cannot really understand a lot of its complexity. Is there something that gives me access to a raw input signal on the USB port that I can process (where I extract frequency, amplitude, etc.)?
I have already gone through a lot of the similar posts on here. I am really stuck on this one. I'm really looking to understand what is going on from the OS perspective; I'll use a library if one is given, but I want to understand how it works.
Thanks!
So an update:
So I did all of my code in C with some .sh scripts. I went ahead and figured out how to use the ALSA asoundlib (asound.h specifically). As of now, I am able to generate and record sound via a USB mic/headset with my Pi 3. Doing so is rather arduous, but there is a useful link (1) below.
For my project, I also found a CMU tutorial/repo for their PocketSphinx audio-recognition device at link (2), with a video at link (3). That project uses the ALSA asoundlib as well and was a great help for me. It takes a while to download, and you need to crawl through its .sh scripts to figure out its gcc linking. But I am now able to give audio cues which are interpreted by my Pi 3 and pushed to speaker output and GPIO pins.
Links:
(1) http://www.alsa-project.org/alsa-doc/alsa-lib/_2test_2pcm_8c-example.html
(2) https://wolfpaulus.com/embedded/raspberrypi2-sr/
(3) https://www.youtube.com/watch?v=5kp5qpwVh_8
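Following up on the above, here is a minimal capture sketch in the same spirit as the asoundlib example at link (1). The device name "plughw:1,0", the 16 kHz mono S16_LE format, and the buffer sizes are assumptions for a typical USB mic on a Pi; check arecord -l for the actual card/device numbers.

// capture.cpp - minimal ALSA capture from a USB microphone (sketch, not production code)
// Build with: g++ capture.cpp -o capture -lasound   (plain C-style code, gcc works too)
#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    snd_pcm_t *pcm;
    /* "plughw:1,0" assumes the USB mic is card 1, device 0 - check `arecord -l` */
    if (snd_pcm_open(&pcm, "plughw:1,0", SND_PCM_STREAM_CAPTURE, 0) < 0) {
        fprintf(stderr, "cannot open capture device\n");
        return 1;
    }
    /* 16-bit little-endian, mono, 16 kHz, interleaved reads, 0.5 s max latency */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           1, 16000, 1, 500000) < 0) {
        fprintf(stderr, "cannot configure capture device\n");
        return 1;
    }
    short buf[1024];
    for (int i = 0; i < 100; i++) {                  /* read ~100 chunks of samples */
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 1024);
        if (n < 0)
            n = snd_pcm_recover(pcm, (int)n, 0);     /* try to recover from overruns */
        if (n < 0)
            break;
        /* buf[0..n-1] now holds raw PCM samples to analyse (amplitude, FFT, etc.) */
    }
    snd_pcm_close(pcm);
    return 0;
}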
I'm developing a portable hardware/software application that uses 2 cameras in a stereo-vision configuration and processes the raw data to extract useful information.
For this reason I have a Raspberry Pi Compute Module kit and a Raspberry Pi 3.
The Compute Module kit will operate the two cameras.
The Pi 3 will run the code, as it has the computational power.
OpenCV (C++) is the preferred CV package.
As this is a portable application, internet based streaming is not a suitable option.
I've not had time to play around with the GPIO pins, or to find a method of streaming the two camera feeds from the Compute Module to the Pi 3.
How would you suggest I proceed with this? Has anyone performed such a project? What links can you provide to help me implement this?
This is for a dissertation project, and will hopefully help in the long run when developing as a full prototype.
Frame Size: 640x480
Frame Rate: 15 fps
The cameras are 5 cm apart from each other
Updated Answer
I have been doing some further tests on this. Using the iperf tool and my own simple TCP connection code as well, I connected two Raspberry Pis directly to each other over wired Ethernet and measured the TCP performance.
Using the standard, built-in 10/100 interface on a Raspberry Pi 2 and a Raspberry Pi 3, you can achieve 94 Mbit/s.
If, however, you put a TRENDnet USB3 Gigabit adaptor on each Pi and repeat the test, you can get 189 Mbit/s, and almost 200 Mbit/s if you set the MTU to 4088.
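The "simple TCP connection code" mentioned above isn't reproduced here, but a throughput test along the same lines could be as small as the following sketch. The IP address, port, and 100 MiB transfer size are arbitrary placeholders; on the receiving Pi you can just run nc -l 1234 > /dev/null (as in the original answer below).

// tcp_blast.cpp - connect to a receiver, send a fixed amount of data,
// and report the achieved throughput. Build: g++ -O2 tcp_blast.cpp -o tcp_blast
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    const char  *host  = "192.168.0.65";        // placeholder: the receiving Pi
    const int    port  = 1234;                  // placeholder port
    const size_t total = 100 * 1024 * 1024;     // send 100 MiB in total

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    if (connect(fd, (sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    std::vector<char> buf(64 * 1024, 'x');      // 64 KiB chunks of dummy data
    size_t sent = 0;
    auto t0 = std::chrono::steady_clock::now();
    while (sent < total) {
        ssize_t n = send(fd, buf.data(), buf.size(), 0);
        if (n <= 0) { perror("send"); break; }
        sent += (size_t)n;
    }
    auto t1 = std::chrono::steady_clock::now();
    double secs = std::chrono::duration<double>(t1 - t0).count();
    printf("Sent %zu bytes in %.2f s = %.1f Mbit/s\n",
           sent, secs, sent * 8.0 / secs / 1e6);
    close(fd);
    return 0;
}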
Original Answer
I made a quick test - not a full answer - but more than I can add as a comment or format correctly!
I set up 2 Raspberry Pi 2s with a wired Ethernet connection. I took a 640x480 picture on one as a JPEG - and it came out at 178,000 bytes.
Then, on the receiving Pi, I set up to receive 1,000 frames, like this:
#!/bin/bash
for ((i=0;i<1000;i++)); do
    echo $i
    nc -l 1234 > pic-${i}.jpg
done
On the sending Pi, I set up to transmit the picture 1,000 times:
for ((i=0;i<1000;i++)); do nc 192.168.0.65 1234 < pipic1.jpg; done
That took 34 seconds, so roughly 29 fps, but it stuttered a lot because it was writing to the filesystem (and therefore to the SD card). So, I removed the
nc -l 1234 > pic-${i}.jpg
and didn't write the data to disk, which is closer to what you will need, since you will be writing to the screen rather than to storage:
nc -l 1234 > /dev/null
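If you later swap nc out for your own receiving code, a minimal sketch of accepting one connection per picture and keeping the JPEG in memory (ready to decode and draw to the screen) might look like the following; the port and frame count simply mirror the nc example above.

// recv_frames.cpp - accept one connection per frame (like the nc loop above)
// and read the whole JPEG into memory instead of writing it to disk.
// Build: g++ -O2 recv_frames.cpp -o recv_frames
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <vector>

int main()
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port        = htons(1234);         // same port the nc example used
    bind(srv, (sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    std::vector<char> frame;
    for (int i = 0; i < 1000; i++) {            // receive 1,000 frames, as before
        int cli = accept(srv, nullptr, nullptr);
        if (cli < 0) break;

        frame.clear();
        char buf[65536];
        ssize_t n;
        while ((n = recv(cli, buf, sizeof(buf), 0)) > 0)
            frame.insert(frame.end(), buf, buf + n);   // whole JPEG ends up in RAM
        close(cli);

        printf("frame %d: %zu bytes\n", i, frame.size());
        // frame now holds the JPEG; decode/display it here instead of saving it
    }
    close(srv);
    return 0;
}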
I am going to connect:
a: a WiFi dongle to the Raspberry Pi, to make an ad hoc network
b: a Raspberry Pi camera to the Raspberry Pi
I want to connect to the RPi's ad hoc network from my PC. Once the PC and the RPi are on the same network, an application on the PC should be able to stream video from the camera and control the GPIO pins on the Raspberry Pi. My problem is that I don't know how to get started with the application.
Can anyone tell me what topics to read and what libraries to use for streaming? Anything that can help me at all will be highly appreciated, as I am completely in the dark right now. I am comfortable with programming in C++, so the application will naturally be in C++.
Thanks.
A possible starting point for the video streaming could be mjpg-streamer:
https://code.google.com/p/mjpg-streamer/
I used this without problem to stream video from the Pi so that I could view it in a browser on a different PC on the same network.
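If your C++ application on the PC needs the frames itself rather than a browser, OpenCV can usually open mjpg-streamer's HTTP output directly. Here is a small sketch; the IP address and port are placeholders, ?action=stream is mjpg-streamer's usual endpoint, and your OpenCV build needs FFmpeg/GStreamer support to read network streams.

// view_stream.cpp - read frames from an mjpg-streamer HTTP stream with OpenCV.
// Build: g++ view_stream.cpp -o view_stream `pkg-config --cflags --libs opencv4`
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    // Placeholder address: the Pi running mjpg-streamer on port 8080
    cv::VideoCapture cap("http://192.168.1.10:8080/?action=stream");
    if (!cap.isOpened()) {
        fprintf(stderr, "could not open stream\n");
        return 1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {          // grab frames as they arrive
        cv::imshow("Pi camera", frame);
        if (cv::waitKey(1) == 27)      // press Esc to quit
            break;
    }
    return 0;
}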
I did this before using mjpg-streamer and also made a video showing how to do it.
Check it here: Easiest way to stream Raspberry Pi Camera Module
Pretty new to oF and C++, but trying to open up communication between Flash (AS3) and a Canon DSLR. We've successfully done it using a socket server (using https://github.com/roxlu/ofxFlashCommunication), so AS3 can trigger the DSLR's shutter, get the image path, etc. But we want to turn the live-view preview (which is easy to view in the C++ app using Canon's SDK) into a webcam stream so that Flash can display a preview (via AS3's native Camera and Video classes) to the user. Unfortunately, passing the live-view image data through the socket server is not an option, as that requires converting the image to a byte array, passing it to Flash, and having Flash parse that back into an image. That method was way too slow (low FPS).
Current OS: Mac OSX 10.8.3
What is the best way to get the live view from C++ to Flash? Is there an easy-to-use library for oF/C++ that can help me turn a sequence of images (in real time) into a native hardware webcam stream?
There is a piece of software that can open a Canon DSLR and turn it into an AS3 native camera:
http://sparkosoft.com/
Unfortunately, it doesn't seem to respect the frame-rate settings on the camera. I don't know if there is a hardware limitation which wouldn't allow for 60 fps.
Maybe this will get you closer to where you want to be. If you're able to get it working at 60 fps, let me know:
http://www.monday8am.com/en/2012/05/29/canoneos_lib_extension/